Unit 4_Concurrency Control


Concurrency control

From Wikipedia, the free encyclopedia



In computer science, especially in the fields of computer programming (see also concurrent programming, parallel programming), operating systems (see also parallel computing), multiprocessors, and databases, concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible.

Computer systems, both software and hardware, consist of modules, or components. Each component is designed to operate correctly, i.e., to obey or meet certain consistency rules. When components that operate concurrently interact by messaging or by sharing accessed data (in memory or storage), a certain component's consistency may be violated by another component. The general area of concurrency control provides rules, methods, design methodologies, and theories to maintain the consistency of components operating concurrently while interacting, and thus the consistency and correctness of the whole system. Introducing concurrency control into a system means applying operation constraints which typically result in some performance reduction. Operation consistency and correctness should be achieved while still maintaining reasonable operation efficiency.

Contents

• 1 Concurrency control in databases
  o 1.1 Transaction ACID rules
  o 1.2 Why is concurrency control needed?
  o 1.3 Concurrency control mechanisms
  o 1.4 See also
  o 1.5 References
• 2 Concurrency control in operating systems
  o 2.1 See also
  o 2.2 References

Concurrency control in databases


Concurrency control in Database management systems (DBMS; Bernstein et al. 1987, Weikum and Vossen 2001), other transactional objects, and related distributed applications (e.g., Grid computing and Cloud computing) ensures that database transactions are performed concurrently without violating the data integrity of the respective databases. Thus concurrency control is an essential element for correctness in any system where two or more database transactions can access the same data concurrently, e.g., virtually any general-purpose database system. A well established concurrency control theory exists for database systems: serializability theory, which allows concurrency control methods and mechanisms to be designed and analyzed effectively.

To ensure correctness, a DBMS usually guarantees that only serializable transaction schedules are generated, unless serializability is intentionally relaxed. For maintaining correctness in cases of failed (aborted) transactions (which can always happen for many reasons), schedules also need to have the recoverability property. A DBMS also guarantees that no effect of committed transactions is lost, and no effect of aborted (rolled back) transactions remains in the related database. Overall transaction characterization is usually summarized by the following ACID rules:

Transaction ACID rules

Main article: ACID

Every database transaction must obey the following rules:

• Atomicity - Either the effects of all of a transaction's operations remain, or none of them do, when the transaction is completed (committed or aborted, respectively). In other words, to the outside world a committed transaction appears to be indivisible, atomic. A transaction is an indivisible unit of work that is either performed in its entirety or not performed at all ("all or nothing" semantics).
• Consistency - Every transaction must leave the database in a consistent state, i.e., maintain the predetermined integrity rules of the database (constraints upon and among the database's objects). A transaction must transform a database from one consistent state to another consistent state.
• Isolation - Transactions cannot interfere with each other. Moreover, an incomplete transaction is not visible to another transaction. Providing isolation is the main goal of concurrency control.
• Durability - Effects of successful (committed) transactions must persist through crashes. (A minimal code sketch of atomicity and durability follows this list.)
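
To make the atomicity and durability rules concrete, here is a minimal sketch in Python using the standard sqlite3 module; the accounts table, the names, and the transfer amount are invented for this illustration and are not part of the article.

```python
import sqlite3

conn = sqlite3.connect("bank.db")
conn.execute("CREATE TABLE IF NOT EXISTS accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT OR IGNORE INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 100)])
conn.commit()

try:
    # "with conn" wraps the two updates in one transaction: it commits if the
    # block completes and rolls back if it raises, so either both updates take
    # effect or neither does (atomicity); a committed transfer survives a crash
    # of the program (durability).
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except sqlite3.Error:
    pass  # the automatic rollback has already restored the previous consistent state
finally:
    conn.close()
```

The `with conn:` block is the sqlite3 module's documented transaction idiom; a partial transfer is never made visible to other connections.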

Why is concurrency control needed?

If transactions are executed serially, i.e., sequentially with no overlap in time, no transaction concurrency exists. However, if concurrent transactions with interleaving operations are allowed in an uncontrolled manner, unexpected results may occur. Here are some typical examples:

1. Lost update problem: A second transaction writes a second value of a data-item (datum) on top of a first value written by a first concurrent transaction, and the first value is lost to other transactions running concurrently which need, by their precedence, to read the first value. The transactions that have read the wrong value end with incorrect results. (A small code sketch of this problem follows the list.)
2. The dirty read problem: Transactions read a value written by a transaction that is later aborted. This value disappears from the database upon abort, and should not have been read by any transaction ("dirty read"). The reading transactions end with incorrect results.
3. The incorrect summary problem: While one transaction takes a summary over the values of a repeated data-item, a second transaction updates some instances of that data-item. The resulting summary does not reflect a correct result for any (usually needed for correctness) precedence order between the two transactions (if one is executed before the other), but rather some random result, depending on the timing of the updates, and on whether a certain update result has been included in the summary or not.
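
A minimal sketch of the lost update problem, using plain Python threads as stand-in "transactions" on a shared in-memory balance; the deposit functions, amounts, and sleep are invented for this illustration.

```python
import threading
import time

balance = 100
lock = threading.Lock()

def deposit_unsafe(amount):
    # Uncontrolled read-modify-write: both "transactions" may read the same old
    # balance, so one of the two deposits is silently overwritten (lost update).
    global balance
    current = balance
    time.sleep(0.01)          # widen the window in which the other write can land
    balance = current + amount

def deposit_locked(amount):
    # Pessimistic control: the lock blocks the conflicting operation until it is safe.
    global balance
    with lock:
        current = balance
        time.sleep(0.01)
        balance = current + amount

def run(worker):
    global balance
    balance = 100
    threads = [threading.Thread(target=worker, args=(10,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return balance

print(run(deposit_unsafe))    # likely 110: one update was lost
print(run(deposit_locked))    # 120: the lock serializes the two updates
```

The sleep only widens the race window so the interleaving is easy to reproduce; the same loss can occur without it.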

Concurrency control mechanisms

The main categories of concurrency control mechanisms are:

• Optimistic - Delay the checking of whether a transaction meets the isolation and other integrity rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort the transaction if the desired rules are violated (see the sketch after this list).
• Pessimistic - Block an operation of a transaction if it may cause violation of the rules, until the possibility of violation disappears.
• Semi-optimistic - Block operations in some situations, if they may cause violation of some rules, and do not block in other situations, while delaying the rule checking to the transaction's end, as done in the optimistic approach.
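
As an illustration of the optimistic category, here is a minimal sketch under assumed names (VersionedItem, try_commit, and the retry loop are invented for this example): reads are never blocked, and the rule check happens only at commit, which fails if another transaction committed a newer version in the meantime.

```python
import threading

class VersionedItem:
    """Single data item whose writes are validated only at commit time."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._commit_lock = threading.Lock()   # protects only the short commit step

    def read(self):
        # Reads are never blocked: return the value plus the version it was read at.
        return self.value, self.version

    def try_commit(self, new_value, read_version):
        with self._commit_lock:
            if self.version != read_version:   # another transaction committed first
                return False                   # abort; the caller decides whether to retry
            self.value = new_value
            self.version += 1
            return True

item = VersionedItem(100)

def add(amount):
    # A tiny "transaction" that retries until its commit validates.
    while True:
        value, version = item.read()
        if item.try_commit(value + amount, version):
            return

threads = [threading.Thread(target=add, args=(10,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(item.value)   # 120: the conflicting transaction aborted and retried
```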

Many methods for concurrency control exist. Most of them can be implemented within either main category above. The major methods, which each have many variants, and in some cases may overlap or be combined, include:

• Two phase locking (2PL) (a minimal sketch follows this list)
• Serialization (or Serializability, or Conflict, or Precedence) graph checking
• Timestamp ordering (TO)
• Commitment ordering (or Commit ordering; CO)
• Multiversion concurrency control (MVCC)
• Index concurrency control (for synchronizing access operations to indexes, rather than to user data)
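
A minimal sketch of two phase locking over in-memory items; the LockManager and Transaction classes and the item names are invented for this illustration. Every lock is acquired before any is released (growing phase), and all locks are released at commit (shrinking phase, here in its strict form).

```python
import threading

class LockManager:
    """Maps item names to locks; the table itself is protected by its own lock."""
    def __init__(self):
        self._locks = {}
        self._table_lock = threading.Lock()

    def lock_for(self, item):
        with self._table_lock:
            return self._locks.setdefault(item, threading.Lock())

class Transaction:
    def __init__(self, manager):
        self._manager = manager
        self._held = []

    def acquire(self, item):
        # Growing phase: locks may only be taken, never released.
        lock = self._manager.lock_for(item)
        lock.acquire()
        self._held.append(lock)

    def commit(self):
        # Shrinking phase (strict 2PL): release every lock at the end.
        for lock in reversed(self._held):
            lock.release()
        self._held.clear()

manager = LockManager()
data = {"x": 1, "y": 2}

def transfer():
    txn = Transaction(manager)
    txn.acquire("x")              # all locks are taken before any is released
    txn.acquire("y")
    data["x"] -= 1
    data["y"] += 1
    txn.commit()                  # no lock is taken after this point

transfer()
print(data)
```

2PL guarantees conflict serializability, but it does not by itself prevent deadlocks; two transactions requesting the same locks in opposite orders still need separate deadlock detection or prevention.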

A common major goal of concurrency control is generating schedules with the serializability property. Serializability of a schedule means equivalence to some serial schedule with the same transactions (i.e., one in which transactions are sequential with no overlap in time). Serializability is considered the highest level of isolation between database transactions, and the major correctness criterion for concurrent transactions. In some cases relaxed forms of serializability are allowed for better performance, if the application's correctness is not violated by the relaxation.

Almost all implemented concurrency control mechanisms achieve serializability by providing conflict serializability, a broad special case of serializability which can be implemented effectively.
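
A small sketch of serialization (precedence) graph checking, with a schedule represented as invented (transaction, operation, item) tuples: two steps conflict when they come from different transactions, touch the same item, and at least one is a write; the schedule is conflict serializable exactly when the resulting graph is acyclic.

```python
def precedence_graph(schedule):
    # An edge (t1, t2) means some step of t1 precedes a conflicting step of t2.
    edges = set()
    for i, (t1, op1, item1) in enumerate(schedule):
        for t2, op2, item2 in schedule[i + 1:]:
            if t1 != t2 and item1 == item2 and "w" in (op1, op2):
                edges.add((t1, t2))
    return edges

def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)

    def reachable(start, target, seen):
        for nxt in graph.get(start, ()):
            if nxt == target or (nxt not in seen and reachable(nxt, target, seen | {nxt})):
                return True
        return False

    return any(reachable(node, node, {node}) for node in graph)

# Lost-update-style interleaving: T1 and T2 both read x, then both write x.
schedule = [("T1", "r", "x"), ("T2", "r", "x"), ("T1", "w", "x"), ("T2", "w", "x")]
edges = precedence_graph(schedule)
print(edges)                     # contains both ('T1', 'T2') and ('T2', 'T1')
print(not has_cycle(edges))      # False: the schedule is not conflict serializable
```

Because both edges T1 to T2 and T2 to T1 appear, the graph has a cycle, so no serial order of T1 and T2 is equivalent to this schedule.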

Concurrency control also ensures the Recoverability property for maintaining correctness
in cases of aborted transactions (which can always happen for many reasons).
Recoverability means that committed transactions have not read data written by aborted
transactions. None of the mechanisms above provides recoverability in its general form,
and special considerations and mechanism enhancements are needed to support
recoverability. A commonly utilized special case of recoverability is Strictness, which
allows efficient database recovery from failure, but excludes optimistic implementations
(the term "semi-optimistic" appeared for the first time in conjunction with Strict CO
(SCO)).

As database systems become distributed, or cooperate in distributed environments (e.g., in Grid computing and Cloud computing), a need exists for distributed concurrency control mechanisms. Achieving Distributed serializability and Global serializability effectively poses special challenges typically not met by most local serializability mechanisms, especially due to the need for costly distribution of concurrency control information. The Commitment ordering (CO; Raz 1992) technique provides a general effective solution for both distributed and global serializability, also in a heterogeneous environment with different concurrency control mechanisms.
