
RSR RUNGTA COLLEGE OF ENGINEERING AND TECHNOLOGY,

BHILAI

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Subject Notes
Subject Name: ADBMS

Course/Semester: MCA-I

UNIT-1

Nested Transactions

A nested transaction is used to provide a transactional guarantee for a subset of operations performed within the scope of a larger transaction. Doing this allows you to commit and abort the subset of operations independently of the larger transaction.

The rules to the usage of a nested transaction are as follows:

• While the nested (child) transaction is active, the parent transaction may not perform any operations other than to commit or abort, or to create more child transactions.
• Committing a nested transaction has no effect on the state of the parent transaction. The parent transaction is still uncommitted. However, the parent transaction can now see any modifications made by the child transaction. Those modifications, of course, are still hidden from all other transactions until the parent also commits.
• Likewise, aborting the nested transaction has no effect on the state of the parent transaction. The only result of the abort is that neither the parent nor any other transactions will see any of the container modifications performed under the protection of the nested transaction.
• If the parent transaction commits or aborts while it has active children, the child transactions are resolved in the same way as the parent. That is, if the parent aborts, then the child transactions abort as well. If the parent commits, then whatever modifications have been performed by the child transactions are also committed.
• The locks held by a nested transaction are not released when that transaction commits. Rather, they are now held by the parent transaction until such a time as that parent commits.
• Any container modifications performed by the nested transaction are not visible outside of the larger encompassing transaction until such a time as that parent transaction is committed.
• The depth of nesting that you can achieve with nested transactions is limited only by memory.

To create a nested transaction, use the XmlManager::createTransaction method, but pass it the parent's internal Berkeley DB Transaction object as an argument. For example:

// parent transaction
XmlTransaction parentTxn = myManager.createTransaction();
// child transaction
XmlTransaction childTxn =
    myManager.createTransaction(parentTxn.getTransaction(), null);
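
Continuing the example, the child transaction can then be committed or aborted independently of the parent. The lines below are only a sketch that reuses the variables above and omits error handling:

// ... work performed under childTxn goes here ...

// Committing the child makes its changes visible to the parent only;
// other transactions cannot see them until the parent also commits.
childTxn.commit();

// Calling childTxn.abort() instead would discard the child's changes
// without changing the state of parentTxn.

// Nothing becomes visible outside the encompassing transaction until
// the parent itself commits.
parentTxn.commit();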

ACID Properties in DBMS

A DBMS must keep data integrated when any changes are made to it, because if the integrity of the data is affected, the whole dataset can become disturbed and corrupted. Therefore, to maintain the integrity of the data, four properties are defined in the database management system, known as the ACID properties. The ACID properties are meant for transactions, which go through different groups of tasks, and that is where the role of the ACID properties comes in.

In this section, we will learn and understand the ACID properties: what these properties stand for and what each property is used for. We will also understand the ACID properties with the help of some examples.

ACID Properties

The term ACID is an acronym for Atomicity, Consistency, Isolation, and Durability:


1) Atomicity

The term atomicity means that the data remains atomic: if any operation is performed on the data, it should either be executed completely or not executed at all. The operation should not stop in between or execute partially. When operations are executed as part of a transaction, the transaction should be executed completely, not partially.

Example: Remo has account A containing $30, from which he wishes to send $10 to Sheero's account B. Account B already holds $100, so when $10 is transferred to it, the sum should become $110. Two operations take place: the $10 that Remo wants to transfer is debited from his account A, and the same amount is credited to account B, i.e., Sheero's account. Now suppose the first operation, the debit, executes successfully, but the credit operation fails. Remo's account A is left with $20, while Sheero's account B still holds $100, as it did before.

Because the credit failed after the debit succeeded, the amount in account B is still $100, so the transaction is not atomic. If both the debit and the credit operations complete successfully, the transaction is atomic. When atomicity is lost, money simply disappears, which is a huge issue in banking systems; this is why atomicity is a main focus in such systems.
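
In code, this all-or-nothing behaviour is obtained by placing both operations inside a single transaction. The following is a minimal JDBC sketch of the transfer, assuming a hypothetical accounts(id, balance) table; the table and method names are illustrative and not part of any particular system:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Atomic transfer sketch: debit account A and credit account B inside one
// transaction, so either both updates happen or neither does.
static void transfer(Connection conn, String from, String to, int amount) throws SQLException {
    conn.setAutoCommit(false);                      // group both updates into one transaction
    try (PreparedStatement debit = conn.prepareStatement(
             "UPDATE accounts SET balance = balance - ? WHERE id = ?");
         PreparedStatement credit = conn.prepareStatement(
             "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
        debit.setInt(1, amount);
        debit.setString(2, from);
        debit.executeUpdate();                      // e.g. A: 30 -> 20
        credit.setInt(1, amount);
        credit.setString(2, to);
        credit.executeUpdate();                     // e.g. B: 100 -> 110
        conn.commit();                              // both changes become permanent together
    } catch (SQLException e) {
        conn.rollback();                            // a failed credit also undoes the debit
        throw e;
    }
}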

2) Consistency

The word consistency means that the database must always remain in a valid state. In DBMS, the integrity of the data must be maintained: if a change is made to the database, the integrity constraints must still hold afterwards. In the case of transactions, this means the database must be consistent both before and after the transaction, and the data should always be correct.

Example: Consider three accounts, A, B, and C, where A is making a transaction T to both B and C, one after the other. Two operations take place, debit and credit. Account A first transfers $50 to account B; before the transaction, B reads A's balance as $300. After the successful transaction T, the amount available in B becomes $150. Now A transfers $20 to account C, and at that time the value of A read by C is $250, which is correct because the debit of $50 to B has already completed. The debit and credit operations from account A to C are then done successfully. The transaction completes and the values are read correctly, so the data is consistent. If, instead, the value read by C were still $300 after the debit to B, the data would be inconsistent, because the earlier debit would not be reflected.
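
One way to express this is as an invariant that must hold before and after every transaction: the total amount of money across A, B, and C never changes, only its distribution does. Below is a small hedged sketch of such a check, reusing the hypothetical accounts table from the atomicity example:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Consistency sketch: the total money across all accounts is an invariant
// that every transfer must preserve (the accounts table is hypothetical).
static long totalBalance(Connection conn) throws SQLException {
    try (Statement st = conn.createStatement();
         ResultSet rs = st.executeQuery("SELECT SUM(balance) FROM accounts")) {
        rs.next();
        return rs.getLong(1);       // same value before and after a correct transfer
    }
}

Reading the total before a transfer and again after it should give the same number; if it does not, consistency has been violated.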

3) Isolation

The term 'isolation' means separation. In DBMS, isolation is the property that concurrently executing operations must not affect one another's data. In short, an operation on one piece of data should behave as though it begins only after the other operation on that data has completed; if two operations are being performed on two different databases, they must not affect each other's values. In the case of transactions, when two or more transactions occur simultaneously, consistency must still be maintained: any changes made by a particular transaction are not visible to other transactions until that change is committed.

Example: If two operations are running concurrently on two different accounts, the values of both accounts should not get affected by each other; each value should remain as if the operations ran alone. For instance, account A may be making transactions T1 and T2 to accounts B and C respectively, but both execute independently without affecting each other. This is known as isolation.
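
In SQL databases the required degree of separation is requested through an isolation level. The sketch below uses JDBC's standard setTransactionIsolation call; the strongest level, SERIALIZABLE, makes concurrent transactions behave as if they ran one after another (which levels are actually supported depends on the DBMS):

import java.sql.Connection;
import java.sql.SQLException;

// Isolation sketch: run a unit of work at the SERIALIZABLE level so that
// concurrent transactions cannot observe each other's uncommitted changes.
static void runIsolated(Connection conn, Runnable work) throws SQLException {
    conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    conn.setAutoCommit(false);
    try {
        work.run();              // e.g. T1: debit account A, credit account B
        conn.commit();           // only now do other transactions see these changes
    } catch (RuntimeException e) {
        conn.rollback();         // discard partial changes on failure
        throw e;
    }
}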

4) Durability

Durability ensures permanency. In DBMS, the term durability guarantees that once an operation has executed successfully, its data becomes permanent in the database. The durability of the data should be such that even if the system fails or crashes, the database still survives. If committed data is nevertheless lost, it becomes the responsibility of the recovery manager to restore it and ensure the durability of the database. To make changes permanent, the COMMIT command must be issued every time we make changes.

Therefore, the ACID properties of a DBMS play a vital role in maintaining the consistency and availability of data in the database.

Transaction States in DBMS


A transaction passes through several states during its lifetime. These states describe the current status of the transaction and determine how its further processing will proceed; they govern the rules that decide the fate of the transaction, i.e., whether it will commit or abort.
Transactions also make use of a transaction log, a file maintained by the recovery-management component to record all the activities of the transaction. After the commit is complete, the transaction's log records can be removed.

These are the different transaction states:

1. Active State –
When the instructions of the transaction are being executed, the transaction is in the active state. If all the read and write operations are performed without any error, it goes to the "partially committed state"; if any instruction fails, it goes to the "failed state".

2. Partially Committed –
After completion of all the read and write operations, the changes are made in main memory or the local buffer. If the changes are then made permanent on the database, the state changes to the "committed state"; in case of failure it goes to the "failed state".

3. Failed State –
When any instruction of the transaction fails, or a failure occurs while making the changes permanent on the database, the transaction goes to the "failed state".

4. Aborted State –
After any type of failure, the transaction moves from the "failed state" to the "aborted state". Since in the previous states the changes were made only to the local buffer or main memory, these changes are deleted or rolled back.

5. Committed State –
This is the state in which the changes have been made permanent on the database; the transaction is complete and then moves to the "terminated state".

6. Terminated State –
If there is no roll-back pending, or the transaction arrives from the "committed state", the system is consistent and ready for a new transaction, and the old transaction is terminated.
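
These states and the legal moves between them form a small state machine, which can be summarised in code as follows (an illustrative Java sketch mirroring the transitions described above, not the API of any particular DBMS):

// Transaction life cycle as a simple state machine (illustrative only).
enum TxnState {
    ACTIVE, PARTIALLY_COMMITTED, FAILED, ABORTED, COMMITTED, TERMINATED;

    boolean canMoveTo(TxnState next) {
        switch (this) {
            case ACTIVE:              return next == PARTIALLY_COMMITTED || next == FAILED;
            case PARTIALLY_COMMITTED: return next == COMMITTED || next == FAILED;
            case FAILED:              return next == ABORTED;   // failure leads to roll-back
            case ABORTED:             return next == TERMINATED;
            case COMMITTED:           return next == TERMINATED;
            default:                  return false;             // TERMINATED is final
        }
    }
}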

What is Transaction Processing Monitors (TPM)?

Transaction Processing Monitors are systems designed and developed in the 1970s and 1980s to support a large number of airline terminals from a single system or computer. They were developed for building complex transaction processing systems with a large number of clients and servers.
Transaction Processing Monitors act as middleware (middleware is software that helps and bridges a variety of communication/connectivity between two or more applications); their main task is to support and handle interactions between applications on a variety of computer platforms.
Transaction Processing Monitors, usually known as TP monitors, provide functionality such as managing, deploying, and developing transactional distributed information systems. A TP monitor controls programs that monitor or manage a transaction of data as it passes from one stage in a process to another in an organized, transaction-oriented manner.
A transaction monitor can be used in various system components, such as communication systems and operating systems, for transaction-protected applications. It provides an operating system on top of the existing operating system that connects thousands of computers with a pool of shared server processes in real time.
One of the oldest forms of middleware is from IBM and was made to provide rich runtime environments for online transaction processing applications. Newer, client-server-based TP monitors followed; they were the best of their time and remain relevant today, for example in the processing of banking transactions.
Features:
• It provides the ease to create user interfaces.
• It unwraps incoming content/data into data packets.
• It maintains a continuous queue of client requests and of responses from the server.
• It routes client data to servers.
• It returns results from services securely.
• It hides inner transmission details from programmers.
• It helps in managing the load on the program.
Working:
• Incoming messages arrive at the input queue from the queue manager.
• The durability of the queue of outgoing information is crucial, so the application server sends a confirmation message to the output queue as part of the transaction.
• Once the transaction completes, the TP monitor guarantees that the message is delivered.
• Many TP monitors have locking and recovery facilities that let application servers implement the ACID properties by themselves.
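
The queued flow described above can be outlined as follows. This is only a sketch of the idea in plain Java; real TP monitors such as CICS or Tuxedo expose their own APIs and make the dequeue, the work, and the enqueue a single recoverable transaction.

import java.util.concurrent.BlockingQueue;

// Illustrative request/reply loop of the kind a TP monitor manages:
// take a request from the input queue, process it, and place the reply on
// the output queue. In a real TP monitor all three steps belong to one
// recoverable transaction so that no message is lost or duplicated.
static void serve(BlockingQueue<String> inputQueue,
                  BlockingQueue<String> outputQueue) throws InterruptedException {
    while (true) {
        String request = inputQueue.take();       // from the queue manager / input queue
        String reply = "processed:" + request;    // application-server work goes here
        outputQueue.put(reply);                   // durable confirmation to the client
    }
}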
Components:
• Monitor: It provides a runtime environment that helps in providing security and a fast response time to the system. It comprises different tools for the installation of components.
• Communication services: This includes the protocols and mechanisms the system uses to carry messages and to support peer-to-peer communication.
• Transactional RPC: Provides a basic transactional remote procedure call mechanism.
• Transactional services: These provide support for concurrency control, recovery, and transactional programming.
Functions:
• Message Manager
• Request control
• Application Servers
• Process Management
• Inter-Process Communication
• Queuing communications
• System Management and Recovery
• TP monitors can also perform system management functions related to accounting and security.
Benefits:
• TP monitors act as a development platform for applications that run on multiple computers.
• TP-monitor software allows TP programs to run.
• A TP monitor stands as a model of middleware in itself (e.g., Message-Oriented Middleware).
• It helps to handle many events at once without interruption.
• It helps in providing the interface between input devices and DBMS systems.
• It provides user data security and sets up transactions in such a way that they cannot be manipulated.

Weak Levels of Consistency

Each transaction in a database management system must necessarily follow the ACID properties, which are essentially required to maintain an appropriate mechanism for executing a sequence of basic operations, or a function, on a database without violating any constraints, so that a correct and consistent output is produced at completion. An important member of this set of properties is consistency, the 'C' in the acronym ACID for a non-distributed database system. For the discussion that follows, note that we discuss the concept of consistency levels for a distributed database system, and hence follow the CAP theorem.
Here, it is important to note that the sense in which consistency is being used is not the 'C' of the acronym ACID; it is a property of the distributed database system, the 'C' of CAP as used in the CAP theorem. Consistency, as in the 'C' of CAP, implies that the rules defined for executing a concurrent and distributed system make it enact and function, for a user, like a single-threaded and centralised database system. Since we are discussing concurrent threads, any transaction that exists as part of a thread must still adhere to the ACID properties within itself.
Consistency is hence a property that ensures that if there exist any copies of the database, they must all perform a series of operations in a specified order so as to maintain the same final state in each and every copy of the database as well as in the original database itself.
1. All the READ operations at any particular instant have a single possible output and hence must return the same value. This must hold for thread-based concurrent execution in a distributed database system.
2. Each READ must reflect the most recently performed WRITE of that data item.
3. The condition specified above holds irrespective of which server processed that particular WRITE.
4. Also, even though READ and WRITE operations are executed at distinct nodes of a distributed system, this does not change the required output; it does, however, create the additional responsibility of maintaining a global record of the order of execution of READs and WRITEs for every variable.
5. This procedure lets the nodes exchange information and present the same order of output, which ensures consistency.
This is the definition of almost perfect consistency, which is also termed Atomic Consistency. Absolutely perfect consistency is termed Strict Consistency, but it is impractical to implement and hence remains of theoretical interest only. Both of these levels are discussed in detail below.
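
As a toy illustration of rules 1 and 2 above, the class below serializes all operations on a single data item so that every READ returns the most recently written value; it is only a stand-in for what a distributed system must achieve, via replication and ordering protocols, across several nodes:

// Toy single-copy register: operations are serialized, so every READ returns
// the value of the most recent WRITE, whichever thread issued it.
class AtomicRegister<V> {
    private V value;

    synchronized void write(V newValue) {   // WRITEs are applied in a total order
        value = newValue;
    }

    synchronized V read() {                 // READ reflects the latest WRITE
        return value;
    }
}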
We begin by discussing the basic definition of consistency and of a consistency level. Further, we will list the distinct levels of consistency, categorizing each level from the strongest to the weakest, and conclude with a brief discussion of Degree-Two Consistency and Cursor Stability.
Difference between Strong Consistency and Weak Consistency:

1. Strong Consistency: The current state of the database follows a universally and mutually accepted sequence of state changes.
   Weak Consistency: Distinct views of the database state are allowed to see different and unmatched updates to the database state.

2. Strong Consistency: Strict Consistency, Atomic Consistency, and Sequential Consistency are stronger levels of consistency.
   Weak Consistency: Causal Consistency and Eventual Consistency are weaker levels of consistency.

3. Strong Consistency: The end user is unaware of the replication of the given database.
   Weak Consistency: The application developer must be explicitly aware of the replicated nature of the data items in the database.

4. Strong Consistency: The user has a view of the database as if there exists only one copy, which continuously reflects each state transition in a forward direction along with the operations.
   Weak Consistency: The developer must adapt to the replicated nature of the data in the database, which increases the complexity of development compared to strong consistency.

Consistency Levels:

Just as the various isolation levels offer a transaction a specific degree of isolation, with the chosen level significantly affecting the performance of the database, so too, in the context of an ACID-based database system, the vast majority of database management systems offer the user an entire range of consistency levels from which to choose, in accordance with the needs of the application. Consistency adds to correctness in return for giving up some of the best possible performance or CPU utilization, and hence some of the throughput of the system.
The purpose of developing distinct levels of consistency is to specify a procedure for avoiding conflicts among the individual but concurrent threads that might be accessing a shared memory space and hence the same data. Here we focus on understanding the basics through READ and WRITE operations on individual data items only.
Multilevel Transactions

Multilevel transactions are a variant of nested transactions in which the nodes of a transaction tree correspond to executions of operations at particular levels of abstraction in a layered system architecture. The edges in the tree represent the implementation of an operation by a sequence (or partial ordering) of operations at the next lower level. An example instantiation of this model is transactions with record and index-key accesses as high-level operations, which are in turn implemented by reads and writes of database pages as low-level operations. The model allows reasoning about the correctness of concurrent executions at different levels, aiming for serializability at the top level: equivalence to a sequential execution of the transaction roots. In this way, semantic properties of operations, such as different forms of commutativity, can be exploited for higher concurrency, and correctness proofs for the corresponding...

TRANSACTIONAL WORKFLOWS

• Workflows are activities that involve the coordinated execution of multiple tasks performed
by different processing entities.

• With the growth of networks, and the existence of multiple autonomous database systems,
workflows provide a convenient way of carrying out tasks that involve multiple systems.

• An example of a workflow is the delivery of an email message, which goes through several mail systems to reach its destination. – Each mailer performs a task: forwarding the mail to the next mailer. – If a mailer cannot deliver the mail, the failure must be handled semantically (a delivery-failure message).

• Workflows usually involve humans: e.g. loan processing, or purchase order processing.

LOAN PROCESSING WORKFLOW

• In the past, workflows were handled by creating and forwarding paper forms

• Computerized workflows aim to automate many of the tasks, but humans still play a role, e.g., in approving loans.
• To automate the tasks involved in Loan Processing –We can store the Loan application and
associated information in a database.

• The workflow itself then involves handing responsibility from one human to the next.

• Workflows are very important in organizations, and organizations today have multiple software systems that need to work together. – When an employee joins an organization, information about the employee is provided to:

• Payroll System.

• Library System.

• Authentication Systems.

We need to address two activities in general to automate a workflow: – Workflow Specification. – Workflow Execution.

• Both activities are complicated by the fact that many organizations use several
independently managed information processing systems, that in most cases were developed
separately to automate different functions.

• Workflow activities may require interactions among several such systems each performing
a task as well as interactions with humans.

• A task may use parameters stored in its variables, may retrieve and update data in the local
system, may store its results in its output variables.

• At any time during the execution, the workflow state consists of the collection of states of the workflow's constituent tasks and the states of all variables in the workflow specification.

• The coordination of tasks can be specified either statically or dynamically. – A static specification defines the tasks and the dependencies among them before the execution of the workflow begins. – A dynamic specification defines the dependencies and the execution of tasks on demand, along the route of execution itself.

STATIC SPECIFICATION
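
As an illustration, a static specification can be written down as a fixed set of tasks together with the dependencies among them, known before execution starts. Below is a hypothetical Java sketch for the loan-processing workflow; the task names are illustrative only.

import java.util.List;
import java.util.Map;

// Sketch of a static workflow specification: the tasks and the dependencies
// among them are fixed before execution begins.
class LoanWorkflowSpec {
    // task -> tasks that must complete before it can start
    static final Map<String, List<String>> TASKS = Map.of(
        "receive_application", List.of(),
        "verify_documents",    List.of("receive_application"),
        "credit_check",        List.of("receive_application"),
        "officer_approval",    List.of("verify_documents", "credit_check"),
        "disburse_loan",       List.of("officer_approval"));
}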
