Parallel and Distributed Computing
COMP3139
Lecture 7
Contents
• Distributed Shared Data (DSD)
• Concurrency
• Concurrency control mechanisms in distributed systems
• Parallelism
• Parallelism algorithms
• Code examples
DISTRIBUTED SHARED DATA (DSD)
• DSD presents a unified memory space, even though the data is physically
distributed across various locations.
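A minimal C++ sketch of the idea, using a hypothetical DistributedMap class: callers see one unified key/value space, while each entry actually lives on one of several nodes (simulated here as in-process maps).

#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: one logical address space whose entries are
// physically spread across several "nodes" (here, in-process maps).
class DistributedMap {
    std::vector<std::map<std::string, int>> nodes_;
public:
    explicit DistributedMap(std::size_t n) : nodes_(n) {}
    // Hashing the key picks the node; the caller never sees the split.
    void put(const std::string& key, int value) {
        nodes_[std::hash<std::string>{}(key) % nodes_.size()][key] = value;
    }
    int get(const std::string& key) {
        return nodes_[std::hash<std::string>{}(key) % nodes_.size()][key];
    }
};

int main() {
    DistributedMap dsd(3);             // data spread over 3 simulated nodes
    dsd.put("x", 42);
    std::cout << dsd.get("x") << "\n"; // reads as if from one memory space
}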
CONCURRENCY
• Managing multiple tasks that are in progress at the same time, even
if they are not executing at the same instant.
CONCURRENCY
Example Visualization:
Think of a single-lane road where
cars (tasks) take turns to move
forward. The lane is shared, but all
cars are making progress over time.
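A minimal C++ sketch of the single-lane picture (task names are illustrative): one loop alternates between two tasks, so both are in progress over time even though only one step executes at any instant.

#include <iostream>

int main() {
    // Two "cars" (tasks), each needing several steps to finish.
    int stepsA = 3, stepsB = 3;
    // One "lane": a single loop takes turns advancing each task, so both
    // make progress without ever executing at the same instant.
    while (stepsA > 0 || stepsB > 0) {
        if (stepsA > 0) { std::cout << "task A moves forward\n"; --stepsA; }
        if (stepsB > 0) { std::cout << "task B moves forward\n"; --stepsB; }
    }
}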
CONCURRENCY CONTROL
MECHANISMS IN
DISTRIBUTED SYSTEMS
• During the commit phase, conflicts with other transactions are detected.
• If a conflict is detected, the transaction is rolled back and retried with
a new snapshot.
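The behavior described here is commit-time validation with retry (optimistic concurrency control). A minimal single-threaded C++ sketch, with the Record type and tryCommit helper as illustrative assumptions and a conflicting writer simulated on the first attempt:

#include <iostream>

// Illustrative type: a record whose version number changes on every write.
struct Record { int value; int version; };

// Commit phase: detect a conflict by checking whether anyone wrote to the
// record since our snapshot was taken.
bool tryCommit(Record& db, int snapshotVersion, int newValue) {
    if (db.version != snapshotVersion) return false;  // conflict detected
    db.value = newValue;
    ++db.version;
    return true;
}

int main() {
    Record db{10, 0};
    for (int attempt = 1;; ++attempt) {
        int snapVersion = db.version;   // take a fresh snapshot
        int newValue = db.value + 1;    // do the work against the snapshot
        if (attempt == 1) ++db.version; // simulate a concurrent writer once
        if (tryCommit(db, snapVersion, newValue)) {
            std::cout << "committed " << db.value
                      << " on attempt " << attempt << "\n";
            break;
        }
        // Conflict: the transaction is rolled back; the loop retries
        // with a new snapshot.
    }
}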
SNAPSHOT ISOLATION
• Snapshot Isolation ensures that each transaction sees a consistent
snapshot of the database at the start of the transaction.
• MVCC
• Multi-Version Concurrency Control maintains multiple versions of
data and allows transactions to proceed without acquiring locks upfront.
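A minimal C++ sketch of multi-versioning (the MvccCell type and the timestamps are illustrative): writes append new versions instead of overwriting, and a reader sees the newest version committed at or before its snapshot timestamp, without taking any read locks.

#include <iostream>
#include <vector>

// Illustrative MVCC cell: every write adds a version tagged with a
// commit timestamp; old versions stay available to older snapshots.
struct Version { int value; int commitTs; };

struct MvccCell {
    std::vector<Version> versions;     // kept in commit-timestamp order
    void write(int value, int ts) { versions.push_back({value, ts}); }
    // A transaction whose snapshot was taken at snapshotTs reads the
    // newest version committed at or before that time -- no locks needed.
    int readAt(int snapshotTs) const {
        int result = 0;
        for (const Version& v : versions)
            if (v.commitTs <= snapshotTs) result = v.value;
        return result;
    }
};

int main() {
    MvccCell cell;
    cell.write(10, /*ts=*/1);
    cell.write(20, /*ts=*/5);
    std::cout << cell.readAt(3) << "\n";  // 10: snapshot predates ts 5
    std::cout << cell.readAt(7) << "\n";  // 20: sees the later version
}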
TIMESTAMP ORDERING
PESSIMISTIC CONCURRENCY
CONTROL (PCC):
• A transaction must acquire a lock on a data item before accessing it; if the
lock is not available, the transaction waits until the lock is released.
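A minimal C++ sketch using std::mutex as the pessimistic lock: whichever transaction arrives second blocks inside lock acquisition until the first one releases the lock.

#include <iostream>
#include <mutex>
#include <thread>

std::mutex rowLock;  // pessimistic lock guarding one "row"
int row = 0;

void transaction(int id) {
    // If the lock is unavailable, this call waits until it is released.
    std::lock_guard<std::mutex> guard(rowLock);
    ++row;
    std::cout << "transaction " << id << " updated the row\n";
}   // lock released here, letting the waiting transaction proceed

int main() {
    std::thread t1(transaction, 1), t2(transaction, 2);
    t1.join();
    t2.join();
    std::cout << "row = " << row << "\n";  // always 2: updates serialized
}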
TWO-PHASE LOCKING (2PL)
• Once a transaction holds a lock on a row, no other transaction can modify the
same row until the lock is released, preventing conflicts.
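A minimal single-transaction C++ sketch of the protocol's two phases (standard 2PL behavior, not spelled out on the slide): all locks are acquired in a growing phase, and once any lock is released in the shrinking phase, no new lock may be acquired.

#include <iostream>
#include <mutex>

std::mutex lockA, lockB;  // locks on two rows
int rowA = 0, rowB = 0;

int main() {
    // Growing phase: acquire every lock the transaction needs.
    lockA.lock();
    lockB.lock();
    ++rowA;   // no other transaction can modify these rows now
    ++rowB;
    // Shrinking phase: release locks; acquiring any new lock after the
    // first release would violate 2PL.
    lockB.unlock();
    lockA.unlock();
    std::cout << rowA << " " << rowB << "\n";
}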
STRICT TWO-PHASE LOCKING (STRICT
2PL)
• A variant of 2PL where all locks acquired during a transaction are held
until the transaction is committed or rolled back.
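The same sketch adjusted for the strict variant: nothing is released early, and every lock is held until the commit (or rollback) point. The commit function is an illustrative stand-in.

#include <iostream>
#include <mutex>

std::mutex lockA, lockB;
int rowA = 0, rowB = 0;

void commit() { std::cout << "commit\n"; }  // illustrative stand-in

int main() {
    lockA.lock();
    lockB.lock();
    ++rowA;
    ++rowB;
    // Strict 2PL: no shrinking phase before this point; all locks are
    // held until the transaction commits or rolls back.
    commit();
    lockB.unlock();
    lockA.unlock();
}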
MULTIPLE GRANULARITY LOCKING
DISTRIBUTED LOCK MANAGER (DLM)
PARALLELISM
• Parallelism refers to the simultaneous execution of multiple tasks or
operations.
• Suitable for tasks that can be broken down into independent subtasks, such
as scientific simulations or large-scale data processing.
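A minimal C++ sketch: summing an array is broken into two independent subtasks that run simultaneously on separate threads (and, on a multicore machine, separate cores).

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1000000, 1);
    long long left = 0, right = 0;
    auto mid = data.begin() + data.size() / 2;

    // Two independent subtasks execute at the same time.
    std::thread t1([&] { left  = std::accumulate(data.begin(), mid, 0LL); });
    std::thread t2([&] { right = std::accumulate(mid, data.end(), 0LL); });
    t1.join();
    t2.join();

    std::cout << "sum = " << left + right << "\n";  // 1000000
}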
PARALLELISM
• Example Visualization:
• Think of a road with multiple lanes, and each car (task) travels on a
different lane simultaneously. This allows all cars to reach their
destination at the same time.
BIT-LEVEL PARALLELISM
• Increasing the word size reduces the number of instructions the processor must
execute to perform an operation on variables whose sizes are greater than the
length of the word.
BIT-LEVEL PARALLELISM
• For example, consider an 8-bit processor that must add two 16-bit integers.
• The processor must first add the 8 lower-order bits from each integer using the
standard addition instruction.
• It must then add the 8 higher-order bits using an add-with-carry instruction,
including the carry bit from the lower-order addition.
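A C++ sketch that mirrors this two-step sequence, emulating the 16-bit addition with 8-bit quantities and an explicit carry:

#include <cstdint>
#include <iostream>

int main() {
    uint16_t a = 0x12F0, b = 0x0320;

    // Step 1: add the 8 lower-order bits with a standard addition.
    uint8_t lowSum = static_cast<uint8_t>((a & 0xFF) + (b & 0xFF));
    uint8_t carry  = ((a & 0xFF) + (b & 0xFF)) > 0xFF ? 1 : 0;

    // Step 2: add the 8 higher-order bits plus the carry bit from the
    // lower-order addition (the add-with-carry instruction).
    uint8_t highSum = static_cast<uint8_t>((a >> 8) + (b >> 8) + carry);

    uint16_t result = static_cast<uint16_t>((highSum << 8) | lowSum);
    std::cout << std::hex << result << "\n";               // 1610
    std::cout << ((a + b) == result ? "matches" : "differs") << "\n";
}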
INSTRUCTION-LEVEL PARALLELISM
• Instruction Level Parallelism (ILP) refers to the ability of a computer processor to execute
multiple instructions simultaneously within a single CPU cycle.
INSTRUCTION-LEVEL PARALLELISM
• Multiple instructions are processed in these stages simultaneously, allowing the CPU to
work on different parts of several instructions at once.
• For example, while one instruction is being fetched, another can be decoded, and yet another can
be executed. This overlapping increases throughput and improves processor efficiency.
INSTRUCTION-LEVEL PARALLELISM
• Branch Prediction: When the CPU encounters a conditional instruction (like an
if-else statement), it predicts which path the program will take and prefetches
instructions from that path to avoid delays.
• This speculative execution further increases ILP.
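A small C++ example of a branch the predictor handles well: the condition below is true on almost every iteration, so the CPU can guess "taken" and speculatively run the body ahead of time. (Whether speculation pays off is hardware-dependent; the code just shows the predictable pattern.)

#include <iostream>
#include <vector>

int main() {
    std::vector<int> data(1000);
    for (int i = 0; i < 1000; ++i) data[i] = i;   // regular, sorted data

    long long sum = 0;
    for (int x : data) {
        // Taken on 990 of 1000 iterations: the branch predictor quickly
        // learns the pattern and prefetches the body's instructions.
        if (x >= 10) sum += x;
    }
    std::cout << sum << "\n";  // 499455
}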
INSTRUCTION-LEVEL PARALLELISM
Example code:
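A minimal C++ sketch (variable names are illustrative): the first four statements have no data dependences on one another, so a pipelined, superscalar CPU can overlap their execution, while the chain below must run in order because each line consumes the previous result.

#include <iostream>

int main() {
    int a = 1, b = 2, c = 3, d = 4;

    // Independent instructions: no result feeds another, so the CPU can
    // fetch, decode, and execute these in overlapping cycles.
    int e = a + b;
    int f = c + d;
    int g = a * d;
    int h = b * c;

    // Dependent chain: each line waits on the previous one, limiting the
    // instruction-level parallelism available.
    int x = a + b;
    int y = x + c;
    int z = y + d;

    std::cout << e + f + g + h + z << "\n";  // 30
}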
TASK PARALLELISM
• Task parallelism distributes distinct tasks across multiple threads, potentially
on different processors or cores. This approach contrasts with data parallelism,
which focuses on dividing data into chunks and processing each chunk concurrently.
• Tasks are the individual units of work that can be executed independently.
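A minimal C++ sketch with two illustrative tasks: sumRange and countEven are distinct units of work with no dependence on each other, so std::async can run them at the same time, potentially on different cores.

#include <future>
#include <iostream>
#include <vector>

// Two different tasks: each is an independent unit of work.
long long sumRange(int n) {
    long long s = 0;
    for (int i = 1; i <= n; ++i) s += i;
    return s;
}

int countEven(const std::vector<int>& v) {
    int c = 0;
    for (int x : v) if (x % 2 == 0) ++c;
    return c;
}

int main() {
    std::vector<int> v{1, 2, 3, 4, 5, 6};
    // Task parallelism: launch distinct functions simultaneously.
    auto f1 = std::async(std::launch::async, sumRange, 100);
    auto f2 = std::async(std::launch::async, countEven, std::cref(v));
    std::cout << f1.get() << " " << f2.get() << "\n";  // 5050 3
}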
SUPERWORD LEVEL PARALLELISM
• Superword Level Parallelism often relies on SIMD (Single Instruction, Multiple
Data) instructions, which allow a single instruction to operate on multiple data
elements simultaneously.
• Example: Consider a simple example where we need to add two arrays of integers.
Traditional scalar operations would involve adding each pair of integers in sequence.
SUPERWORD LEVEL PARALLELISM
• Example Code:
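A C++ sketch using x86 SSE2 intrinsics (assumes an x86 CPU and, with GCC/Clang, the -msse2 flag): each _mm_add_epi32 adds four 32-bit integers at once, so the eight additions below take two vector instructions instead of eight scalar ones.

#include <immintrin.h>  // x86 SIMD intrinsics
#include <iostream>

int main() {
    alignas(16) int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(16) int b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    alignas(16) int c[8];

    // Scalar equivalent, one addition per instruction:
    //   for (int i = 0; i < 8; ++i) c[i] = a[i] + b[i];

    // SIMD version: one instruction adds four integer lanes at a time.
    for (int i = 0; i < 8; i += 4) {
        __m128i va = _mm_load_si128(reinterpret_cast<const __m128i*>(a + i));
        __m128i vb = _mm_load_si128(reinterpret_cast<const __m128i*>(b + i));
        _mm_store_si128(reinterpret_cast<__m128i*>(c + i),
                        _mm_add_epi32(va, vb));
    }

    for (int x : c) std::cout << x << " ";  // 11 22 33 44 55 66 77 88
    std::cout << "\n";
}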
THANK YOU