Intro to DS Chapter 5
a valid sequence of events under FIFO consistency, but not under the other (stronger) models discussed so far
• FIFO consistency is easy to implement; tag each write
operation with a (process, sequence number) pair, and
perform writes per process in the order of their sequence
number
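As a hedged illustration of this (process, sequence number) tagging, the sketch below assumes a hypothetical FifoReplica class (all names are made up): a replica buffers incoming writes per origin process and applies them strictly in sequence-number order, while writes from different processes may still interleave freely.

    # Hypothetical sketch of FIFO-consistent write handling (illustrative names only)
    from collections import defaultdict

    class FifoReplica:
        def __init__(self):
            self.store = {}                   # data item -> current value
            self.next_seq = defaultdict(int)  # origin process -> next expected sequence number
            self.pending = defaultdict(dict)  # origin process -> {seq_no: (item, value)}

        def receive_write(self, pid, seq_no, item, value):
            """Buffer the tagged write, then apply whatever is now in order for pid."""
            self.pending[pid][seq_no] = (item, value)
            while self.next_seq[pid] in self.pending[pid]:
                it, val = self.pending[pid].pop(self.next_seq[pid])
                self.store[it] = val          # one process's writes are applied in order;
                self.next_seq[pid] += 1       # different processes' writes may interleave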
• consider the following three processes again

  Process P1        Process P2        Process P3
  x = 1;            y = 1;            z = 1;
  print (y, z);     print (x, z);     print (x, y);

• one possible statement execution order as seen by each of the
  three processes under FIFO consistency (each column is that
  process's local view):

  As seen by P1     As seen by P2     As seen by P3
  x = 1;            x = 1;            y = 1;
  print (y, z);     y = 1;            print (x, z);
  y = 1;            print (x, z);     z = 1;
  print (x, z);     print (y, z);     print (x, y);
  z = 1;            z = 1;            x = 1;
  print (x, y);     print (x, y);     print (y, z);
Models with synchronization operations
6. Weak Consistency
• FIFO consistency is still unnecessarily restrictive for many
applications; it requires that writes originating in a single process
be seen everywhere in order
• not all applications require even seeing all writes, let alone
seeing them in order
• for example, there is no need to worry about intermediate results
in a critical section, since other processes will not see the data
until the process leaves the critical section; only the final result
needs to be seen by other processes
• this can be done by a synchronization variable, S, that has
only a single associated operation synchronize(S), which
synchronizes all local copies of the data store
• a process performs operations only on its locally available copy
of the store
• when the data store is synchronized, all local writes by process P
are propagated to the other copies and writes by other processes
are brought in to P’s copy
• this leads to weak consistency models, which have three
properties:
1. Accesses to synchronization variables associated with a data
store are sequentially consistent (all processes see all
operations on synchronization variables in the same order)
2. No operation on a synchronization variable is allowed to be
performed until all previous writes have been completed
everywhere (synchronization flushes the pipeline: all
partially completed - or in progress - writes are guaranteed
to be completed when the synchronization is done)
3. No read or write operation on data items is allowed to be
performed until all previous operations to synchronization
variables have been performed (when a process accesses a
data item (for reading or writing) all previous
synchronization will have been completed; by doing a
synchronization a process can be sure of getting the most
recent values)
• weak consistency enforces consistency on a group of
operations, not on individual reads and writes
• e.g., S stands for a synchronize operation; it means that the local
copy of the data store is brought up to date
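As a hedged illustration of the synchronize(S) behaviour described above (the class and its methods are an assumed sketch, not a prescribed interface): every read and write touches only the local copy, and synchronize() first flushes local writes to the other copies and then brings in writes made elsewhere.

    # Hypothetical sketch of a weakly consistent local copy (illustrative names only)
    class WeakStore:
        def __init__(self, shared):
            self.shared = shared         # dict standing in for "all the other copies"
            self.local = dict(shared)    # this process's locally available copy
            self.unsynced = {}           # local writes not yet propagated

        def write(self, item, value):    # operations touch only the local copy
            self.local[item] = value
            self.unsynced[item] = value

        def read(self, item):
            return self.local.get(item)

        def synchronize(self):                  # synchronize(S)
            self.shared.update(self.unsynced)   # propagate local writes to the other copies
            self.unsynced.clear()
            self.local = dict(self.shared)      # bring in writes made by other processes

A process producing intermediate results in a critical section would call synchronize() only when leaving it, so the other copies never observe the intermediate values.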
7. Release Consistency
• with the weak consistency model, when a synchronization variable is
accessed, the data store does not know whether it is done
because the process has finished writing the shared data or is
about to start reading
• if we can separate the two (entering a critical section and leaving
it), a more efficient implementation might be possible
• the idea is to selectively guard shared data; the shared data that
are kept consistent are said to be protected
• release consistency provides mechanisms to separate the two
kinds of operations or synchronization variables
• an acquire operation is used to tell that a critical region is
about to be entered
• a release operation is used to tell that a critical region has just
been exited
• when a process does an acquire, the store will ensure that all
copies of the protected data are brought up to date to be
consistent with the remote ones; it does not guarantee that
locally made changes will be sent to other local copies
immediately
• when a release is done, protected data that have been changed
are propagated out to other local copies of the store; it does
not necessarily import changes from other copies
• implementation algorithm (eager release consistency); see the sketch after this list
• to do an acquire, a process sends a message to a central
synchronization manager requesting an acquire on a particular lock
• if there is no competition, the request is granted
• then, the process does reads and writes on the shared data, locally
• when the release is done, the modified data are sent to the other
copies that use them
• after each copy has acknowledged receipt of the data, the
synchronization manager is informed of the release
• but maybe not all processes need to see the new changes
• a variant is the lazy release consistency
• at the time of release, nothing is sent anywhere
• instead, when an acquire is done, the process trying to do an acquire
has to get the most recent values of the data
• this avoids sending values to processes that do not need them,
thereby reducing wasted bandwidth
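A minimal, self-contained simulation of the eager variant under these assumptions (a central synchronization manager, in-process replicas, made-up names):

    # Hypothetical in-process simulation of eager release consistency (illustrative only)
    import threading

    class SyncManager:
        def __init__(self):
            self.locks = {}                       # lock name -> threading.Lock

        def acquire(self, lock_name):
            self.locks.setdefault(lock_name, threading.Lock()).acquire()

        def release(self, lock_name):
            self.locks[lock_name].release()

    class Replica:
        def __init__(self, manager, peers):
            self.manager = manager
            self.peers = peers                    # the other copies that use the protected data
            self.data = {}
            self.dirty = {}

        def write(self, item, value):             # reads/writes stay local inside the section
            self.data[item] = value
            self.dirty[item] = value

        def critical_section(self, lock_name, work):
            self.manager.acquire(lock_name)       # 1. request the lock from the central manager
            try:
                work(self)                        # 2. operate on the local copy only
                for peer in self.peers:           # 3. eager release: push the modified data to
                    peer.data.update(self.dirty)  #    the other copies before releasing
                self.dirty.clear()
            finally:
                self.manager.release(lock_name)   # 4. inform the manager of the release

In the lazy variant, step 3 would be dropped and a later acquire would instead pull the current values from the last process that released the lock.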
8. Entry Consistency
• like release consistency, it requires an acquire and release to be
used at the start and end of a critical section
• however, it requires that each ordinary shared data item be
associated with some synchronization variable such as a lock
• if it is desired that elements of an array be accessed
independently in parallel, then different array elements may be
associated with different locks
• synchronization variable ownership
• each synchronization variable has a current owner, the
process that acquired it last
• the owner may enter and exit critical sections repeatedly
without sending messages
• other processes must send a message to the current owner
asking for ownership and the current values of the data
associated with that synchronization variable
• several processes can also simultaneously own a
synchronization variable, but only for reading
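A hedged sketch of the ownership mechanism (classes and fields are made up, and read-only shared ownership is omitted for brevity): re-acquiring a variable you already own costs no messages, while acquiring one owned elsewhere fetches both ownership and the current value of the guarded data.

    # Hypothetical sketch of synchronization-variable ownership in entry consistency
    class SyncVar:
        def __init__(self, guarded_value=None):
            self.guarded_value = guarded_value    # the shared data item associated with this variable
            self.owner = None                     # the process that acquired the variable last

    class Process:
        def __init__(self, name):
            self.name = name
            self.local = {}                       # sync variable -> locally cached guarded value

        def acquire(self, var):
            if var.owner is self:
                return                            # the owner re-enters without sending messages
            # otherwise "ask" the current owner for ownership and the current value
            self.local[var] = var.guarded_value   # remote changes become visible at the acquire
            var.owner = self

        def release(self, var, new_value):
            var.guarded_value = new_value         # the owner keeps the up-to-date guarded copy
            self.local[var] = new_value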
• a data store exhibits entry consistency if it meets all the following
conditions:
• An acquire access of a synchronization variable is not allowed to
perform with respect to a process until all updates to the
guarded shared data have been performed with respect to that
process. (at an acquire, all remote changes to the guarded data
must be made visible)
• Before an exclusive mode access to a synchronization variable
by a process is allowed to perform with respect to that process,
no other process may hold the synchronization variable, not
even in nonexclusive mode.
• After an exclusive mode access to a synchronization variable has
been performed, any other process's next nonexclusive mode
access to that synchronization variable may not be performed
until it has performed with respect to that variable's owner. (it
must first fetch the most recent copies of the guarded shared
data)
a valid event sequence for entry consistency
c. Epidemic Protocols
update propagation in eventually consistent data stores is often
implemented by a class of algorithms known as epidemic
protocols
updates are aggregated into a single message and then
exchanged between two servers
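A minimal sketch of one such exchange, assuming a push-pull anti-entropy style with per-item version numbers (function and variable names are illustrative):

    # Hypothetical push-pull anti-entropy exchange between two servers (illustrative only)
    def anti_entropy(server_a, server_b):
        """Each server is a dict {item: (version, value)}; the newer version wins on both sides."""
        for item in set(server_a) | set(server_b):
            ver_a, _ = server_a.get(item, (0, None))
            ver_b, _ = server_b.get(item, (0, None))
            if ver_a > ver_b:
                server_b[item] = server_a[item]   # push the newer update to the other server
            elif ver_b > ver_a:
                server_a[item] = server_b[item]   # pull the newer update from the other server

    # example: after one exchange both servers hold x at version 3 and y at version 2
    a = {"x": (3, "new value of x")}
    b = {"x": (1, "stale x"), "y": (2, "value of y")}
    anti_entropy(a, b)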
5. Consistency Protocols
so far we have concentrated on various consistency
models and general design issues
consistency protocols describe an implementation of a
specific consistency model
there are three types
  1. primary-based protocols
     a. remote-write protocols
     b. local-write protocols
  2. replicated-write protocols
     a. active replication
     b. quorum-based protocols
  3. cache-coherence protocols
1. Primary-Based Protocols
each data item x in the data store has an associated
primary, which is responsible for coordinating write
operations on x
two approaches: remote-write protocols, and local-write
protocols
a. Remote-Write Protocols
all read and write operations are carried out at a
(remote) single server; in effect, data are not
replicated; traditionally used in client-server systems,
where the server may possibly be distributed
primary-based remote-write protocol with a fixed server to which all read and write
operations are forwarded
another approach is primary-backup protocols where reads
can be made from local backup servers while writes should
be made directly on the primary server
the backup servers are updated each time the primary is
updated
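A hedged sketch of the primary-backup flavour (local method calls stand in for the forwarded messages; all names are made up): writes go to the primary, which pushes the update to every backup before acknowledging, while reads can be served by a local backup.

    # Hypothetical primary-backup (remote-write) sketch; calls are local stand-ins for messages
    class Backup:
        def __init__(self):
            self.store = {}

        def read(self, item):                 # reads can be served from a local backup
            return self.store.get(item)

        def apply_update(self, item, value):  # invoked by the primary on every write
            self.store[item] = value

    class Primary(Backup):
        def __init__(self, backups):
            super().__init__()
            self.backups = backups

        def write(self, item, value):         # all writes are forwarded to the primary
            self.store[item] = value
            for backup in self.backups:       # backups are updated on every primary update
                backup.apply_update(item, value)
            return "ack"                      # acknowledge only after the backups are updated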
b. Local-Write Protocols
two approaches
i. there is a single copy; no replicas
when a process wants to perform an operation on some
data item, the single copy of the data item is transferred
to the process, after which the operation is performed
primary-based local-write protocol in which a single copy is migrated between
processes
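A minimal sketch of this single-copy migration (no replicas; class and method names are illustrative): the one copy of the item is handed over to whichever process wants to operate on it, and the operation then runs locally.

    # Hypothetical single-copy migration (primary-based local-write); illustrative only
    class MigratingItem:
        def __init__(self, value, holder):
            self.value = value
            self.holder = holder                # the single process currently holding the copy

        def operate(self, process, operation):
            if self.holder is not process:
                self.holder = process           # transfer the single copy to the requesting process
            self.value = operation(self.value)  # then perform the operation locally
            return self.value

    # example: ownership of the only copy moves from "p1" to "p2"
    item = MigratingItem(0, holder="p1")
    item.operate("p2", lambda v: v + 1)         # copy migrates to p2, then the write is applied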