Chapter 1

Subject Code: CSPC-43

Subject Name: Distributed Computing


Reference Books

✔ Distributed Computing: Principles, Algorithms, and Systems, by Ajay Kshemkalyani & Mukesh Singhal.

✔ Distributed Systems: Concepts and Design, by George Coulouris.

✔ Elements of Distributed Computing, by V. K. Garg.

✔ Distributed Systems: An Algorithmic Approach, by Sukumar Ghosh.

Distributed system
A distributed system is a collection of independent entities that cooperate to
solve a problem that cannot be solved individually.

Distributed systems have been in existence since the start of the universe.
For computing systems, a distributed system has been characterized in one of
several ways:
• A collection of computers that do not share a common memory or a
common physical clock, that communicate by message passing over a
communication network, and where each computer has its own memory
and runs its own operating system. Typically the computers are
semi-autonomous and are loosely coupled while they cooperate to address
a problem collectively.

• A collection of independent computers that appears to the users of the
system as a single coherent computer.
A distributed system can be characterized as a collection of mostly
autonomous processors communicating over a communication network and
having the following features:
• No common physical clock: This is an important assumption because it
introduces the element of “distribution” in the system and gives rise to the
inherent asynchrony amongst the processors.

• No shared memory: This is a key feature that requires message passing
for communication. This feature implies the absence of a common physical
clock. It may be noted that a distributed system may still provide the
abstraction of a common address space via the distributed shared memory
abstraction.
Distributed system features cont…
• Geographical separation: The wider apart the processors are
geographically, the more representative the system is of a distributed
system. However, it is not necessary for the processors to be on a
wide-area network (WAN).

• Recently, the network/cluster of workstations (NOW/COW) configuration
connecting processors on a LAN is also being increasingly regarded as a
small distributed system.
Distributed system features cont…
• Autonomy and heterogeneity: The processors are “loosely coupled” in
that they have different speeds and each can be running a different
operating system.
They are usually not part of a dedicated system, but cooperate with one
another by offering services or solving a problem jointly.
A typical distributed system is shown in the figure: each computer has a
memory-processing unit and the computers are connected by a
communication network.

Figure: A distributed system connects processors by a communication network.
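
To make this architecture concrete, the following sketch (an illustration added here, not taken from the reference books) models two computers as separate operating-system processes: each keeps its own private memory and they interact only by sending messages over a TCP connection. The loopback address, port number, and message contents are arbitrary choices for the example.

```python
# Minimal sketch (assumed example): two processes with private memory that
# communicate only by message passing over a network socket.
import multiprocessing
import socket
import time

HOST, PORT = "127.0.0.1", 50007   # arbitrary loopback address for the demo

def server():
    # This process's memory (the variable `total`) is private; other
    # processes can only influence it by sending messages.
    total = 0
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            data = conn.recv(1024)              # receive event
            total += int(data.decode())         # local update of private memory
            conn.sendall(str(total).encode())   # send event (reply)

def client():
    for _ in range(20):                         # retry until the server is listening
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.connect((HOST, PORT))
                s.sendall(b"42")                                  # send event
                print("client received:", s.recv(1024).decode())  # receive event
                return
        except ConnectionRefusedError:
            time.sleep(0.1)

if __name__ == "__main__":
    p = multiprocessing.Process(target=server)
    p.start()
    client()
    p.join()
```

Replacing the loopback address with real host names would turn the same structure into communication across a LAN or WAN, without changing the program logic.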
Distributed computation

• A distributed execution is the execution of processes across the distributed
system to collaboratively achieve a common goal. An execution is also
sometimes termed a computation.
The figure shows the relationships of the software components that run on
each of the computers and use the local operating system and network
protocol stack for functioning.

Figure: Interaction of the software components at each processor.
Motivation
• The motivation for using a distributed system is some or all of the following
requirements:

1. Inherently distributed computations: In many applications, such as money
transfer in banking or reaching consensus among parties that are
geographically distant, the computation is inherently distributed.
2. Resource sharing: Resources such as peripherals, complete data sets in
databases, and special libraries, as well as data (variables/files), cannot be fully
replicated at all the sites because it is often neither practical nor cost-effective.
Further, they cannot be placed at a single site because access to that site might
prove to be a bottleneck. Therefore, such resources are typically distributed
across the system.

3. Access to geographically remote data and resources: In many scenarios,
the data cannot be replicated at every site participating in the distributed
execution because it may be too large or too sensitive to be replicated.
4. Reliability: A distributed system has the inherent potential to provide
increased reliability because of the possibility of replicating resources and
executions, as well as the reality that geographically distributed resources are
not likely to crash/malfunction at the same time under normal circumstances.
Reliability entails several aspects:
• availability, i.e., the resource should be accessible at all times;
• integrity, i.e., the value/state of the resource should be correct, in the
face of concurrent access from multiple processors, as per the semantics
expected by the application;
• fault-tolerance, i.e., the ability to recover from system failures.
5. Increased performance/cost ratio: By resource sharing and accessing
geographically remote data and resources, the performance/cost ratio is
increased.

Although higher throughput has not necessarily been the main objective behind
using a distributed system, nevertheless, any task can be partitioned across the
various computers in the distributed system. Such a configuration provides a
better performance/cost ratio than using special parallel machines.
6. Scalability: As the processors are usually connected by a wide-area network,
adding more processors does not pose a direct bottleneck for the
communication network.

7. Modularity and incremental expandability: Heterogeneous processors may
easily be added into the system without affecting the performance, as long as
those processors are running the same middleware algorithms. Similarly,
existing processors may easily be replaced by other processors.
Message-passing systems versus shared memory systems
• Shared memory systems are those in which there is a (common) shared
address space throughout the system. Communication among processors
takes place via shared data variables, and control variables for
synchronization among the processors.
• Semaphores and monitors that were originally designed for shared
memory uniprocessors and multiprocessors are examples of how
synchronization can be achieved in shared memory systems.
• All multicomputer (NUMA as well as message-passing) systems that do
not have a shared address space provided by the underlying architecture
and hardware necessarily communicate by message passing.
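
The contrast can be illustrated with a short sketch (an assumed example in Python, not drawn from the references): the first half synchronizes access to a shared counter with a semaphore, in the shared memory style, while the second half removes the shared variable entirely and lets workers communicate with an aggregator purely by messages placed in a queue.

```python
# Sketch (assumed illustration) contrasting shared memory synchronization
# with message passing, using threads as stand-ins for processors.
import threading
import queue

# --- Shared memory style: a shared variable plus a semaphore for mutual exclusion.
counter = 0
sem = threading.Semaphore(1)

def shared_memory_worker(increments):
    global counter
    for _ in range(increments):
        sem.acquire()        # enter critical section
        counter += 1         # touch the shared variable
        sem.release()        # leave critical section

threads = [threading.Thread(target=shared_memory_worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("shared-memory counter:", counter)         # 4000

# --- Message-passing style: no shared variable; state is private to the aggregator.
inbox = queue.Queue()

def message_passing_worker(increments):
    for _ in range(increments):
        inbox.put(1)         # send a message instead of writing shared state

def aggregator(expected):
    total = 0                # private memory of the aggregator
    for _ in range(expected):
        total += inbox.get() # receive a message
    print("message-passing total:", total)        # 4000

workers = [threading.Thread(target=message_passing_worker, args=(1000,)) for _ in range(4)]
agg = threading.Thread(target=aggregator, args=(4000,))
agg.start()
for w in workers:
    w.start()
for w in workers:
    w.join()
agg.join()
```

Both halves reach the same final count, but only the first relies on a common address space; the second would work unchanged if the queue were replaced by a network channel.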
Synchronous versus asynchronous executions
• An asynchronous execution is an execution in which

(i) there is no processor synchrony and there is no bound on the drift rate of
processor clocks,

(ii) message delays (transmission + propagation times) are finite but
unbounded,

(iii) there is no upper bound on the time taken by a process to execute a step.
Synchronous versus asynchronous executions cont…
An example asynchronous execution with four processes P0 to P3 is shown in
the figure.
The arrows denote the messages; the tail and head of an arrow mark the send and
receive event for that message, denoted by a circle and vertical line, respectively.
Non-communication events, also termed internal events, are shown by shaded
circles.

Figure: An example of asynchronous execution in a message-passing system.
A timing diagram is used to illustrate the execution.
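
The behaviour in the figure can be mimicked with a small simulation (an assumed Python sketch, not part of the source material): four processes each perform an internal event, a send event, and a receive event, and message delays are drawn at random, so there is no processor synchrony and no agreed bound on delivery time.

```python
# Sketch of an asynchronous execution (illustrative assumption): four processes
# P0..P3 run at their own pace; message delays are random, so receive events
# interleave with send and internal events in no fixed order.
import threading
import queue
import random

N = 4
inboxes = [queue.Queue() for _ in range(N)]       # one mailbox per process

def send(src, dst, msg):
    delay = random.uniform(0.0, 0.3)              # arbitrary delay, unbounded in principle
    threading.Timer(delay, inboxes[dst].put, args=((src, msg),)).start()

def process(pid):
    local = pid * 10                              # internal event: purely local computation
    print(f"P{pid}: internal event, local state = {local}")
    send(pid, (pid + 1) % N, f"hello from P{pid}")  # send event to the next process on a ring
    src, msg = inboxes[pid].get()                 # receive event: blocks until delivery
    print(f"P{pid}: received '{msg}' from P{src}")

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Running the sketch several times produces different interleavings of the printed events, which is exactly the asynchrony the definition captures.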
Synchronous versus asynchronous executions cont…
• A synchronous execution is an execution in which
(i) processors are synchronized and the clock drift rate between any two
processors is bounded,
(ii) message delivery (transmission + delivery) times are such that they occur
in one logical step or round, and
(iii) there is a known upper bound on the time taken by a process to execute a
step.
Synchronous versus asynchronous executions cont…
• An example of a synchronous execution with four processes P0 to P3 is
shown in the figure. The arrows denote the messages.

Figure: An example of synchronous execution in a message-passing system.
A timing diagram is used to illustrate the execution. All the messages sent in
a round are received within that same round.
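
For contrast with the asynchronous sketch above, the round structure of a synchronous execution can be sketched with a barrier (again an assumed Python illustration): in each round every process first sends to its neighbour on a ring, all processes then meet at the barrier, and every message sent in the round is received before the next round begins.

```python
# Sketch of a synchronous (round-based) execution, as an assumed illustration:
# a barrier forces all processes into lockstep rounds, and every message sent
# in a round is received within that same round.
import threading
import queue

N = 4
ROUNDS = 2
inboxes = [queue.Queue() for _ in range(N)]
barrier = threading.Barrier(N)                    # all N processes meet here

def process(pid):
    for rnd in range(ROUNDS):
        # Send phase: message to the next process on the ring.
        inboxes[(pid + 1) % N].put((pid, f"round {rnd} msg"))
        barrier.wait()                            # all sends of this round are complete
        # Receive phase: consume the message addressed to us in this round.
        src, msg = inboxes[pid].get()
        print(f"P{pid} round {rnd}: received '{msg}' from P{src}")
        barrier.wait()                            # round ends before the next one begins

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The two barrier waits play the role of the logical step or round in the definition: no process can move to round r+1 until every message of round r has been delivered and consumed.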
