
Parallel Computing Landscape
(CS 526)

Junaid Asghar | Lecturer
Department of Computer Science, The University of Lahore
BSCS (Hons.), UET Lahore
MSCS, UMT Lahore

Email: [email protected]
Counselling Hours: 2.30 to 3.30 pm, Friday
Outline
• Introduction to parallel and distributed computing
• Moore's Law
• Multi-cores
• Moore's Law in the era of multi-cores
• Flynn's Taxonomy: SISD, SIMD, MISD, MIMD
• Hardware Architectures:
– Multi-core processors (heterogeneous multi-cores, e.g., GPUs)
– Multi-processor systems (shared memory)
– Multi-processor systems (distributed memory)
• Parallel Architectures: classification of MIMD (NUMA, UMA,
COMA, CC-NUMA)
– Cluster
– Grid computing
– Cloud and fog computing
Concurrency & Parallelism

Concurrency
Suppose you are asked to sing and eat at the same time. At any given
instant you can either sing or eat, since both tasks use your mouth. So
you eat for a while, then sing, and keep alternating until the food is
finished or the song is over. You have performed the two tasks
concurrently.

Sing → Eat → Sing → Eat → Sing → Eat

• Concurrency means executing multiple tasks at the same time, but not
necessarily simultaneously. In a concurrent application, two tasks can
start, run, and complete in overlapping time periods, i.e., Task-2 can
start even before Task-1 completes.
• How concurrency is achieved differs across processors. In a
single-core environment (i.e., your processor has a single core),
concurrency is achieved via a mechanism called context switching. In a
multi-core environment, concurrency can be achieved through parallelism
(see the sketch below).
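
A minimal Python sketch (not from the slides; the names task, sing, and
eat are illustrative) of concurrency via threads. In CPython the global
interpreter lock lets only one thread run Python code at a time, so the
two tasks interleave rather than run simultaneously, matching the
single-core picture above:

import threading
import time

def task(name, steps):
    # Each step sleeps briefly, yielding the CPU so the other
    # task can run; the output of the two tasks interleaves.
    for i in range(steps):
        print(f"{name}: step {i + 1}")
        time.sleep(0.1)

sing = threading.Thread(target=task, args=("sing", 3))
eat = threading.Thread(target=task, args=("eat", 3))
sing.start()
eat.start()
sing.join()
eat.join()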
Parallelism
• Suppose you are given two tasks: cooking and speaking to your friend
over the phone. You can do these two things simultaneously, cooking
while you speak. Now you are performing your tasks in parallel.

• Parallelism means performing two or more tasks simultaneously.

Parallel computing in computer science refers to the process of
performing multiple calculations simultaneously.
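
A minimal Python sketch (assumed, not from the slides) of parallelism: a
pool of worker processes computes results simultaneously, each process
free to run on its own core:

import multiprocessing as mp

def square(n):
    # CPU work executed in a separate worker process
    return n * n

if __name__ == "__main__":
    # Four worker processes run simultaneously, one per core,
    # giving true parallelism rather than interleaving.
    with mp.Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]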
How is concurrency related to parallelism?
• Concurrency and parallelism both describe how tasks or computations
are carried out on a machine.
• In a single-core environment, concurrency happens by interleaving
tasks over the same time period via context switching, i.e., at any
particular instant only a single task is executing.
• In a multi-core environment, concurrency can be achieved via
parallelism, in which multiple tasks execute simultaneously.
Threads & Processes

Threads
A thread is a sequence of executable code that can run independently of
other threads. It is the smallest unit of work that can be scheduled by
an OS. A program can be single-threaded or multi-threaded.

Processes
A process is an instance of a running program. A program can have
multiple processes. A process usually starts with a single thread, i.e.,
a primary thread, but later in its execution it can create multiple
threads, as the sketch below shows.
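
A minimal Python sketch (assumed; the worker name is illustrative): the
interpreter is a single process whose primary thread creates three more
threads:

import os
import threading

def worker(i):
    # All threads share the same process, hence the same PID.
    print(f"thread {i} in process {os.getpid()}")

# The primary thread is the one executing this module.
print(f"primary thread in process {os.getpid()}")
threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()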
Synchronous and Asynchronous

Synchronous
• Imagine you are asked to write two letters, one to your mom and
another to your best friend. You cannot write the two letters at the
same time unless you are truly ambidextrous.
• In a synchronous programming model, tasks are executed one after
another. Each task waits for the previous task to complete before it
gets executed.
• Best suited for simple applications or CPU-bound tasks where
operations need to be performed in a specific order.

Asynchronous
• Imagine you are asked to make a sandwich and wash your clothes in a
washing machine. You could put your clothes in the washing machine and,
without waiting for it to finish, go and make the sandwich. Here you
performed the two tasks asynchronously.
• In an asynchronous programming model, while one task is executing you
can switch to a different task without waiting for the previous one to
complete (see the sketch below).
What is the role of synchronous and asynchronous programming in
concurrency and parallelism?

Both synchronous and asynchronous programming can be used to achieve
concurrency and parallelism, but asynchronous programming is
particularly powerful for managing concurrency in I/O-bound tasks, while
synchronous programming (with multiprocessing) is better suited for
achieving parallelism in CPU-bound tasks.
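
A short sketch (assumed, not from the slides) of that rule of thumb
using concurrent.futures: processes for CPU-bound work, threads for
I/O-bound work:

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time

def cpu_bound(n):
    return sum(i * i for i in range(n))   # pure computation

def io_bound(seconds):
    time.sleep(seconds)                   # stands in for a network or disk wait
    return seconds

if __name__ == "__main__":
    # CPU-bound: worker processes give real parallelism across cores.
    with ProcessPoolExecutor() as ex:
        print(list(ex.map(cpu_bound, [10**6] * 4)))
    # I/O-bound: threads are enough, since the tasks mostly wait.
    with ThreadPoolExecutor() as ex:
        print(list(ex.map(io_bound, [0.5] * 4)))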
Parallel Computing
• What is parallel computing?
– Use of multiple processors or computers working together on a common
task
– Each processor works on its own section of the problem
– Processors may exchange information (see the sketch below)
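
A minimal Python sketch (assumed; the chunking scheme is one choice
among many) of this decomposition: each worker sums its own section of
the data, then the partial results are combined:

import multiprocessing as mp

def partial_sum(chunk):
    # Each worker process handles its own section of the problem.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n = 4                                    # number of workers
    chunks = [data[i::n] for i in range(n)]  # split the problem
    with mp.Pool(n) as pool:
        partials = pool.map(partial_sum, chunks)
    # "Exchanging information": combining the partial results.
    print(sum(partials))                     # equals sum(data)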
Parallel Computing
• Parallel computing: using multiple processors in parallel to solve
problems more quickly than with a single processor

• Examples of parallel machines:
– A cluster computer that combines multiple PCs with a high-speed
network
– A shared-memory multiprocessor (Symmetric Multi-Processor) that
connects multiple processors to a single memory system
– A Chip Multi-Processor (CMP) that contains multiple processors
(called cores) on a single chip
Why do parallel computing?
1. Limits of single-CPU computing:
• Performance
• Available memory, etc.

2. To solve large problems:
• Problems that don't fit on a single CPU
• Problems that can't be solved in a reasonable time (weather
forecasting, etc.)
Why do parallel computing?
3. Faster results:
• Time-critical problems can be solved faster

4. ALL computers are parallel these days:
• Desktops
• Laptops
• Even smartphones have multiple cores
• Etc.
Single-Processor Systems
– From the mid-1980s until 2004, computers got faster:
• More transistors were packed onto chips every year
• The size of transistors continued to shrink
• The result was faster processors (the frequency-scaling method:
more GHz meant more performance)
Moore's Law
• Increased density of components on a chip
• Gordon Moore, co-founder of Intel:
"The number of transistors on a chip will double every year."

• Since the 1970s the pace has slowed a little:
– The number of transistors doubles roughly every 18 months
• Higher packing density means shorter electrical paths, giving higher
performance
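
A small arithmetic sketch (assumed; the 1971 Intel 4004 with roughly
2,300 transistors is used only as an illustrative baseline) of what
18-month doubling implies:

def transistors(year, base_year=1971, base_count=2_300,
                doubling_years=1.5):
    # Exponential growth: the count doubles every doubling_years.
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1980, 1990, 2000):
    print(y, f"{transistors(y):,.0f}")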
