
Introduction to Parallel

Computing

Dr. Nausheen Shoaib


Marks Distribution
► Assignment, Project, Quiz : 20%
► Mid Terms : 30%
► Final: 50%

► Google Classroom: priwxlk


Book:
1. Introduction to Parallel Computing, Second Edition
by Ananth Grama, Anshul Gupta, George Karypis, Vipin Kumar
2. Programming Massively Parallel Processors, Second Edition
by David B. Kirk and Wen-mei W. Hwu
3. Big Data Systems: A 360-degree Approach
by Jawwad Shamsi
What is Parallel Computing? (1)

► Traditionally, software has been written for serial computation:


► To be run on a single computer having a single Central Processing Unit (CPU);
► A problem is broken into a discrete series of instructions.
► Instructions are executed one after another.
► Only one instruction may execute at any moment in time.
What is Parallel Computing? (2)
► In the simplest sense, parallel computing is the simultaneous use of
multiple compute resources to solve a computational problem.
► To be run using multiple CPUs
► A problem is broken into discrete parts that can be solved concurrently
► Each part is further broken down to a series of instructions
► Instructions from each part execute simultaneously on different CPUs
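As a concrete illustration (not from the slides), here is a minimal shared-memory sketch in C with OpenMP, assuming an OpenMP-capable compiler such as gcc with -fopenmp; the array size and names are illustrative. The loop is broken into chunks of iterations, and the chunks execute simultaneously on different CPUs.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N], b[N], c[N];

    /* Serial part: initialize the input arrays. */
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    /* The problem (element-wise addition) is broken into discrete parts;
       OpenMP assigns a chunk of iterations to each CPU, and the chunks
       execute concurrently. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f, using up to %d threads\n",
           c[N - 1], omp_get_max_threads());
    return 0;
}
```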
Parallel Computing: Resources

► The compute resources can include:


► A single computer with multiple processors;
► A single computer with (multiple) processor(s) and some specialized computer
resources (GPU, FPGA …)
► An arbitrary number of computers connected by a network;
► A combination of the above.
Parallel Computing: The computational
problem
► The computational problem usually demonstrates characteristics such as the
ability to be:
► Broken apart into discrete pieces of work that can be solved simultaneously;
► Executed as multiple program instructions at any moment in time;
► Solved in less time with multiple compute resources than with a single compute
resource.
Parallel Computing: what for? (3)
► Example applications include:

► parallel databases, data mining

► web search engines, web based business services

► computer-aided diagnosis in medicine

► advanced graphics and virtual reality, particularly in the entertainment


industry
► networked video and multi-media technologies
► Ultimately, parallel computing is an attempt to maximize the infinite
but seemingly scarce commodity called time.
Why Parallel Computing?

► This is a legitimate question! Parallel computing is complex in every aspect!

► The primary reasons for using parallel computing:


► Save time - wall clock time
► Solve larger problems
► Provide concurrency (do multiple things at the same time)
Limitations of Serial Computing

► Limits to serial computing - both physical and practical reasons pose


significant constraints to simply building ever faster serial computers.

► Transmission speeds - the speed of a serial computer is directly dependent


upon how fast data can move through hardware. Absolute limits are the speed
of light (30 cm/nanosecond) and the transmission limit of copper wire (9
cm/nanosecond). Increasing speeds necessitate increasing proximity of
processing elements.

► Economic limitations - it is increasingly expensive to make a single processor


faster. Using a larger number of moderately fast commodity processors to
achieve the same (or better) performance is less expensive.
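As a rough, illustrative calculation of the transmission-speed limit above: assuming a 3 GHz clock (period of roughly 0.33 ns), a signal travelling at 30 cm/ns can cover at most about 10 cm per clock cycle, and only about 3 cm through copper at 9 cm/ns, so processing elements that must exchange data every cycle have to sit within a few centimetres of each other.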
Flynn's Taxonomy

► The matrix below defines the 4 possible classifications according to Flynn
(PU = Processing Unit):

Flynn's Taxonomy
                         Single Data    Multiple Data
  Single Instruction     SISD           SIMD
  Multiple Instruction   MISD           MIMD
Single Instruction, Single Data (SISD)
► A serial (non-parallel) computer
► Single instruction: only one instruction stream is being
acted on by the CPU during any one clock cycle
► Single data: only one data stream is being used as
input during any one clock cycle
► Deterministic execution
► This is the oldest and, until recently, the most
prevalent form of computer
► Examples: most PCs, single CPU workstations and
mainframes
Single Instruction,

Multiple Data (SIMD)
A type of parallel computer
► Single instruction: All processing units execute the same instruction at any
given clock cycle
► Multiple data: Each processing unit can operate on a different data element
► This type of machine typically has an instruction dispatcher, a very
high-bandwidth internal network, and a very large array of very small-capacity
instruction units.
► Best suited for specialized problems characterized by a high degree of
regularity, such as image processing.
► Synchronous (lockstep) and deterministic execution
► Two varieties: Processor Arrays and Vector Pipelines
► Examples:
► Processor Arrays: Connection Machine CM-2, Maspar MP-1, MP-2
► Vector Pipelines: IBM 9000, Cray C90, Fujitsu VP, NEC SX-2, Hitachi S820
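As a minimal sketch (not from the slides) of the SIMD idea on a modern CPU, the C code below uses x86 SSE intrinsics, assuming an SSE-capable processor and compiler: a single vector instruction applies the same addition to four data elements in lockstep.

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    /* Load four floats into 128-bit vector registers. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);

    /* Single instruction, multiple data: one vector add instruction
       processes all four pairs of elements in lockstep. */
    __m128 vc = _mm_add_ps(va, vb);

    _mm_storeu_ps(c, vc);
    printf("%f %f %f %f\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```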
Multiple Instruction, Single Data (MISD)
► A single data stream is fed into multiple processing units.
► Each processing unit operates on the data independently via
independent instruction streams.
► Few actual examples of this class of parallel computer have
ever existed. One is the experimental Carnegie-Mellon C.mmp
computer (1971).
► Some conceivable uses might be:
► multiple frequency filters operating on a single signal stream
► multiple cryptography algorithms attempting to crack a single
coded message.
Multiple Instruction, Multiple Data (MIMD)
► Currently, the most common type of parallel computer. Most modern computers fall into this category.
► Multiple Instruction: every processor may be executing a
different instruction stream
► Multiple Data: every processor may be working with a different
data stream
► Execution can be synchronous or asynchronous, deterministic
or non-deterministic
► Examples: most current supercomputers, networked parallel
computer "grids" and multi-processor SMP computers -
including some types of PCs.
Some General Parallel Terminology

► Task
► A logically discrete section of computational work. A task is typically a
program or program-like set of instructions that is executed by a
processor.
► Parallel Task
► A task that can be executed by multiple processors safely (yields correct
results)
► Serial Execution
► Execution of a program sequentially, one statement at a time. In the
simplest sense, this is what happens on a one processor machine.
However, virtually all parallel tasks will have sections of a parallel
program that must be executed serially.
Symmetric vs. Asymmetric Multiprocessing
Architecture [1/2]
Some General Parallel Terminology

► Parallel Execution
► Execution of a program by more than one task, with each task being able
to execute the same or different statement at the same moment in time.
► Shared Memory
► From a strictly hardware point of view, describes a computer architecture
where all processors have direct (usually bus based) access to common
physical memory. In a programming sense, it describes a model where
parallel tasks all have the same "picture" of memory and can directly
address and access the same logical memory locations regardless of where
the physical memory actually exists.
► Distributed Memory
► In hardware, refers to network based memory access for physical memory
that is not common. As a programming model, tasks can only logically
"see" local machine memory and must use communications to access
memory on other machines where other tasks are executing.
Some General Parallel Terminology

► Communications
► Parallel tasks typically need to exchange data. There are several ways this
can be accomplished, such as through a shared memory bus or over a
network, however the actual event of data exchange is commonly
referred to as communications regardless of the method employed.
► Synchronization
► The coordination of parallel tasks in real time, very often associated with
communications. Often implemented by establishing a synchronization
point within an application where a task may not proceed further until
another task(s) reaches the same or logically equivalent point.
► Synchronization usually involves waiting by at least one task, and can
therefore cause a parallel application's wall clock execution time to
increase.
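A minimal sketch of such a synchronization point in C with OpenMP (assuming an OpenMP-capable compiler; the thread count and printed phases are illustrative): the barrier forces every thread to wait, which is exactly the waiting that can increase wall-clock execution time.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(4)
    {
        int id = omp_get_thread_num();

        /* Phase 1: each thread does its own share of work. */
        printf("thread %d finished phase 1\n", id);

        /* Synchronization point: no thread proceeds past the barrier
           until every thread in the team has reached it. The waiting
           time here is part of the parallel overhead. */
        #pragma omp barrier

        /* Phase 2 can now safely assume phase 1 is complete everywhere. */
        printf("thread %d starting phase 2\n", id);
    }
    return 0;
}
```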
Some General Parallel Terminology

► Granularity
► In parallel computing, granularity is a qualitative measure of the ratio of
computation to communication.
► Coarse: relatively large amounts of computational work are done between
communication events
► Fine: relatively small amounts of computational work are done between
communication events
► Observed Speedup
► Observed speedup of a code which has been parallelized, defined as:
speedup = (wall-clock time of serial execution) / (wall-clock time of parallel execution)
► One of the simplest and most widely used indicators for a parallel program's
performance.
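For example (with purely illustrative numbers): if the serial code takes 120 seconds of wall-clock time and the parallelized code takes 15 seconds on 16 processors, the observed speedup is 120 / 15 = 8, well below the ideal speedup of 16 because of parallel overhead.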
Observed Speedup
Some General Parallel Terminology
► Parallel Overhead
► The amount of time required to coordinate parallel tasks, as opposed to
doing useful work. Parallel overhead can include factors such as:
► Task start-up time
► Synchronizations
► Data communications
► Software overhead imposed by parallel compilers, libraries, tools, operating
system, etc.
► Task termination time
► Massively Parallel
► Refers to the hardware that comprises a given parallel system - having
many processors. The meaning of many keeps increasing, but currently
BG/L pushes this number to 6 digits.
Some General Parallel Terminology

► Scalability
► Refers to a parallel system's (hardware and/or software) ability to demonstrate a
proportionate increase in parallel speedup with the addition of more processors.
Factors that contribute to scalability include:
► Hardware - particularly memory-cpu bandwidths and network communications
► Application algorithm
► Parallel overhead related
► Characteristics of your specific application and coding
Shared Memory
► Shared memory parallel computers vary widely, but generally have in common the ability for all processors to access all memory as global address space.

► Multiple processors can operate independently but share the same


memory resources.
► Changes in a memory location effected by one processor are visible
to all other processors.
► Shared memory machines can be divided into two main classes
based upon memory access times: UMA and NUMA.
Shared Memory : UMA vs. NUMA
► Uniform Memory Access (UMA):
► Most commonly represented today by Symmetric Multiprocessor (SMP) machines
► Identical processors
► Equal access and access times to memory
► Sometimes called CC-UMA - Cache Coherent UMA. Cache coherent means if one processor
updates a location in shared memory, all the other processors know about the update.
Cache coherency is accomplished at the hardware level.
► Non-Uniform Memory Access (NUMA):
► Often made by physically linking two or more SMPs
► One SMP can directly access memory of another SMP
► Not all processors have equal access time to all memories
► Memory access across link is slower
► If cache coherency is maintained, then may also be called CC-NUMA - Cache Coherent
NUMA
Cache Coherency
► Cache coherent means if one processor updates a location
in shared memory, all the other processors know about
the update.
► Cache coherency is accomplished at the hardware level.

An interconnection network on the chip is used to update


all caches using some protocol.
Shared Memory: Pro and Con

► Advantages
► Global address space provides a user-friendly programming perspective to
memory
► Data sharing between tasks is both fast and uniform due to the proximity
of memory to CPUs
► Disadvantages:
► Primary disadvantage is the lack of scalability between memory and CPUs.
Adding more CPUs can geometrically increase traffic on the shared
memory-CPU path, and for cache coherent systems, geometrically
increase traffic associated with cache/memory management.
► Programmer responsibility for synchronization constructs that ensure
"correct" access of global memory.
► Expense: it becomes increasingly difficult and expensive to design and
produce shared memory machines with ever increasing numbers of
processors.
Distributed Memory
► Like shared memory systems, distributed memory systems vary widely but share a common characteristic. Distributed memory systems require a communication network to connect inter-processor memory.
► Processors have their own local memory. Memory addresses in one processor
do not map to another processor, so there is no concept of global address
space across all processors.
► Because each processor has its own local memory, it operates independently.
Changes it makes to its local memory have no effect on the memory of other
processors. Hence, the concept of cache coherency does not apply.
► When a processor needs access to data in another processor, it is usually the
task of the programmer to explicitly define how and when data is
communicated. Synchronization between tasks is likewise the programmer's
responsibility.
► The network "fabric" used for data transfer varies widely, though it can be
as simple as Ethernet.
Distributed Memory: Pro and Con

► Advantages
► Memory is scalable with number of processors. Increase the number of
processors and the size of memory increases proportionately.
► Each processor can rapidly access its own memory without interference and
without the overhead incurred with trying to maintain cache coherency.
► Cost effectiveness: can use commodity, off-the-shelf processors and
networking.
► Disadvantages
► The programmer is responsible for many of the details associated with data
communication between processors.
► It may be difficult to map existing data structures, based on global memory, to
this memory organization.
► Non-uniform memory access (NUMA) times
Hybrid Distributed-Shared Memory
Summarizing a few of the key characteristics of shared and
distributed memory machines
Comparison of Shared and Distributed Memory Architectures

► Examples
  ► CC-UMA: SMPs, Sun Vexx, DEC/Compaq, SGI Challenge, IBM POWER3
  ► CC-NUMA: Bull NovaScale, SGI Origin, Sequent, HP Exemplar, DEC/Compaq, IBM POWER4 (MCM)
  ► Distributed: Cray T3E, Maspar, IBM SP2, IBM BlueGene
► Communications
  ► CC-UMA: MPI, Threads, OpenMP, shmem
  ► CC-NUMA: MPI, Threads, OpenMP, shmem
  ► Distributed: MPI
► Scalability
  ► CC-UMA: to 10s of processors
  ► CC-NUMA: to 100s of processors
  ► Distributed: to 1000s of processors
► Drawbacks
  ► CC-UMA: Memory-CPU bandwidth
  ► CC-NUMA: Memory-CPU bandwidth; non-uniform access times
  ► Distributed: System administration; programming is hard to develop and maintain
► Software Availability
  ► CC-UMA: many 1000s of ISVs
  ► CC-NUMA: many 1000s of ISVs
  ► Distributed: 100s of ISVs
Hybrid Distributed-Shared Memory
► The largest and fastest computers in the world today employ both
shared and distributed memory architectures.

► The shared memory component is usually a cache coherent SMP


machine. Processors on a given SMP can address that machine's
memory as global.
► The distributed memory component is the networking of multiple
SMPs. SMPs know only about their own memory - not the memory on
another SMP. Therefore, network communications are required to
move data from one SMP to another.
► Current trends seem to indicate that this type of memory architecture
will continue to prevail and increase at the high end of computing for
the foreseeable future.
► Advantages and Disadvantages: whatever is common to both shared
and distributed memory architectures.
* Implementable on shared memory or distributed memory
Data Parallel Model
► The data parallel model demonstrates the following characteristics:
► Most of the parallel work focuses on performing operations on a data set. The data set is typically
organized into a common structure, such as an array (1D-2D) or cube (3D).
► A set of tasks work collectively on the same data structure, however, each task works on a different
partition of the same data structure.

► Tasks perform the same operation on their partition of work, for example, "add 4 to every array
element".
► On shared memory architectures, all tasks may have access to the data structure through global
memory.
► On distributed memory architectures the data structure is split up and resides as "chunks" in the
local memory of each task.
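A minimal data-parallel sketch in C with OpenMP for the shared-memory case (assuming an OpenMP-capable compiler; the array size and schedule are illustrative): every thread performs the same operation, "add 4 to every array element", on a different partition of the shared array. On a distributed memory machine the same pattern would instead distribute chunks of the array across processes, as in the message passing sketch later in these notes.

```c
#include <stdio.h>
#include <omp.h>

#define N 16

int main(void) {
    int data[N];
    for (int i = 0; i < N; i++) data[i] = i;

    /* Same operation on every element; OpenMP gives each thread a
       different partition (chunk of indices) of the shared array. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++)
        data[i] += 4;

    for (int i = 0; i < N; i++) printf("%d ", data[i]);
    printf("\n");
    return 0;
}
```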
Programming Models
Parallel computing makes it possible to speed up computations a hundred, a thousand, or even tens of thousands of times. The practical difference between obtaining results in hours, rather than weeks or years, is substantial.

► There are several parallel programming models in common use:


► Shared Memory
► Threads

► Message Passing

► Parallel programming models exist as an abstraction above hardware and memory architectures.
Shared Memory Model
► In the shared-memory programming model, tasks share a common address space, which they
read and write asynchronously.

► An advantage of this model from the programmer's point of view is that the notion of data
"ownership" is lacking, so there is no need to specify explicitly the communication of data
between tasks.

► Program development is simple.


► However, various mechanisms (e.g. locks / semaphores) may be used to control access to the shared
memory.

► An important disadvantage in terms of performance is that it becomes more difficult to


understand and manage data locality.
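A minimal sketch in C with OpenMP of controlling access to shared memory (assuming an OpenMP-capable compiler; the counter and loop bound are illustrative): the critical section plays the role of the lock/semaphore mentioned above, serializing updates that would otherwise race.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    long shared_count = 0;   /* lives in the common address space */

    #pragma omp parallel for
    for (int i = 0; i < 100000; i++) {
        /* Without this, concurrent read-modify-write of shared_count
           would race and lose updates. */
        #pragma omp critical
        shared_count++;
    }

    printf("count = %ld\n", shared_count);  /* expected: 100000 */
    return 0;
}
```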
Threads Model Implementations
► From a programming perspective, threads implementations commonly comprise:
► A library of subroutines that are called from within parallel source code (Pthreads)
► C Language only
► Very explicit parallelism; requires significant programmer attention to detail.

► A set of compiler directives embedded in either serial or parallel source code (OpenMP is an API)
► Supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran, on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, HP-UX, Linux, macOS, and Windows.
► It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
► The programmer is responsible for determining all parallelism in both cases.
Threads Model
► In the threads model of parallel programming, a
single process can have multiple, concurrent
execution paths.

► Threads are commonly associated with shared


memory architectures and operating systems.
Threads Model

► Perhaps the simplest analogy that can be used to describe threads is the concept of a single program
that includes a number of subroutines:
► The main program a.out is scheduled to run by the native operating system. a.out loads and acquires all of the
necessary system and user resources to run.

► a.out performs some serial work, and then creates a number of tasks (threads) that can be scheduled and run by
the operating system concurrently.

► Each thread has local data, but also, shares the entire resources of a.out. This saves the overhead associated
with replicating a program's resources for each thread. Each thread also benefits from a global memory view
because it shares the memory space of a.out.

► A thread's work may best be described as a subroutine within the main program. Any thread can execute any
subroutine at the same time as other threads.

► Threads communicate with each other through global memory (updating address locations). This requires synchronization constructs to ensure that no two threads update the same global address at the same time.

► Threads can come and go, but a.out remains present to provide the necessary shared resources until the
application has completed.
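A minimal Pthreads sketch in C of the a.out analogy above (assuming a POSIX system; compile with something like gcc file.c -pthread; the thread count and variable names are illustrative): the main program does some serial work, spawns threads that share its global data, and uses a mutex so that no two threads update the shared total at the same time.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

/* Global data: shared by every thread, just as all threads share
   the resources of a.out. */
long total = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    long id = (long)arg;

    /* Each thread has its own local data ... */
    long local = (id + 1) * 100;

    /* ... but communicates through global memory, so the update
       must be synchronized. */
    pthread_mutex_lock(&lock);
    total += local;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {          /* "a.out" does some serial work ... */
    pthread_t tid[NUM_THREADS];

    /* ... then creates threads the OS schedules concurrently. */
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);

    printf("total = %ld\n", total);   /* 100+200+300+400 = 1000 */
    return 0;
}
```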
Message Passing Model
► The message passing model demonstrates the following characteristics:
► A set of tasks that use their own local memory during computation. Multiple tasks can reside on the same
physical machine as well as across an arbitrary number of machines.

► Tasks exchange data through communications by sending and receiving messages.

► Data transfer usually requires cooperative operations to be performed by each process. For example, a send
operation must have a matching receive operation.
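A minimal message-passing sketch in C with MPI (assuming an MPI installation; build and run with something like mpicc file.c and mpirun -np 2 ./a.out; the tag and payload are illustrative): each process uses only its own local memory, and rank 0's send is matched by rank 1's receive.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Cooperative operation: this send must be matched
           by a corresponding receive on rank 1. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
```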
Topics for discussion (in-depth with scenarios)
► What is parallel computing?
► Concurrency vs Parallelism
► Parallel data and Parallel data structures

► Task and Data Parallelism (both in Shared memory and Distributed Memory)
► NADRA Scenario for data parallelism
► Task parallelism scenario
► Distributed System: Single PC to Clusters to Data Center
► Individual PC → Racks → Cluster → Data centers
► How to compute total cores, storage and network capacity of a data center?
► Which problem is a parallel problem?
► Communication and Computations in terms of Granularity
► Multitasking and Multiprogramming
► Parallel System resources
► UMA, NUMA, and Cache Coherency
► What to consider before writing a parallel algorithm?
► Steps to follow to convert a serial program into a parallel one.
