Lecture 1 - Introduction To Parallel Computing
Introduction to Parallel and High Performance Computing
Topic Description
This topic introduces students to:
Motivating Parallelism
Scope of Parallel Computing
Parallel Computing Terminology
Parallel Task
◦ A task that can be executed safely by multiple processors, i.e., it yields correct results (see the sketch below).
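A minimal sketch of a parallel task, assuming C with POSIX threads (the lecture does not prescribe any library): squaring the elements of an array can be executed safely by multiple processors because each thread writes only its own disjoint slice, so the parallel run yields the same result as a serial one.

    /* Each thread squares a disjoint slice of the shared array,
     * so there are no data races and the result is correct. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 8
    #define NTHREADS 2

    static double data[N];

    typedef struct { int start, end; } slice_t;

    static void *square_slice(void *arg) {
        slice_t *s = (slice_t *)arg;
        for (int i = s->start; i < s->end; i++)
            data[i] = data[i] * data[i];  /* no two threads touch the same i */
        return NULL;
    }

    int main(void) {
        pthread_t threads[NTHREADS];
        slice_t slices[NTHREADS];
        for (int i = 0; i < N; i++) data[i] = i;

        for (int t = 0; t < NTHREADS; t++) {
            slices[t].start = t * (N / NTHREADS);
            slices[t].end = (t + 1) * (N / NTHREADS);
            pthread_create(&threads[t], NULL, square_slice, &slices[t]);
        }
        for (int t = 0; t < NTHREADS; t++)
            pthread_join(threads[t], NULL);

        for (int i = 0; i < N; i++) printf("%.0f ", data[i]);
        printf("\n");
        return 0;
    }

Compile with cc -pthread; the same code with NTHREADS set to 1 reproduces the serial result, which is what makes this task "parallel" in the sense above.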
Serial Execution
◦ Execution of a program sequentially, one statement at a time. In the simplest sense, this is what happens on a single-processor machine. However, virtually all parallel programs have sections that must be executed serially, as the sketch below illustrates.
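The sketch below (assuming C with OpenMP; the workload is illustrative, not from the lecture) shows both kinds of sections side by side: initialization and output must run serially on one thread, while only the summation loop runs in parallel.

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    static double a[N];

    int main(void) {
        /* Serial section: initialization runs one statement at a time. */
        for (int i = 0; i < N; i++) a[i] = i;

        /* Parallel section: loop iterations are divided among threads. */
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) sum += a[i];

        /* Serial section: a single thread reports the result. */
        printf("sum = %.0f\n", sum);
        return 0;
    }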
Shared Memory
◦ From a strictly hardware point of view, describes a computer architecture where all processors have direct (usually bus-based) access to common physical memory. In a programming sense, it describes a model where parallel tasks all have the same "picture" of memory and can directly address and access the same logical memory locations, regardless of where the physical memory actually exists (sketched below).
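A sketch of the shared-memory programming model, assuming C with OpenMP (the lecture names no specific API): every thread directly addresses the same logical array, with no explicit communication, regardless of which physical memory holds it.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        int counts[4] = {0};  /* one shared array, visible to all threads */

        #pragma omp parallel num_threads(4)
        {
            int tid = omp_get_thread_num();
            counts[tid] = tid * 10;  /* each thread writes its slot directly */
        }

        for (int t = 0; t < 4; t++)
            printf("thread %d wrote %d\n", t, counts[t]);
        return 0;
    }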
Distributed Memory
◦ In hardware, refers to network-based access to physical memory that is not common. As a programming model, tasks can only logically "see" local machine memory and must use communication to access memory on other machines where other tasks are executing (sketched below).
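By contrast, a distributed-memory sketch, assuming C with MPI (again, the lecture names no specific library): each rank "sees" only its local variables, so rank 0 must explicitly send a value before rank 1 can access it.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value;  /* local memory only; never shared between ranks */
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d via communication\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Build with mpicc and run with mpirun -np 2.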
Synchronization
◦ The coordination of parallel tasks in real time, very often associated with communication. Often implemented by establishing a synchronization point within an application, beyond which a task may not proceed until one or more other tasks reach the same or a logically equivalent point.
◦ Synchronization usually involves waiting by at least one task, and can therefore cause a parallel application's wall-clock execution time to increase (see the barrier sketch below).
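A sketch of such a synchronization point, assuming C with OpenMP's barrier construct: no thread may begin phase 2 until every thread has finished phase 1, so the fastest threads wait at the barrier, which is exactly how synchronization can lengthen wall-clock time.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        #pragma omp parallel num_threads(4)
        {
            int tid = omp_get_thread_num();
            printf("thread %d: phase 1 done\n", tid);

            #pragma omp barrier  /* synchronization point: all threads wait */

            printf("thread %d: phase 2 starts\n", tid);
        }
        return 0;
    }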
Observed Speedup
◦ Observed speedup of a code that has been parallelized, defined as:

  speedup = (wall-clock time of serial execution) / (wall-clock time of parallel execution)
◦ One of the simplest and most widely used indicators for a parallel
program's performance.
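As a worked example with hypothetical numbers, a code that takes 120 s serially and 20 s in parallel has an observed speedup of 120 / 20 = 6. The sketch below (assuming C with OpenMP and an illustrative summation workload) measures both wall-clock times with omp_get_wtime and reports their ratio.

    #include <omp.h>
    #include <stdio.h>

    #define N 100000000L

    int main(void) {
        double t0, t_serial, t_parallel;
        double sum1 = 0.0, sum2 = 0.0;

        t0 = omp_get_wtime();  /* wall-clock time of serial execution */
        for (long i = 0; i < N; i++) sum1 += (double)i;
        t_serial = omp_get_wtime() - t0;

        t0 = omp_get_wtime();  /* wall-clock time of parallel execution */
        #pragma omp parallel for reduction(+:sum2)
        for (long i = 0; i < N; i++) sum2 += (double)i;
        t_parallel = omp_get_wtime() - t0;

        /* Print the sums so the compiler cannot discard either loop. */
        printf("sums: %.0f %.0f\n", sum1, sum2);
        printf("observed speedup = %.2f\n", t_serial / t_parallel);
        return 0;
    }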
Massively Parallel
◦ Refers to the hardware that comprises a given parallel system: one with many processors. The meaning of "many" keeps increasing, but currently IBM's Blue Gene/L (BG/L) pushes this number to six digits.
Lecture 2 - Parallel Platforms (Part 1)