CMP 252 - Parallelism Fundamentals
Text:
Introduction to Parallel Computing
Author: Blaise Barney, Lawrence Livermore National Laboratory
Table of Contents
   1. Abstract
   2. Overview
         1. What is Parallel Computing?
         2. Why Use Parallel Computing?
         3. Who is Using Parallel Computing?
   3. Concepts and Terminology
         1. von Neumann Computer Architecture
         2. Flynn's Classical Taxonomy
         3. Some General Parallel Terminology
         4. Limits and Costs of Parallel Programming
   4. Parallel Computer Memory Architectures
         1. Shared Memory
         2. Distributed Memory
         3. Hybrid Distributed-Shared Memory
   5. Parallel Programming Models
         1. Overview
         2. Shared Memory Model
         3. Threads Model
         4. Distributed Memory / Message Passing Model
         5. Data Parallel Model
         6. Hybrid Model
          7. SPMD and MPMD
   6. Designing Parallel Programs
         1. Automatic vs. Manual Parallelization
         2. Understand the Problem and the Program
         3. Partitioning
         4. Communications
         5. Synchronization
         6. Data Dependencies
         7. Load Balancing
         8. Granularity
         9. I/O
         10. Debugging
         11. Performance Analysis and Tuning
   7. Parallel Examples
         1. Array Processing
         2. PI Calculation
         3. Simple Heat Equation
         4. 1-D Wave Equation
   8. References and More Information
Abstract
This tutorial is intended to provide only a very quick overview of the extensive and broad topic of
Parallel Computing. As such, it covers just the very basics of parallel computing, and is intended for
someone who is just becoming acquainted with the subject and who is planning to attend one or more
other tutorials. It is not intended to cover Parallel Programming in depth, as this would require
significantly more time. The tutorial begins with a discussion on parallel computing - what it is and how
it's used, followed by a discussion on concepts and terminology associated with parallel computing.
The topics of parallel memory architectures and programming models are then explored. These topics
are followed by a series of practical discussions on a number of the complex issues related to
designing and running parallel programs. The tutorial concludes with several examples of how to
parallelize simple serial programs.
Overview
What is Parallel Computing?
Parallel Computing:
   In the simplest sense, parallel computing is the simultaneous use of multiple computing
    resources to solve a computational problem:
        o A problem is broken into discrete parts that can be solved concurrently
        o Each part is further broken down to a series of instructions
        o Instructions from each part execute simultaneously on different processors
        o An overall control/coordination mechanism is employed
Parallel Computers:
   Virtually all stand-alone computers today are parallel from a hardware perspective:
        o Multiple functional units (L1 cache, L2 cache, branch, prefetch, decode, floating-point,
            graphics processing (GPU), integer, etc.)
        o Multiple execution units/cores
        o Multiple hardware threads
IBM BG/Q Compute Chip with 18 cores (PU) and 16 L2 Cache units (L2)
   Networks connect multiple stand-alone computers (nodes) to make larger parallel computer
    clusters.
    For example, a typical LLNL parallel computer cluster is organized as follows:
       o Each compute node is a multi-processor parallel computer in itself
       o Multiple compute nodes are networked together with an Infiniband network
       o Special purpose nodes, also multi-processor, are used for other purposes
   The majority of the world's large parallel computers (supercomputers) are clusters of hardware
    produced by a handful of (mostly) well known vendors.
Source: Top500.org
Why Use Parallel Computing?
Main Reasons:
   PROVIDE CONCURRENCY:
      o A single computing resource can only do one thing at a time. Multiple computing resources
        can do many things simultaneously.
      o Example: Collaborative Networks provide a global venue where people from around the
        world can meet and conduct work "virtually".
The Future:
   During the past 20+ years, the trends indicated by ever faster networks, distributed systems, and
    multi-processor computer architectures (even at the desktop level) clearly show that parallelism is the
    future of computing.
   In this same time period, there has been a greater than 500,000x increase in supercomputer
    performance, with no end currently in sight.
    Source: Top500.org
Who is Using Parallel Computing?
    Historically, parallel computing has been considered to be "the high end of computing", and has been
     used to model difficult problems in many areas of science and engineering.
    Today, commercial applications provide an equal or greater driving force in the development of faster
     computers. These applications require the processing of large amounts of data in sophisticated ways.
Global Applications:
   Parallel computing is now being used extensively around the world, in a wide variety of
    applications.
    Source: Top500.org
Flynn's Classical Taxonomy
    Single Instruction, Single Data (SISD): a serial (non-parallel) computer. Example machines: the
     PDP1, the CDC 7600, and a single-core laptop.
    Single Instruction, Multiple Data (SIMD): all processing units execute the same instruction on
     different data elements. Most modern computers, particularly those with graphics processor units
     (GPUs), employ SIMD instructions and execution units. Example machines: ILLIAC IV, MasPar.
 
    Multiple Instruction, Multiple Data (MIMD): every processor may execute a different instruction
     stream on a different data stream. Examples: most current supercomputers, networked parallel
     computer clusters and "grids", multi-processor SMP computers, multi-core PCs.
    Note: many MIMD architectures also include SIMD execution sub-components.
Some General Parallel Terminology
Task
A logically discrete section of computational work. A task is typically a program or program-like
set of instructions that is executed by a processor. A parallel program consists of multiple tasks
running on multiple processors.
Pipelining
Breaking a task into steps performed by different processor units, with inputs streaming through,
much like an assembly line; a type of parallel computing.
Shared Memory
From a strictly hardware point of view, describes a computer architecture where all processors
have direct (usually bus based) access to common physical memory. In a programming sense, it
describes a model where parallel tasks all have the same "picture" of memory and can directly
address and access the same logical memory locations regardless of where the physical
memory actually exists.
Symmetric Multi-Processor (SMP)
Shared memory hardware architecture where multiple processors share a single address space
and have equal access to all resources.
Distributed Memory
In hardware, refers to network based memory access for physical memory that is not common.
As a programming model, tasks can only logically "see" local machine memory and must use
communications to access memory on other machines where other tasks are executing.
Communications
Parallel tasks typically need to exchange data. There are several ways this can be
accomplished, such as through a shared memory bus or over a network; however, the actual
event of data exchange is commonly referred to as communications regardless of the method
employed.
Synchronization
The coordination of parallel tasks in real time, very often associated with communications. Often
implemented by establishing a synchronization point within an application where a task may not
proceed further until another task(s) reaches the same or logically equivalent point.
Synchronization usually involves waiting by at least one task, and can therefore cause a parallel
application's wall clock execution time to increase.
Granularity
In parallel computing, granularity is a qualitative measure of the ratio of computation to
communication.
Observed Speedup
Observed speedup of a code which has been parallelized, defined as:
  wall-clock time of serial execution
  -----------------------------------
 wall-clock time of parallel execution
One of the simplest and most widely used indicators for a parallel program's performance.
Parallel Overhead
The amount of time required to coordinate parallel tasks, as opposed to doing useful work.
Parallel overhead can include factors such as:
   o Task start-up time
   o Synchronizations
   o Data communications
   o Software overhead imposed by parallel languages, libraries, operating system, etc.
   o Task termination time
Massively Parallel
Refers to the hardware that comprises a given parallel system - having many processing
elements. The meaning of "many" keeps increasing, but currently, the largest parallel computers
are comprised of processing elements numbering in the hundreds of thousands to millions.
Embarrassingly Parallel
Solving many similar, but independent tasks simultaneously; little to no need for coordination
between the tasks.
Scalability
Refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate
increase in parallel speedup with the addition of more resources. Factors that contribute to
scalability include the hardware (particularly memory-CPU and network bandwidths), the
application algorithm, and the associated parallel overhead.
Limits and Costs of Parallel Programming
Amdahl's Law:
      Amdahl's Law states that potential program speedup is defined by the fraction of code (P) that
       can be parallelized:
                   1
       speedup = ------
                 1 - P
     If none of the code can be parallelized, P = 0 and the speedup = 1 (no speedup).
     If all of the code is parallelized, P = 1 and the speedup is infinite (in theory).
     If 50% of the code can be parallelized, maximum speedup = 2, meaning the code will run twice as
      fast.
     Introducing the number of processors performing the parallel fraction of work, the relationship can
      be modeled by:
                    1
      speedup = -----------
                 P/N  +  S

      where P = parallel fraction, N = number of processors and S = serial fraction.
 It soon becomes obvious that there are limits to the scalability of parallelism. For example:
                                 speedup
                       --------------------------------
        N              P = .50      P = .90     P = .99
      -----            -------      -------     -------
          10              1.82         5.26        9.17
        100               1.98         9.17       50.25
      1,000               1.99         9.91       90.99
     10,000               1.99         9.91       99.02
    100,000               1.99         9.99       99.90
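    To make the arithmetic concrete, the following small C sketch (illustrative only) computes Amdahl
     speedups for the same values of P and N used in the table above:

    #include <stdio.h>

    /* Amdahl's Law: speedup = 1 / (P/N + S), where P is the parallel fraction,
       S = 1 - P is the serial fraction, and N is the number of processors
       performing the parallel portion of the work. */
    static double amdahl_speedup(double P, int N)
    {
        double S = 1.0 - P;
        return 1.0 / (P / N + S);
    }

    int main(void)
    {
        const double fractions[] = {0.50, 0.90, 0.99};
        const int    procs[]     = {10, 100, 1000, 10000, 100000};

        printf("%10s %10s %10s %10s\n", "N", "P=.50", "P=.90", "P=.99");
        for (int i = 0; i < 5; i++) {
            printf("%10d", procs[i]);
            for (int j = 0; j < 3; j++)
                printf(" %10.2f", amdahl_speedup(fractions[j], procs[i]));
            printf("\n");
        }
        return 0;
    }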
   However, certain problems demonstrate increased performance by increasing the problem size.
    For example:
     We can increase the problem size by doubling the grid dimensions and halving the time step.
     This results in four times the number of grid points and twice the number of time steps. The
     parallel (grid) portion of the work then grows while the serial portion stays roughly fixed, so
     the parallel fraction of the total run time increases.
   Problems that increase the percentage of parallel time with their size are more scalable than
    problems with a fixed percentage of parallel time.
Complexity:
   In general, parallel applications are much more complex than corresponding serial applications,
    perhaps an order of magnitude. Not only do you have multiple instruction streams executing at
    the same time, but you also have data flowing between them.
   The costs of complexity are measured in programmer time in virtually every aspect of the
    software development cycle:
        o Design
        o Coding
        o Debugging
        o Tuning
        o Maintenance
   Adhering to "good" software development practices is essential when working with parallel
    applications - especially if somebody besides you will have to work with the software.
Portability:
   Thanks to standardization in several APIs, such as MPI, POSIX threads, and OpenMP,
    portability issues with parallel programs are not as serious as in years past. However...
   All of the usual portability issues associated with serial programs apply to parallel programs. For
    example, if you use vendor "enhancements" to Fortran, C or C++, portability will be a problem.
   Even though standards exist for several APIs, implementations will differ in a number of details,
    sometimes to the point of requiring code modifications in order to effect portability.
   Operating systems can play a key role in code portability issues.
   Hardware architectures are characteristically highly variable and can affect portability.
Resource Requirements:
    The primary intent of parallel programming is to decrease execution wall clock time; however, in
     order to accomplish this, more CPU time is required. For example, a parallel code that runs in 1
     hour on 8 processors actually uses 8 hours of CPU time.
   The amount of memory required can be greater for parallel codes than serial codes, due to the
    need to replicate data and for overheads associated with parallel support libraries and
    subsystems.
   For short running parallel programs, there can actually be a decrease in performance compared
    to a similar serial implementation. The overhead costs associated with setting up the parallel
    environment, task creation, communications and task termination can comprise a significant
    portion of the total execution time for short runs.
Scalability:
    The ability of a parallel program's performance to scale is the result of a number of interrelated
     factors; simply adding more processors is rarely the answer.
Parallel Computer Memory Architectures
Shared Memory
 General Characteristics:
  Shared memory parallel computers vary widely, but generally have in common the ability for all
    processors to access all memory as global address space.
  Multiple processors can operate independently but share the same memory resources.
  Changes in a memory location effected by one processor are visible to all other processors.
  Historically, shared memory machines have been classified as UMA and NUMA, based upon
    memory access times.
Advantages:
    Global address space provides a user-friendly programming perspective to memory.
    Data sharing between tasks is both fast and uniform due to the proximity of memory to CPUs.
Disadvantages:
      Primary disadvantage is the lack of scalability between memory and CPUs. Adding more CPUs
       can geometrically increase traffic on the shared memory-CPU path and, for cache coherent
       systems, geometrically increase traffic associated with cache/memory management.
     Programmer responsibility for synchronization constructs that ensure "correct" access of global
      memory.
Distributed Memory
 General Characteristics:
     Like shared memory systems, distributed memory systems vary widely but share a common
      characteristic. Distributed memory systems require a communication network to connect inter-
      processor memory.
   Processors have their own local memory. Memory addresses in one processor do not map to
    another processor, so there is no concept of global address space across all processors.
   Because each processor has its own local memory, it operates independently. Changes it
    makes to its local memory have no effect on the memory of other processors. Hence, the
    concept of cache coherency does not apply.
   When a processor needs access to data in another processor, it is usually the task of the
    programmer to explicitly define how and when data is communicated. Synchronization between
    tasks is likewise the programmer's responsibility.
   The network "fabric" used for data transfer varies widely, though it can be as simple as Ethernet.
Advantages:
   Memory is scalable with the number of processors. Increase the number of processors and the
    size of memory increases proportionately.
   Each processor can rapidly access its own memory without interference and without the
    overhead incurred with trying to maintain global cache coherency.
   Cost effectiveness: can use commodity, off-the-shelf processors and networking.
Disadvantages:
   The programmer is responsible for many of the details associated with data communication
    between processors.
   It may be difficult to map existing data structures, based on global memory, to this memory
    organization.
   Non-uniform memory access times - data residing on a remote node takes longer to access than
    node local data.
Hybrid Distributed-Shared Memory
     The largest and fastest computers in the world today employ both shared and distributed
      memory architectures.
     The shared memory component can be a shared memory machine and/or graphics processing
      units (GPU).
     The distributed memory component is the networking of multiple shared memory/GPU
      machines, which know only about their own memory - not the memory on another machine.
      Therefore, network communications are required to move data from one machine to another.
     Current trends seem to indicate that this type of memory architecture will continue to prevail and
      increase at the high end of computing for the foreseeable future.
Parallel Programming Models
Overview
    There are several parallel programming models in common use:
         o Shared Memory (without threads)
         o Threads
         o Distributed Memory / Message Passing
         o Data Parallel
         o Hybrid
         o Single Program Multiple Data (SPMD)
         o Multiple Program Multiple Data (MPMD)
    Parallel programming models exist as an abstraction above hardware and memory
     architectures.
    Although it might not seem apparent, these models are NOT specific to a particular type of
     machine or memory architecture. In fact, any of these models can (theoretically) be implemented
     on any underlying hardware. Two examples from the past are discussed below.
    Which model to use? This is often a combination of what is available and personal choice.
     There is no "best" model, although there certainly are better implementations of some models
     over others.
    The following sections describe each of the models mentioned above, and also discuss some of
     their actual implementations.
Shared Memory Model (without threads)
    In this programming model, processes/tasks share a common address space, which they read and
     write to asynchronously. Mechanisms such as locks and semaphores are used to control access to
     the shared memory.
Implementations:
    On stand-alone shared memory machines, native operating systems, compilers and/or hardware
     provide support for shared memory programming. For example, the POSIX standard provides an
     API for using shared memory, and UNIX provides shared memory segments (shmget, shmat,
     shmctl, etc.); a small sketch follows this list.
    On distributed memory machines, memory is physically distributed across a network of
     machines, but made global through specialized hardware and software. A variety of SHMEM
     implementations are available: http://en.wikipedia.org/wiki/SHMEM.
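    As a minimal illustration of the UNIX shared memory segment calls named above (shmget, shmat,
     shmctl), the following C sketch lets a parent and a forked child process communicate through a
     single shared segment:

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create a private 4 KB shared memory segment. */
        int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        if (shmid < 0) { perror("shmget"); return 1; }

        char *mem = shmat(shmid, NULL, 0);   /* attach it to our address space */

        if (fork() == 0) {                   /* child: write into shared memory */
            strcpy(mem, "hello from the child process");
            return 0;
        }

        wait(NULL);                          /* parent: wait, then read the data */
        printf("parent read: %s\n", mem);

        shmdt(mem);                          /* detach and remove the segment */
        shmctl(shmid, IPC_RMID, NULL);
        return 0;
    }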
 
Threads Model
    This programming model is a type of shared memory programming.
    In the threads model of parallel programming, a single "heavy weight" process can have multiple
     "light weight", concurrent execution paths.
     For example:
          o The main program a.out is scheduled to run by the native operating system. a.out loads
             and acquires all of the necessary system and user resources to run. This is the "heavy
             weight" process.
          o a.out performs some serial work, and then creates a number of tasks (threads) that can
             be scheduled and run by the operating system concurrently.
          o Each thread has local data, but also shares the entire resources of a.out. This saves the
             overhead associated with replicating a program's resources for each thread ("light
             weight"). Each thread also benefits from a global memory view because it shares the
             memory space of a.out.
          o A thread's work may best be described as a subroutine within the main program. Any
             thread can execute any subroutine at the same time as other threads.
          o Threads communicate with each other through global memory (updating address
             locations). This requires synchronization constructs to ensure that no more than one
             thread is updating the same global address at any time.
          o Threads can come and go, but a.out remains present to provide the necessary shared
             resources until the application has completed.
Implementations:
     In both of the implementations discussed below (POSIX Threads and OpenMP), the programmer
      is responsible for determining the parallelism (although compilers can sometimes help).
     Threaded implementations are not new in computing. Historically, hardware vendors have
      implemented their own proprietary versions of threads. These implementations differed
      substantially from each other making it difficult for programmers to develop portable threaded
      applications.
     Unrelated standardization efforts have resulted in two very different implementations of threads:
      POSIX Threads and OpenMP.
     POSIX Threads
         o Specified by the IEEE POSIX 1003.1c standard (1995). C Language only.
         o Part of Unix/Linux operating systems
         o Library based
         o Commonly referred to as Pthreads.
         o Very explicit parallelism; requires significant programmer attention to detail.
     OpenMP
         o Industry standard, jointly defined and endorsed by a group of major computer hardware
             and software vendors, organizations and individuals.
         o Compiler directive based
         o Portable / multi-platform, including Unix and Windows platforms
         o Available in C/C++ and Fortran implementations
          o Can be very easy and simple to use - provides for "incremental parallelism". Can begin
              with serial code (a short sketch follows this list).
      Other threaded implementations exist, such as Microsoft threads, Java and Python threads, and
       CUDA threads for GPUs.
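      A minimal OpenMP sketch of this "incremental parallelism": one compiler directive parallelizes
       an existing serial loop (the array size and values here are arbitrary). Compile with, e.g.,
       gcc -fopenmp:

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1000000;
        static double a[1000000];
        double sum = 0.0;

        for (int i = 0; i < n; i++)          /* serial initialization */
            a[i] = 0.001 * i;

        /* One directive parallelizes the loop; the reduction clause gives each
           thread a private partial sum that is combined when the loop ends. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += a[i];

        printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
        return 0;
    }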
More Information:
Distributed Memory / Message Passing Model
     This model demonstrates the following characteristics:
          o A set of tasks that use their own local memory during computation. Multiple tasks can
             reside on the same physical machine and/or across an arbitrary number of machines.
          o Tasks exchange data through communications by sending and receiving messages.
          o Data transfer usually requires cooperative operations to be performed by each process;
             for example, a send operation must have a matching receive operation.
Implementations:
     From a programming perspective, message passing implementations usually comprise a library of
      subroutines embedded in source code. The programmer is responsible for determining all
      parallelism.
     The MPI (Message Passing Interface) specification is the de facto industry standard for message
      passing; implementations exist for virtually all popular parallel computing platforms.
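     A minimal MPI sketch of the message passing model: two tasks, each with its own local data,
      exchange a value with explicit send and receive calls. Compile with mpicc and run with
      mpirun -np 2:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* send to task 1 */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                          /* receive from task 0 */
            printf("task 1 received %d from task 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }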
Data Parallel Model
     The data parallel model, also referred to as the Partitioned Global Address Space (PGAS) model,
      treats the address space globally: a set of tasks work collectively on the same data structure,
      with each task working on a different partition of it.
Implementations:
    Currently, there are several relatively popular, and sometimes developmental, parallel
     programming implementations based on the Data Parallel / PGAS model.
    Coarray Fortran: a small set of extensions to Fortran 95 for SPMD parallel programming.
     Compiler dependent. More information: https://en.wikipedia.org/wiki/Coarray_Fortran
    Unified Parallel C (UPC): an extension to the C programming language for SPMD parallel
     programming. Compiler dependent. More information: http://upc.lbl.gov/
    Global Arrays: provides a shared memory style programming environment in the context of
     distributed array data structures. Public domain library with C and Fortran77 bindings. More
     information: https://en.wikipedia.org/wiki/Global_Arrays
    X10: a PGAS based parallel programming language being developed by IBM at the Thomas J.
     Watson Research Center. More information: http://x10-lang.org/
    Chapel: an open source parallel programming language project being led by Cray. More
     information: http://chapel.cray.com/
Hybrid Model
    A hybrid model combines more than one of the previously described programming models.
    Currently, a common example of a hybrid model is the combination of the message passing model (MPI)
     with the threads model (OpenMP).
        o Threads perform computationally intensive kernels using local, on-node data
        o Communications between processes on different nodes occurs over the network using MPI
     This hybrid model lends itself well to the most popular (currently) hardware environment of clustered
      multi/many-core machines (a short sketch appears at the end of this section).
    Another similar and increasingly popular example of a hybrid model is using MPI with CPU-GPU
     (Graphics Processing Unit) programming.
        o MPI tasks run on CPUs using local memory and communicating with each other over a network.
        o Computationally intensive kernels are off-loaded to GPUs on-node.
        o Data exchange between node-local memory and GPUs uses CUDA (or something equivalent).
    Other hybrid models are common:
        o MPI with Pthreads
        o MPI with non-GPU accelerators
        o ...
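     A minimal sketch of the MPI + OpenMP combination described above: OpenMP threads perform the
      on-node computation (here just a placeholder loop), and MPI combines results across nodes.
      Compile with, e.g., mpicc -fopenmp:

    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char *argv[])
    {
        int rank;
        double local = 0.0, global = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Threads handle the computationally intensive kernel on local, on-node data. */
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0;

        /* Communication between processes on different nodes goes over the network via MPI. */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %f\n", global);

        MPI_Finalize();
        return 0;
    }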
Designing Parallel Programs
Understand the Problem and the Program
     Some problems have little or no parallelism. For example, calculation of the Fibonacci series
      F(n) = F(n-1) + F(n-2):
          o The calculation of the F(n) value uses those of both F(n-1) and F(n-2), which must be
              computed first.
Partitioning
     One of the first steps in designing a parallel program is to break the problem into discrete
      "chunks" of work that can be distributed to multiple tasks. This is known as decomposition or
      partitioning.
     There are two basic ways to partition computational work among parallel tasks: domain
      decomposition and functional decomposition.
Domain Decomposition:
   In this type of partitioning, the data associated with a problem is decomposed. Each parallel task
    then works on a portion of the data.
Functional Decomposition:
   In this approach, the focus is on the computation that is to be performed rather than on the data
    manipulated by the computation. The problem is decomposed according to the work that must
    be done. Each task then performs a portion of the overall work.
   Functional decomposition lends itself well to problems that can be split into different tasks. For
    example:
    Ecosystem Modeling
    Each program calculates the population of a given group, where each group's growth depends
    on that of its neighbors. As time progresses, each process calculates its current state, then
    exchanges information with the neighbor populations. All tasks then progress to calculate the
    state at the next time step.
    Signal Processing
    An audio signal data set is passed through four distinct computational filters. Each filter is a
    separate process. The first segment of data must pass through the first filter before progressing
    to the second. When it does, the second segment of data passes through the first filter. By the
    time the fourth segment of data is in the first filter, all four tasks are busy.
     Climate Modeling
     Each model component can be thought of as a separate task. Arrows represent exchanges of
     data between components during computation: the atmosphere model generates wind velocity
     data that are used by the ocean model, the ocean model generates sea surface temperature
     data that are used by the atmosphere model, and so on.
Communications
 Who Needs Communications?
The need for communications between tasks depends upon your problem: some problems can be
decomposed into tasks that share virtually no data (embarrassingly parallel), while most parallel
applications require tasks to share data with each other.
Factors to Consider:
    There are a number of important factors to consider when designing your program's inter-task
    communications:
   Cost of communications
       o Inter-task communication virtually always implies overhead.
       o Machine cycles and resources that could be used for computation are instead used to
           package and transmit data.
       o Communications frequently require some type of synchronization between tasks, which
           can result in tasks spending time "waiting" instead of doing work.
       o Competing communication traffic can saturate the available network bandwidth, further
           aggravating performance problems.
   Latency vs. Bandwidth
       o latency is the time it takes to send a minimal (0 byte) message from point A to point B.
           Commonly expressed as microseconds.
       o bandwidth is the amount of data that can be communicated per unit of time. Commonly
           expressed as megabytes/sec or gigabytes/sec.
       o Sending many small messages can cause latency to dominate communication
           overheads. Often it is more efficient to package small messages into a larger message,
           thus increasing the effective communications bandwidth.
   Visibility of communications
       o With the Message Passing Model, communications are explicit and generally quite visible
           and under the control of the programmer.
       o With the Data Parallel Model, communications often occur transparently to the
           programmer, particularly on distributed memory architectures. The programmer may not
           even be able to know exactly how inter-task communications are being accomplished.
   Synchronous vs. asynchronous communications
       o Synchronous communications require some type of "handshaking" between tasks that are
           sharing data. This can be explicitly structured in code by the programmer, or it may
           happen at a lower level unknown to the programmer.
       o Synchronous communications are often referred to as blocking communications since
           other work must wait until the communications have completed.
       o Asynchronous communications allow tasks to transfer data independently from one
           another. For example, task 1 can prepare and send a message to task 2, and then
           immediately begin doing other work. When task 2 actually receives the data doesn't
           matter.
       o Asynchronous communications are often referred to as non-blocking communications
           since other work can be done while the communications are taking place.
        o Interleaving computation with communication is the single greatest benefit of using
            asynchronous communications (a short non-blocking MPI sketch follows this list).
   Scope of communications
       o Knowing which tasks must communicate with each other is critical during the design
           stage of a parallel code. Both of the two scopings described below can be implemented
           synchronously or asynchronously.
       o Point-to-point - involves two tasks with one task acting as the sender/producer of data,
           and the other acting as the receiver/consumer.
        o   Collective - involves data sharing between more than two tasks, which are often
            specified as being members in a common group, or collective. Common variations
            include broadcast, scatter, gather and reduction operations (there are more).
   Efficiency of communications
        o Very often, the programmer will have a choice with regard to factors that can affect
           communications performance. Only a few are mentioned here.
        o Which implementation for a given model should be used? Using the Message Passing
           Model as an example, one MPI implementation may be faster on a given hardware
           platform than another.
        o What type of communication operations should be used? As mentioned previously,
           asynchronous communication operations can improve overall program performance.
        o Network media - some platforms may offer more than one network for communications.
           Which one is best?
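    A minimal sketch of asynchronous (non-blocking) communication with MPI: a task starts a
     transfer, overlaps it with other work, and waits for completion later. Run with -np 2:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, data = 7, recvbuf = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            /* ... other useful computation can overlap the transfer here ... */
            MPI_Wait(&req, MPI_STATUS_IGNORE);       /* ensure the send completed */
        } else if (rank == 1) {
            MPI_Irecv(&recvbuf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
            /* ... other useful computation ... */
            MPI_Wait(&req, MPI_STATUS_IGNORE);       /* ensure the data has arrived */
            printf("task 1 received %d\n", recvbuf);
        }

        MPI_Finalize();
        return 0;
    }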
Synchronization
    Managing the sequence of work and the tasks
     performing it is a critical design consideration for
     most parallel programs.
    Can be a significant factor in program performance
     (or lack of it)
    Often requires "serialization" of segments of the
     program.
Types of Synchronization:
    Barrier
        o Usually implies that all tasks are involved
        o Each task performs its work until it reaches the barrier. It then stops, or "blocks".
        o When the last task reaches the barrier, all tasks are synchronized.
        o What happens from here varies. Often, a serial section of work must be done. In other
           cases, the tasks are automatically released to continue their work.
    Lock / semaphore
        o Can involve any number of tasks
         o Typically used to serialize (protect) access to global data or a section of code. Only one
            task at a time may use (own) the lock / semaphore / flag (a short Pthreads sketch
            follows this list).
         o The first task to acquire the lock "sets" it. This task can then safely (serially) access the
           protected data or code.
        o Other tasks can attempt to acquire the lock but must wait until the task that owns the lock
           releases it.
        o Can be blocking or non-blocking
     Synchronous communication operations
        o Involves only those tasks executing a communication operation
        o When a task performs a communication operation, some form of coordination is required
           with the other task(s) participating in the communication. For example, before a task can
           perform a send operation, it must first receive an acknowledgment from the receiving task
           that it is OK to send.
        o Discussed previously in the Communications section.
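     A minimal POSIX Threads sketch of the lock idea above: a mutex serializes updates to a shared
      counter so that only one thread at a time modifies it. Compile with, e.g., gcc -pthread:

    #include <stdio.h>
    #include <pthread.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *work(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);     /* acquire ("set") the lock */
            counter++;                     /* safely access the protected data */
            pthread_mutex_unlock(&lock);   /* release it for the other tasks */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, work, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld\n", counter);   /* always 400000 with the lock in place */
        return 0;
    }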
Data Dependencies
 Definition:
    A dependence exists between program statements when the order of statement execution affects
     the results of the program. A data dependence results from multiple uses of the same location(s)
     in storage by different tasks.
    Dependencies are important to parallel programming because they are one of the primary
     inhibitors to parallelism.
Examples:
      DO 500 J = MYSTART,MYEND
         A(J) = A(J-1) * 2.0
      500 CONTINUE
      The value of A(J-1) must be computed before the value of A(J), therefore A(J) exhibits a data
      dependency on A(J-1). Parallelism is inhibited.
      If Task 2 has A(J) and task 1 has A(J-1), computing the correct value of A(J) necessitates:
        o Distributed memory architecture - task 2 must obtain the value of A(J-1) from task 1 after
          task 1 finishes its computation
        o Shared memory architecture - task 2 must read A(J-1) after task 1 updates it
     task 1             task 2
     ------             ------
     X = 2              X = 4
       .                  .
       .                  .
     Y = X**2           Y = X**3
As with the previous example, parallelism is inhibited. The value of Y depends on whether or when
the value of X is communicated between the tasks (distributed memory architecture), or on which
task last stores the value of X (shared memory architecture).
Load Balancing
    Load balancing refers to the practice of distributing approximately equal amounts of work among
     tasks so that all tasks are kept busy all of the time. It can be considered a minimization of task
     idle time.
    Load balancing is important to parallel programs for performance reasons. For example, if all
     tasks are subject to a barrier synchronization point, the slowest task will determine the overall
     performance.
Granularity
 Computation / Communication Ratio:
    Periods of computation are typically separated from periods of communication by synchronization
     events.
Fine-grain Parallelism:
    Relatively small amounts of computational work are done between communication events (a low
     computation to communication ratio). This facilitates load balancing but implies high
     communication overhead.
Coarse-grain Parallelism:
    Relatively large amounts of computational work are done between communication /
     synchronization events (a high computation to communication ratio). This offers more
     opportunity for performance increase but is harder to load balance efficiently.
Which is Best?
    The most efficient granularity depends on the algorithm and the hardware environment. In most
     cases the overhead of communication and synchronization is high relative to execution speed,
     so coarse granularity is usually advantageous.
I/O
 The Bad News:
    I/O that must be conducted over the network (NFS, non-local) can cause severe bottlenecks and
     even crash file servers.
Debugging
    Debugging parallel codes can be incredibly difficult, particularly as codes scale upwards.
    The good news is that there are some excellent debuggers available to assist:
        o Threaded - pthreads and OpenMP
        o    MPI
        o    GPU / accelerator
        o    Hybrid
    Livermore Computing users have access to several parallel debugging tools installed on LC's
     clusters:
         o TotalView from RogueWave Software
         o DDT from Allinea
         o Inspector from Intel
         o Stack Trace Analysis Tool (STAT) - locally developed
    All of these tools have a learning curve associated with them - some more than others.
    For details and getting started information, see:
         o LC's "Supported Software and Computing Tools" web pages at
             https://computing.llnl.gov/?set=code&page=software_tools
         o TotalView tutorial: https://computing.llnl.gov/tutorials/totalview/
Performance Analysis and Tuning
     As with debugging, analyzing and tuning parallel program performance can be much more
      challenging than for serial programs. A number of parallel tools for execution monitoring and
      program analysis are available, for example:
o   mpitrace https://computing.llnl.gov/tutorials/bgq/index.html#mpitrace
o   mpiP: http://mpip.sourceforge.net/
o   memP: http://memp.sourceforge.net/
Parallel Examples
Array Processing
     This example demonstrates calculations on 2-dimensional
      array elements; a function is evaluated on each array
      element.
     The computation on each array element is independent
      from other array elements.
     The problem is computationally intensive.
     The serial program calculates one element at a time in
      sequential order.
     Serial code could be of the form:
      do j = 1,n
        do i = 1,n
          a(i,j) = fcn(i,j)
        end do
      end do
     Questions to ask:
         o Is this problem able to be parallelized?
         o How would the problem be partitioned?
         o Are communications needed?
         o Are there any data dependencies?
         o Are there synchronization needs?
         o Will load balancing be a concern?
Array Processing
Parallel Solution 1
     The calculation of elements is independent of one
      another - leads to an embarrassingly parallel
      solution.
     Array elements are evenly distributed so that each
      process owns a portion of the array (subarray).
         o Distribution scheme is chosen for efficient
             memory access; e.g. unit stride (stride of 1)
             through the subarrays. Unit stride
             maximizes cache/memory usage.
       o    Since it is desirable to have unit stride through the subarrays, the choice of a distribution
            scheme depends on the programming language. See the Block - Cyclic Distributions
            Diagram for the options.
   Independent calculation of array elements ensures there is no need for communication or
    synchronization between tasks.
   Since the amount of work is evenly distributed across processes, there should not be load
    balance concerns.
   After the array is distributed, each task executes the portion of the loop corresponding to the
    data it owns.
     For example, with a Fortran (column-major) block distribution each task loops over its own block
     of columns, while with a C (row-major) block distribution each task loops over its own block of
     rows. Notice that only the outer loop bounds differ from the serial solution.
   Implement as a Single Program Multiple Data (SPMD) model - every task executes the same
    program.
   Master process initializes array, sends info to worker processes and receives results.
   Worker process receives info, performs its share of computation and sends results to master.
   Using the Fortran storage scheme, perform block distribution of the array.
    Pseudo code solution:
      if I am MASTER
        initialize the array
        send each WORKER info on the part of the array it owns and its portion of the initial array
        receive results from each WORKER
      else if I am WORKER
        receive from MASTER info on part of array I own
        receive from MASTER my portion of initial array
        calculate my portion of the array: a(i,j) = fcn(i,j)
        send MASTER my results
      endif
Example Programs:
     MPI Program in C:
     MPI Program in Fortran:
Array Processing
Parallel Solution 2: Pool of Tasks
     The previous array solution demonstrated static load balancing:
          o Each task has a fixed amount of work to do
           o May be significant idle time for faster or more lightly loaded processors - the slowest
               task determines overall performance.
     Static load balancing is not usually a major concern if all tasks are performing the same amount
      of work on identical machines.
     If you have a load balance problem (some tasks work faster than others), you may benefit by
      using a "pool of tasks" scheme.
Pool of Tasks Scheme:
     Two processes are employed: a master process, which holds the pool of tasks, sends a worker a
      task when requested, and collects results; and worker processes, which repeatedly get a task
      from the master, perform the computation, and send the results back.
     Worker processes do not know before runtime which portion of array they will handle or how
      many tasks they will perform.
     Dynamic load balancing occurs at run time: the faster tasks will get more work to do.
     Pseudo code solution:
      if I am MASTER
        do until no more jobs: send the next job to any requesting WORKER and receive results
        tell WORKERS there are no more jobs
      else if I am WORKER
        do until told there are no more jobs: request a job from MASTER, compute it, send results to MASTER
      endif
Discussion:
     In the above pool of tasks example, each task calculated an individual array element as a job.
      The computation to communication ratio is finely granular.
     Finely granular solutions incur more communication overhead in order to reduce task idle time.
     A better solution might be to distribute more work with each job. The "right" amount of
      work is problem dependent.
Parallel Examples
PI Calculation
     The value of PI can be approximated with a Monte Carlo method: randomly generate points in a
      unit square, count how many fall inside the inscribed quarter circle, and take
      PI ~= 4 * (points inside circle / total points). Serial pseudo code:
     npoints = 10000
     circle_count = 0
     do j = 1,npoints
       generate 2 random numbers between 0 and 1
       xcoordinate = random1
       ycoordinate = random2
       if (xcoordinate, ycoordinate) inside circle
       then circle_count = circle_count + 1
     end do
     PI = 4.0*circle_count/npoints
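     A runnable C version of this serial sketch, using rand() purely for illustration:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const int npoints = 10000;
        int circle_count = 0;

        for (int j = 0; j < npoints; j++) {
            double x = (double)rand() / RAND_MAX;   /* random coordinate in [0,1] */
            double y = (double)rand() / RAND_MAX;
            if (x * x + y * y <= 1.0)               /* inside the quarter circle? */
                circle_count++;
        }

        printf("PI is approximately %f\n", 4.0 * circle_count / npoints);
        return 0;
    }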
PI Calculation
Parallel Solution
     Another problem that's easy to parallelize:
         o All point calculations are independent; no data dependencies
         o Work can be evenly divided; no load balance concerns
         o No need for communication or synchronization between tasks
     Parallel strategy:
         o Divide the loop into equal portions that can be executed by the pool of tasks
         o Each task independently performs its work
         o A SPMD model is used
         o One task acts as the master to collect results and compute the value of PI
      Pseudo code solution:
      npoints = 10000
      circle_count = 0
      p = number of tasks
      num = npoints/p
      do j = 1,num
        generate 2 random numbers between 0 and 1
        xcoordinate = random1
        ycoordinate = random2
        if (xcoordinate, ycoordinate) inside circle
        then circle_count = circle_count + 1
      end do
if I am MASTER
  receive from each WORKER its circle_count
  compute PI using the combined circle_counts
else if I am WORKER
  send MASTER my circle_count
endif
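      A compact MPI sketch of this strategy, in which MPI_Reduce plays the role of the master
       collecting the circle counts (assumes the number of tasks divides npoints evenly):

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        const int npoints = 10000;
        int rank, p, my_count = 0, total_count = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &p);

        srand(rank + 1);                          /* a different random stream per task */
        for (int j = 0; j < npoints / p; j++) {
            double x = (double)rand() / RAND_MAX;
            double y = (double)rand() / RAND_MAX;
            if (x * x + y * y <= 1.0)
                my_count++;
        }

        /* Sum all tasks' circle counts onto the master (task 0). */
        MPI_Reduce(&my_count, &total_count, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("PI is approximately %f\n", 4.0 * total_count / npoints);

        MPI_Finalize();
        return 0;
    }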
Parallel Examples
Simple Heat Equation
     The heat equation describes the temperature change over time, given an initial temperature
      distribution and boundary conditions. A finite differencing scheme is used to solve it numerically
      on a square region; the calculation of an element depends on its neighbors' values. The serial
      code is of the form:
      do iy = 2, ny - 1
        do ix = 2, nx - 1
          u2(ix, iy) = u1(ix, iy) +
               cx * (u1(ix+1,iy) + u1(ix-1,iy) - 2.*u1(ix,iy)) +
               cy * (u1(ix,iy+1) + u1(ix,iy-1) - 2.*u1(ix,iy))
        end do
      end do
    Questions to ask:
        o Is this problem able to be parallelized?
        o How would the problem be partitioned?
        o Are communications needed?
        o Are there any data dependencies?
        o Are there synchronization needs?
        o Will load balancing be a concern?
     if I am MASTER
       initialize array
       send each WORKER starting info and subarray
       receive results from each WORKER
     else if I am WORKER
       receive from MASTER starting info and subarray
       do t = 1, nsteps
         send neighbors my border info
         receive from neighbors their border info
         update my portion of the solution array
       end do
       send MASTER results
     endif
Example Programs:
   MPI Program in C:
   MPI Program in Fortran:
Parallel Examples
1-D Wave Equation
     In this example, the amplitude along a uniform, vibrating string is calculated after a specified
      amount of time has elapsed. The amplitude at point i for time step t+1 is computed as:
        A(i,t+1) = (2.0 * A(i,t)) - A(i,t-1) + (c * (A(i-1,t) - (2.0 * A(i,t)) + A(i+1,t)))
      where c is a constant.
     Note that amplitude will depend on previous timesteps (t, t-1) and neighboring points (i-1, i+1).
     Questions to ask:
         o Is this problem able to be parallelized?
         o How would the problem be partitioned?
         o Are communications needed?
         o Are there any data dependencies?
         o Are there synchronization needs?
         o Will load balancing be a concern?
if I am MASTER
  initialize array
  send each WORKER starting info and subarray
else if I am WORKER
  receive starting info and subarray from MASTER
endif
do t = 1, nsteps: exchange endpoints with neighbors; update my points along the line
end do
https://computing.llnl.gov/tutorials/parallel_comp/
Last Modified: 06/19/2016 18:31:26 [email protected]
UCRL-MI-133316