
Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.

Parallel Computing Using MPI



What Is MPI?

Message Passing Interface: A specification for message passing libraries, designed to be a standard for distributed memory, message passing, parallel computing.

The goal of the Message Passing Interface, simply stated, is to provide a widely used standard for writing message-passing programs. The interface attempts to establish a practical, portable, efficient, and flexible standard for message passing.

MPI resulted from the efforts of numerous individuals and groups over the course of 2 years. History:
1980s - early 1990s: Distributed memory, parallel
computing develops, as do a number of incompatible
software tools for writing such programs - usually
with tradeoffs between portability, performance,
functionality and price. Recognition of the need for a
standard arose.
April, 1992: Workshop on Standards for Message
Passing in a Distributed Memory Environment,
sponsored by the Center for Research on Parallel
Computing, Williamsburg, Virginia. The basic features
essential to a standard message passing interface were
discussed, and a working group established to continue
the standardization process. Preliminary draft proposal
developed subsequently.

November 1993: Supercomputing '93 conference - draft MPI standard presented.

Final version of draft released in May 1994 - available on the WWW at:
http://www.mcs.anl.gov/Projects/mpi/standard.html

Why MPI
Able to be used with C and Fortran programs. C++ and Fortran 90 language bindings are being addressed by MPI-2.
Reasons for using MPI:

Standardization - MPI is the only message passing library which can be considered a standard. It is supported on virtually all HPC platforms.

Portability - there is no need to modify your source code when you port your application to a different platform which supports MPI.

Performance - vendor implementations should be able to exploit native hardware features to optimize performance.

Functionality - over 115 routines.

Availability - a variety of implementations are available, both vendor and public domain.

The Message Passing Paradigm



Every processor has its own local memory, which can be accessed directly only by its own CPU. Transfer of data from one processor to another is performed over a network. This differs from shared memory systems, which permit multiple processors to directly access the same memory resource via a memory bus.

Message Passing
The method by which data from one processor's memory is
copied to the memory of another processor. In distributed
memory systems, data is generally sent as packets of
information over a network from one processor to another.
A message may consist of one or more packets, and usually
includes routing and/or other control information.

Process
A process is a set of executable instructions (program)
which runs on a processor. One or more processes may
execute on a processor. In a message passing system, all
processes communicate with each other by sending
messages - even if they are running on the same
processor. For reasons of efficiency, however, message
passing systems generally associate only one process per
processor.

Message Passing Library
Usually refers to a collection of routines which are embedded in application code to accomplish send, receive and other message passing operations.

Send / Receive
Message passing involves the transfer of data from one
process (send) to another process (receive).
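
As a hedged illustration (not part of the original slides), the sketch below shows this send/receive pairing in C using the standard MPI_Send and MPI_Recv routines; the message value, tag, and ranks are arbitrary choices for the example.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, value = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* process 0 sends one integer to process 1, message tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* process 1 receives the integer sent by process 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Process 1 received value %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

The program needs at least two processes; depending on the MPI implementation it is typically launched with something like mpirun -np 2 ./a.out.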

Synchronous / Asynchronous
A synchronous send operation will complete only after
acknowledgement that the message was safely received
by the receiving process. Asynchronous send operations
may "complete" even though the receiving process has
not actually received the message.

Application Buffer
The address space that holds the data which is to be sent
or received. For example, your program uses a variable
called "inmsg". The application buffer for inmsg is the
program memory location where the value of inmsg
resides.

System Buffer
System space for storing messages.

Blocking Communication
A communication routine is blocking if the completion of the
call is dependent on certain "events". For sends, the data must
be successfully sent or safely copied to system buffer space
so that the application buffer that contained the data is
available for reuse. For receives, the data must be safely
stored in the receive buffer so that it is ready for use.

Non-blocking Communication
A communication routine is non-blocking if the call returns
without waiting for any communications events to complete
(such as copying of message from user memory to system
memory or arrival of message).

It is not safe to modify or use the application buffer after initiating a non-blocking send until the operation is known to have actually completed. It is the programmer's responsibility to ensure that the application buffer is free for reuse.
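
To make this rule concrete, here is a small hedged sketch (not from the original slides) using the standard non-blocking MPI_Isend together with MPI_Wait; the buffer is only modified again after the wait confirms the send has completed.

#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, buf = 0;
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        buf = 7;
        /* non-blocking send: the call returns immediately */
        MPI_Isend(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);

        /* unrelated computation may proceed here, but "buf" must not be
           modified yet because the send may still be using it */

        /* wait until the send has completed; only now is buf safe to reuse */
        MPI_Wait(&request, &status);
        buf = 0;
    } else if (rank == 1) {
        MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    }

    MPI_Finalize();
    return 0;
}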

Communicators and Groups

MPI uses objects called communicators and groups to define which collection of processes may communicate with each other.

For now, simply use MPI_COMM_WORLD whenever a communicator is required - it is the predefined communicator which includes all of your MPI processes.

Rank
Within a communicator, every process has its own unique,
integer identifier assigned by the system when the process
initializes. A rank is sometimes also called a "process ID".
Ranks are contiguous and begin at zero.

Used by the programmer to specify the source and destination of messages. Often used conditionally by the application to control program execution (if rank=0 do this / if rank=1 do that).
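
As an illustrative sketch (not part of the original slides), the C fragment below shows how the rank obtained from the standard MPI_Comm_rank call is commonly used to branch program behaviour; MPI_Comm_size, used here to report the total number of processes, is an addition beyond what the slide mentions.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes       */

    if (rank == 0) {
        printf("I am rank 0; %d processes in total\n", size);
    } else {
        printf("I am a worker with rank %d\n", rank);
    }

    MPI_Finalize();
    return 0;
}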

Getting Started

Header File
Required for all programs/routines which make MPI library calls.

C include file:       #include "mpi.h"
Fortran include file: include 'mpif.h'

Format of MPI Calls


C names are case sensitive; Fortran names are not.

C Binding

Format:     rc = MPI_Xxxxx(parameter, ... )

Example:    rc = MPI_Bsend(&buf,count,type,dest,tag,comm)

Error code: Returned as "rc". MPI_SUCCESS if successful.
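
As a hedged example of using the returned error code (not from the original slides), the fragment below tests the value returned by MPI_Init against MPI_SUCCESS and aborts on failure.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rc;

    /* every C binding returns an int error code, captured here in "rc" */
    rc = MPI_Init(&argc, &argv);
    if (rc != MPI_SUCCESS) {
        fprintf(stderr, "Error starting MPI program, rc = %d\n", rc);
        MPI_Abort(MPI_COMM_WORLD, rc);
    }

    /* ... normal MPI work goes here ... */

    MPI_Finalize();
    return 0;
}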
