Parallel Computing Using MPI
What Is MPI?
Why MPI?
MPI can be used with C and Fortran programs. C++ and
Fortran 90 language bindings are addressed by MPI-2.
Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
Reasons for using MPI:
Standardization - MPI is the only message passing
library that can be considered a standard. It is
supported on virtually all HPC platforms.
Message Passing
The method by which data from one processor's memory is
copied to the memory of another processor. In distributed
memory systems, data is generally sent as packets of
information over a network from one processor to another.
A message may consist of one or more packets, and usually
includes routing and/or other control information.
Process
A process is a set of executable instructions (program)
which runs on a processor. One or more processes may
execute on a processor. In a message passing system, all
processes communicate with each other by sending
messages - even if they are running on the same
processor. For reasons of efficiency, however, message
passing systems generally associate only one process per
processor.
Send / Receive
Message passing involves the transfer of data from one
process (send) to another process (receive).
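The send/receive pairing above can be sketched as a minimal two-process program (a sketch, assuming a working MPI installation; compiled with mpicc and launched with two processes, e.g. `mpirun -np 2 ./a.out`):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Process 0 sends one int to process 1, with message tag 0. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Process 1 receives the int sent by process 0. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("process 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```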
Synchronous / Asynchronous
A synchronous send operation will complete only after
acknowledgement that the message was safely received
by the receiving process. Asynchronous send operations
may "complete" even though the receiving process has
not actually received the message.
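The contrast can be sketched with two MPI calls (a fragment, not a complete program; it assumes buf, count, dest, and tag are already set up inside an initialized MPI program):

```c
/* Synchronous send: does not complete until the matching receive has
 * started, so returning implies the message was safely received (or is
 * being received). */
MPI_Ssend(buf, count, MPI_INT, dest, tag, MPI_COMM_WORLD);

/* Asynchronous (non-blocking) send: returns immediately; the message may
 * not have been received -- or even left this process -- yet. */
MPI_Request req;
MPI_Isend(buf, count, MPI_INT, dest, tag, MPI_COMM_WORLD, &req);
MPI_Wait(&req, MPI_STATUS_IGNORE);  /* ensure completion before reusing buf */
```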
Application Buffer
The address space that holds the data which is to be sent
or received. For example, your program uses a variable
called "inmsg". The application buffer for inmsg is the
program memory location where the value of inmsg
resides.
System Buffer
System space for storing messages. Unlike the application
buffer, the system buffer is managed by the MPI library and
is not directly visible to the programmer.
Blocking Communication
A communication routine is blocking if the completion of the
call is dependent on certain "events". For sends, the data must
be successfully sent or safely copied to system buffer space
so that the application buffer that contained the data is
available for reuse. For receives, the data must be safely
stored in the receive buffer so that it is ready for use.
Non-blocking Communication
A communication routine is non-blocking if the call returns
without waiting for any communications events to complete
(such as copying of message from user memory to system
memory or arrival of message).
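The blocking/non-blocking distinction can be illustrated with a non-blocking exchange (a sketch, assuming a working MPI installation; run with exactly two processes, e.g. `mpirun -np 2 ./a.out`):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, peer, sendval, recvval;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;          /* ranks 0 and 1 exchange with each other */
    sendval = rank * 100;

    /* Both calls return immediately, before the communication completes. */
    MPI_Isend(&sendval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&recvval, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... useful computation could overlap with communication here ... */

    /* Only after the wait completes is it safe to reuse sendval
     * or to read recvval. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("rank %d received %d\n", rank, recvval);

    MPI_Finalize();
    return 0;
}
```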
Rank
Within a communicator, every process has its own unique
integer identifier, assigned by the system when the process
initializes. A rank is sometimes also called a "process ID".
Ranks are contiguous and begin at zero.
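A process queries its own rank with MPI_Comm_rank (a minimal sketch, assuming a working MPI installation; run with any number of processes, e.g. `mpirun -np 4 ./a.out`):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank: 0..size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```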
Getting Started
Header File
Required for all programs/routines which make MPI
library calls:
#include "mpi.h"
C Binding
Format: rc = MPI_Xxx(parameter, ...)
Example: rc = MPI_Bsend(&buf, count, type, dest, tag, comm)
The returned error code rc equals MPI_SUCCESS if the call
succeeded.
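A complete program built around the MPI_Bsend binding might look as follows (a sketch, assuming a working MPI installation; buffered sends require explicitly attached buffer space, and the program is run with two processes, e.g. `mpirun -np 2 ./a.out`):

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, rc, msg = 7;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Buffered sends draw on user-attached buffer space. */
        int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);

        rc = MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        if (rc != MPI_SUCCESS)
            fprintf(stderr, "MPI_Bsend failed\n");

        /* Detach blocks until all buffered messages have been delivered. */
        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("process 1 received %d\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```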