INTRODUCTION TO MPI
A Tutorial with Exercises
By: Rosalinda de Fainchtein, Ph.D.
CSC/NASA GSFC, Code 931
As you follow this tutorial, you will write simple MPI parallel
programs, and learn some of the nuances of MPI.
CONTENTS
What is MPI?
MPI is portable.
Compile as usual
program example1
implicit none
include 'mpif.h'
integer myid, ierr
!--Initialize MPI
call MPI_INIT( ierr )
!--Who am I? --- get my rank=myid
call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
!--Finalize MPI
call MPI_FINALIZE( ierr )
stop
end
Exercise 1
These two parameters, size and rank, can then be used (through block
if statements or otherwise) to differentiate the computations that each
process will execute.
if (my_rank == 0) then
x= ....
y= ....
end if
if (my_rank == size-1) then
z= ...
.....
end if
MPI_COMM_WORLD
Each process can probe for the number of processes in the communicator, and for its own rank within it, by calling the MPI routines
MPI_COMM_SIZE
MPI_COMM_RANK
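In Fortran, these two inquiries look like the following sketch (the variable names numprocs, myid, and ierr are illustrative, chosen to match the rest of this tutorial):

```fortran
      include 'mpif.h'
      integer numprocs, myid, ierr
!--How many processes are in the communicator?
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)
!--What is my rank within the communicator?
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
```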
INTRODUCTION TO MPI -- 9. Required Statements
include 'mpif.h'
call MPI_INIT(ierr)
call MPI_FINALIZE(ierr)
/scr/mpi-class/template.f
!--Initialize MPI
call MPI_INIT( ierr ) ! --> Required statement
!--Finalize MPI
call MPI_FINALIZE(ierr) ! ---> Required statement
stop
end
y=x**2 in process 0
y=x**3 in process 1
y=x**4 in process 2
and writes a statement from each process that identifies the process
and reports the values of x and y from that process.
Solution to Exercise 2
(See the full program at /scr/mpi-class/exercise2-I.f in jsimpson, or
use anonymous ftp to UniTree to retrieve exercise2-I.f)
program template
.
.
!--Define new variables used
integer ip
real x,y
.
.
{MPI_COMM_RANK and MPI_COMM_SIZE CALLS......}
if(myid == 0) then
y=x**2
else if (myid == 1) then
y=x**3
else if (myid == 2) then
y=x**4
end if
.
.
stop
end
y=x**(2+myid)
write(*,*)'On process ',myid,' y=',y
.
.
stop
end
MPI_INIT
MPI_COMM_SIZE
MPI_COMM_RANK
MPI_SEND
MPI_RECV
MPI_FINALIZE
One process posts a send operation, and the target process posts a
receive for the data being transferred.
e.g. (pseudocode)
if (my_rank == 0)
& call MPI_SEND(......,1,..)
if (my_rank == 1)
& call MPI_RECV(......,0,.....)
MPI_SEND(buf,count,datatype,dest,tag,comm,ierror)
buf = initial address of send buffer (choice)
MPI_INTEGER
MPI_REAL
MPI_DOUBLE_PRECISION
MPI_COMPLEX
MPI_LOGICAL
MPI_CHARACTER
MPI_BYTE
MPI_PACKED
Any routine that calls MPI_RECV should declare the status array:
integer status(MPI_STATUS_SIZE)
source
destination
tag
communicator
Exercise 3
Solution to Exercise 3
program template
.
.
integer tag,status(MPI_STATUS_SIZE)
real x,y,buff
.
.
{CALCULATION OF Y ON THE DIFFT. PROCESSES}
.
!--Average the values of y
tag=1
if (myid == 0)then
do ip=1,numprocs-1
call MPI_RECV(buff,1,MPI_REAL,ip,tag,
& MPI_COMM_WORLD,status,ierr)
y=y+buff
end do
y=y/float(numprocs)
write(*,*)'The average value of y is ',y
else
call MPI_SEND(y,1,MPI_REAL,0,tag,
& MPI_COMM_WORLD, ierr)
end if
.
.
stop
end
The full program can be found in jsimpson at:
/scr/mpi-class/exercise3.f, or by anonymous ftp to UniTree.
INTRODUCTION TO MPI -- 22. Output from Exercise 3
WILDCARDS
call MPI_RECV(buf,count,datatype,
& source,MPI_ANY_TAG,comm,status, ierror)
call MPI_RECV(buf,count,datatype,
& MPI_ANY_SOURCE,MPI_ANY_TAG,comm,status, ierror)
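When wildcards are used, the receiver can recover the actual source and tag of the message from the status array. A minimal sketch (buf and the write statement are illustrative; MPI_SOURCE and MPI_TAG are standard index constants from mpif.h):

```fortran
      include 'mpif.h'
      integer status(MPI_STATUS_SIZE), ierr
      real buf
      call MPI_RECV(buf,1,MPI_REAL,
     &     MPI_ANY_SOURCE,MPI_ANY_TAG,MPI_COMM_WORLD,status,ierr)
!--The status array records who actually sent the message and with what tag
      write(*,*)'Received from ',status(MPI_SOURCE),
     &     ' with tag ',status(MPI_TAG)
```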
Blocking vs Non-Blocking Communications
(you can verify that this code will run to completion without a
problem).
Suppose that the order of the send and receive calls is modified
as follows:
!--Exchange messages
if (myid == 0) then
call mpi_recv(b,1,mpi_real,1,tag,MPI_COMM_WORLD,
& status,ierr)
call mpi_send(a,1,mpi_real,1,tag,MPI_COMM_WORLD,ierr)
elseif (myid == 1) then
call mpi_recv(a,1,mpi_real,0,tag,MPI_COMM_WORLD,
& status,ierr)
call mpi_send(b,1,mpi_real,0,tag,MPI_COMM_WORLD,ierr)
end if
The code above is an excerpt from example3_a.f found at
/scr/mpi-class/example3_c.f. It can also be downloaded by
anonymous ftp to UniTree.
Here is why: in the receive-first version above, each process blocks inside
mpi_recv waiting for a message that the other process, itself blocked in
mpi_recv, never gets to send, so the code deadlocks. The send-first version
below usually runs to completion because a standard-mode mpi_send may buffer
the message and return, but the MPI standard does not guarantee that
buffering, so the exchange can also deadlock for sufficiently large messages.
IT IS NOT RECOMMENDED!
!--Exchange messages
if (myid == 0) then
call mpi_send(a,1,mpi_real,1,tag,MPI_COMM_WORLD,ierr)
call mpi_recv(b,1,mpi_real,1,tag,MPI_COMM_WORLD,
& status,ierr)
elseif (myid == 1) then
call mpi_send(b,1,mpi_real,0,tag,MPI_COMM_WORLD,ierr)
call mpi_recv(a,1,mpi_real,0,tag,MPI_COMM_WORLD,
& status,ierr)
end if
MPI_SENDRECV(sendbuf,sendcount,sendtype,
& dest,sendtag,recvbuf,recvcount,recvtype,
& source,recvtag,comm,status,ierror)
Example 4: Using MPI_SENDRECV I
Example 4 shows the sendrecv MPI call replacing a pair of
consecutive send and receive calls originating from a single
process. Note that the communications here are the same as in
example3_b, except that the sendrecv ensures that no deadlock
occurs.
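The replacement it describes might look like the following sketch (the variables a, b, myid, tag, and status follow the conventions of the earlier excerpts; this is not the verbatim contents of the example4 file):

```fortran
!--Exchange messages: each process sends its variable and receives
!--the other's in a single call, so no deadlock is possible
      if (myid == 0) then
         call mpi_sendrecv(a,1,mpi_real,1,tag,
     &        b,1,mpi_real,1,tag,
     &        MPI_COMM_WORLD,status,ierr)
      elseif (myid == 1) then
         call mpi_sendrecv(b,1,mpi_real,0,tag,
     &        a,1,mpi_real,0,tag,
     &        MPI_COMM_WORLD,status,ierr)
      end if
```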
Example 5: Using MPI_SENDRECV II
Example 5
Process 0 sends a to process 1 and receives b from process 2:
tag1=1
tag2=2
if (myid == 0) then
call mpi_sendrecv(a,1,mpi_real,1,tag1,
& b,1,mpi_real,2,tag2,
& MPI_COMM_WORLD, status,
& ierr)
elseif (myid==1) then
call mpi_recv(a,1,mpi_real,0,tag1,
& MPI_COMM_WORLD, status,
& ierr)
elseif (myid==2) then
call mpi_send(b,1,mpi_real,0,tag2,
& MPI_COMM_WORLD,
& ierr)
end if
(See /scr/mpi-class/example5.f in jsimpson, or
use anonymous ftp to UniTree to retrieve example5.f)
INTRODUCTION TO MPI -- 32. NON-BLOCKING COMMUNICATIONS
MPI_ISEND(buf,count,datatype,dest,tag,comm,
request,ierror)
MPI_IRECV(buf,count,datatype,source,tag,comm,
request,ierror)
BLOCKING vs NON-BLOCKING
What is the difference between MPI_ISEND
and MPI_SEND?
MPI_ISEND returns control to the calling routine
immediately after posting the send call, before it is safe to
overwrite (or use) the buffer being sent.
BLOCKING vs NON-BLOCKING -- Rephrasing:
NON-BLOCKING: An Example (Pseudocode)
2. call MPI_ISEND(a,.........,REQUEST1,...)
3. Continue with computation that does not use a:
b=x**2
c=y**3
d=b+c
4. Block computation until it is safe to use a again.
call MPI_WAIT(REQUEST1,status)
e=a+b
a=d
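Written out as compilable Fortran, the overlap pattern above might look like this sketch (the variables and the destination rank and tag are illustrative, not taken from a class example file):

```fortran
      include 'mpif.h'
      integer request1, status(MPI_STATUS_SIZE), ierr
      real a, b, c, d, e, x, y
!--Post the send; control returns immediately
      call MPI_ISEND(a, 1, MPI_REAL, 1, 1, MPI_COMM_WORLD,
     &     request1, ierr)
!--Computation that does not touch a proceeds meanwhile
      b = x**2
      c = y**3
      d = b + c
!--Block until it is safe to use a again
      call MPI_WAIT(request1, status, ierr)
      e = a + b
      a = d
```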
COLLECTIVE MPI ROUTINES
Barrier synchronization
Global communications
Broadcast
Gather
Scatter
BROADCAST ROUTINE
The broadcast MPI routine is one of the most commonly used collective routines.
The root process broadcasts the data in buffer to all the processes in the
communicator.
All processes must call MPI_BCAST with the same root value.
MPI_BCAST(buffer,count,datatype, root,comm,ierror)
buffer = initial address of buffer (choice)
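For instance, process 0 can read an input value and broadcast it to every process in the communicator. A minimal sketch (the variable x and the choice of root 0 are illustrative):

```fortran
      include 'mpif.h'
      integer myid, ierr
      real x
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
!--Only the root reads the value...
      if (myid == 0) read(*,*) x
!--...but every process, including the root, makes the same call
      call MPI_BCAST(x, 1, MPI_REAL, 0, MPI_COMM_WORLD, ierr)
```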
Solution to Exercise 5
program template
.
.
!--Define new variables used
integer ip
real x,y
.
.
{MPI_COMM_RANK and MPI_COMM_SIZE CALLS......}
REFERENCES
To use as a general MPI reference:
M. Snir et al., MPI: The Complete Reference, second
edition, MIT Press, 1998.
Author: NCCS Technical Assistance Group (TAG)
Authorizing Technical Official: W. Phillip Webster, Code 931, GSFC/NASA
Authorizing NASA Official: Nancy Palm, Branch Head, Code 931, GSFC/NASA
Last Updated: 03/07/01
Reason for Change: New