Week 10
Basics
[Figure: multiple processes communicating through a communication system (MPI)]
The Message-Passing Model
• A process is (traditionally) a program counter and an address space
• Inter-process communication consists of:
– Synchronization
– Movement of data from one process's address space to another's
Types of Parallel Computing Models
1. Data Parallel
— The same instructions are carried out simultaneously on multiple data items (e.g., SIMD)
2. Task Parallel
— Different instructions on different data (e.g., MIMD)
The Message-Passing Programming Paradigm
• A process is a program performing a task on a processor
[Figure: processes on processors connected by a communication network]
Data and Work Distribution
• To communicate with one another, MPI processes need identifiers: rank = identifying number
[Figure: processes with myrank = 0, 1, 2, …, (size-1), each holding its own data, connected by a communication network]
MPI Fundamentals
• A communicator defines a group of processes
that have the ability to communicate with one
another
• Every process in a communicator has an ID called its "rank"
• The same process might have different ranks in different communicators
• MPI_COMM_WORLD is predefined
int MPI_Finalize(void)
• Must be called at the end of the computation (by the main thread)
• Performs various clean-up tasks to terminate the MPI environment
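For reference, a minimal sketch of an MPI program skeleton (standard MPI API) that initializes MPI, queries the process's rank and the communicator size, and finalizes:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char** argv) {
        int rank, size;
        MPI_Init(&argc, &argv);               /* initialize the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
        printf("process %d of %d\n", rank, size);
        MPI_Finalize();                       /* clean up; call at the end */
        return 0;
    }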
• The host file contains the names of all of the nodes on which your MPI job will execute
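As an illustration (a sketch assuming Open MPI's mpirun; the hostnames node1 and node2 are hypothetical):

    $ cat hostfile
    node1
    node2
    $ mpirun --hostfile hostfile -np 4 ./a.out

This launches 4 MPI processes placed on the listed nodes.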
[Figure: point-to-point communication — a message travels from a source process to a destination process among ranks 0–6]
Point-to-Point Communication
• Communication is done using send and receive among processes:
– To send a message, the sender provides the rank of the destination process and a tag to identify the message
– The receiver can then receive a message with a given tag (or it may not even care about the tag) and handle the data accordingly
[Figure: blocking MPI_Send(sendbuf, 1) — the sending process is blocked while data is copied from sendbuf to the system buffer sysbuf]
• Blocking operation
• When MPI_Send returns, the message is sent and the data buffer can be reused (the message may not have been received by the target process yet)
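For reference, the matching blocking send has this standard prototype (shown in the same style as MPI_Recv below):

    MPI_Send(void* data, int count, MPI_Datatype type, int destination,
             int tag, MPI_Comm comm)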
MPI_Recv
MPI_Recv(void* data, int count, MPI_Datatype type, int source,
         int tag, MPI_Comm comm, MPI_Status* status)
data: pointer to the receive buffer
count: number of elements to be received (upper bound)
type: data type
source: source process of the message
tag: identifying tag
comm: communicator
status: information about the received message (sender, tag, and message size)
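A minimal sketch putting MPI_Send and MPI_Recv together (standard MPI API; rank 0 sends one int to rank 1):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char** argv) {
        int rank, value = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            /* send one int to rank 1 with tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Status status;
            /* receive one int from rank 0 with tag 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }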
The MPI_Status structure
• Information is returned from MPI_Recv in status
• MPI_Recv does not return until the buffer is full (available for use)
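For example, after a receive the fields of status can be read directly (standard MPI API; a sketch assuming buf is an int array of 100 elements):

    MPI_Status status;
    int buf[100], count;
    MPI_Recv(buf, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_INT, &count); /* actual number of elements */
    printf("source=%d tag=%d count=%d\n",
           status.MPI_SOURCE, status.MPI_TAG, count);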
Non-Blocking Send and Receive
[Figure: non-blocking MPI_Isend(sendbuf, 1, req) returns immediately; the data is sent from the system buffer sysbuf to the destination in the background (crossing from user mode to kernel mode), after which sendbuf can be reused]
• Advantages:
– No deadlocks (using MPI_Test for completion check)
– Overlap communication with computation
– Exploit bi-directional communication
Non-Blocking Send and Receive (Cont.)
MPI_WAIT (request, status) Demo:
P2PNonBlock.
MPI_TEST (request, flag, status) c
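A sketch of overlapping communication with computation using MPI_Test (do_some_work() is a hypothetical placeholder; request is the handle returned by MPI_Isend/MPI_Irecv):

    int flag = 0;
    MPI_Status status;
    while (!flag) {
        do_some_work();                     /* hypothetical useful computation */
        MPI_Test(&request, &flag, &status); /* check completion without blocking */
    }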
Non-blocking Message Passing - Example

tag = 1234;
destination = 1;
count = 1;
MPI_Status status;
MPI_Request request = MPI_REQUEST_NULL;

if (rank == 0) { /* master process */
    buffer = 9999;
    MPI_Isend(&buffer, count, MPI_INT, destination, tag,
              MPI_COMM_WORLD, &request);
}
if (rank == destination) { /* receiver process */
    MPI_Irecv(&buffer, count, MPI_INT, 0, tag,
              MPI_COMM_WORLD, &request);
}
MPI_Wait(&request, &status); /* complete the request before using buffer */
if (rank == 0)
    printf("process %d sent %d\n", rank, buffer);
if (rank == destination)
    printf("process %d rcv %d\n", rank, buffer);
MPI_Finalize();
MPI_Probe
• Instead of posting a receive with a very large buffer to handle all possible message sizes, use MPI_Probe to query the size of the incoming message before receiving it

MPI_Probe(source, tag, MPI_COMM_WORLD, &status);
// When probe returns, the status object has the size and other
// attributes of the incoming message. Get the message size
MPI_Get_count(&status, MPI_INT, &number_amount);

Demo: messageProbe.c
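A fuller sketch of the probe-then-receive pattern (assumes rank 1 receives from rank 0 with tag 0; needs <stdlib.h> for malloc):

    MPI_Status status;
    int number_amount;
    MPI_Probe(0, 0, MPI_COMM_WORLD, &status);        /* wait for a message to arrive */
    MPI_Get_count(&status, MPI_INT, &number_amount); /* how many ints are coming */
    int* number_buf = (int*)malloc(sizeof(int) * number_amount);
    MPI_Recv(number_buf, number_amount, MPI_INT, 0, 0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);     /* receive into exact-size buffer */
    free(number_buf);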
Any Questions?