
University of Al Mosul

College of Engineering - Computer Department


2024-2025

Introduction to the
Message Passing
Interface (MPI)
What is MPI?
• MPI stands for Message Passing Interface and is a library specification
for message-passing, proposed as a standard by a broadly based
committee of vendors, implementors, and users.
• MPI consists of
1- a header file mpi.h,
2- a library of routines and functions, and
3- a runtime system.
• MPI is for parallel computers, clusters, and heterogeneous networks.
• MPI is full-featured.
• MPI is designed to provide access to advanced parallel hardware for
end users
library writers
tool developers
• MPI can be used with C/C++, Fortran, and many other languages.
MPI is an API
MPI is actually just an Application Programming Interface (API). As such, MPI
• specifies what a call to each routine should look like and how each
routine should behave, but
• does not specify how each routine should be implemented, and
• is sometimes intentionally vague about certain aspects of a routine's
behavior;
• implementations are often platform or vendor specific, and
• multiple open-source and proprietary implementations exist.
Example MPI routines
• The following routines are found in nearly every program that uses
MPI:
• MPI_Init() starts the MPI runtime environment.
• MPI_Finalize() shuts down the MPI runtime environment.
• MPI_Comm_size() gets the number of processes, Np.
• MPI_Comm_rank() gets the process ID (rank) of the current process,
which is between 0 and Np - 1, inclusive.
• (These last two routines are typically called right after MPI_Init().)
More example MPI routines
Some of the simplest and most common communication routines are:
• MPI_Send() sends a message from the current process to another
process (the destination).
• MPI_Recv() receives a message on the current process from another
process (the source).
• MPI_Bcast() broadcasts a message from one process to all of the
others.
• MPI_Reduce() performs a reduction (e.g. a global sum, maximum, etc.)
of a variable in all processes, with the result ending up in a single
process.
• MPI_Allreduce() performs a reduction of a variable in all processes,
with the result ending up in all processes.
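To make the collective routines above concrete, here is a minimal sketch
(the variable names and the placeholder per-process "work" are assumptions,
not part of the original slides) that broadcasts a parameter from rank 0
and then sums a per-process value back onto rank 0:

#include <stdio.h>
#include <mpi.h>

int main( int argc, char *argv[] )
{
    int rank, number_of_processes;
    int steps = 0;                 /* parameter chosen by rank 0 */
    double local_value, total;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &number_of_processes );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if ( rank == 0 )
        steps = 1000;              /* e.g. read from input on rank 0 */

    /* MPI_Bcast(): every process receives rank 0's value of steps */
    MPI_Bcast( &steps, 1, MPI_INT, 0, MPI_COMM_WORLD );

    local_value = (double) rank * steps;   /* placeholder per-process work */

    /* MPI_Reduce(): global sum of local_value, result on rank 0 only */
    MPI_Reduce( &local_value, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                MPI_COMM_WORLD );

    if ( rank == 0 )
        printf( "total = %f\n", total );

    MPI_Finalize();
    return 0;
}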
MPI Hello world: hello.c
#include <stdio.h>
#include <mpi.h>

int main( int argc, char *argv[] )
{
    int rank;
    int number_of_processes;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &number_of_processes );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    printf( "hello from process %d of %d\n", rank, number_of_processes );

    MPI_Finalize();
    return 0;
}
MPI Hello world output
Running the program on 8 processes produces output such as:
hello from process 3 of 8
hello from process 0 of 8
hello from process 1 of 8
hello from process 7 of 8
hello from process 2 of 8
hello from process 5 of 8
hello from process 6 of 8
hello from process 4 of 8
Note:
• All MPI processes (normally) run the same executable
• Each MPI process knows which rank it is
• Each MPI process knows how many processes are part of the same job
• The processes run in a non-deterministic order
Communicators
• Recall the MPI initialization sequence:
MPI_Init( &argc, &argv );
MPI_Comm_size( MPI_COMM_WORLD, &number_of_processes );
MPI_Comm_rank( MPI_COMM_WORLD, &rank );
• MPI uses communicators to organize how processes communicate with
each other.
• A single communicator, MPI_COMM_WORLD, is created by MPI_Init(), and
all the processes running the program have access to it.
• Note that process ranks are relative to a communicator. A program may
have multiple communicators; if so, a process may have multiple ranks,
one for each communicator it is associated with (see the sketch below).
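As an illustration of a process holding a rank in more than one
communicator, a short sketch using MPI_Comm_split() (assuming MPI_Init()
has already been called; the even/odd grouping is just an illustrative
choice, not from the slides):

int world_rank, sub_rank;
MPI_Comm sub_comm;

MPI_Comm_rank( MPI_COMM_WORLD, &world_rank );

/* processes that pass the same "color" end up in the same new
   communicator; here even and odd world ranks form two groups */
MPI_Comm_split( MPI_COMM_WORLD, world_rank % 2, world_rank, &sub_comm );

/* the same process now also has a (possibly different) rank
   within the new communicator */
MPI_Comm_rank( sub_comm, &sub_rank );

MPI_Comm_free( &sub_comm );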
MPI is (usually) SPMD
• Usually MPI is run in SPMD (Single Program, Multiple Data) mode.
• (It is possible to run multiple programs, i.e. MPMD).
• The program can use its rank to determine its role:

const int SERVER_RANK = 0;

if ( rank == SERVER_RANK )
{
/* do server stuff */
}
else
{
/* do compute node stuff */
}
• As shown here, often the rank 0 process plays the role of server or
process coordinator.
A second MPI program: greeting.c
The next several slides show the source code for an MPI program that
follows a client-server model.
• When the program starts, it initializes the MPI system then
determines if it is the server process (rank 0) or a client process.
• Each client process will construct a string message and send it to the
server.
• The server will receive and display messages from the clients
one-by-one.
greeting.c: main
#include <stdio.h>
#include <mpi.h>

const int SERVER_RANK = 0;
const int MESSAGE_TAG = 0;

/* prototypes for the routines defined on the following slides */
void do_server_work( int number_of_processes );
void do_client_work( int rank );

int main( int argc, char *argv[] )
{
    int rank, number_of_processes;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &number_of_processes );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if ( rank == SERVER_RANK )
        do_server_work( number_of_processes );
    else
        do_client_work( rank );

    MPI_Finalize();
    return 0;
}
greeting.c: server
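The original slide shows the server routine as an image, which is not
reproduced here; a minimal sketch consistent with the description above
(the 100-character buffer size is an assumption) is:

void do_server_work( int number_of_processes )
{
    char message[100];
    MPI_Status status;
    int source;

    /* receive and display one greeting from every client, in rank order */
    for ( source = 1; source < number_of_processes; source++ )
    {
        MPI_Recv( message, 100, MPI_CHAR, source,
                  MESSAGE_TAG, MPI_COMM_WORLD, &status );
        printf( "%s\n", message );
    }
}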
greeting.c: client
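Likewise, a minimal sketch of the client routine (the exact greeting text
and buffer size are assumptions):

void do_client_work( int rank )
{
    char message[100];
    int length;

    /* sprintf() returns the number of characters written, not counting
       the terminating '\0', so length + 1 sends the '\0' as well */
    length = sprintf( message, "greetings from process %d", rank );

    MPI_Send( message, length + 1, MPI_CHAR,
              SERVER_RANK, MESSAGE_TAG, MPI_COMM_WORLD );
}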
MPI_Send()
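The original slide content here is an image; for reference, the standard
C binding of MPI_Send() (the send buffer is declared const as of MPI-3) is:

int MPI_Send( const void *buf,       /* address of the data to send        */
              int count,             /* number of elements in buf          */
              MPI_Datatype datatype, /* e.g. MPI_INT, MPI_DOUBLE, MPI_CHAR */
              int dest,              /* rank of the destination process    */
              int tag,               /* user-chosen message label          */
              MPI_Comm comm );       /* communicator, e.g. MPI_COMM_WORLD  */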
MPI_Recv()
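Similarly, the standard C binding of MPI_Recv() is:

int MPI_Recv( void *buf,             /* buffer for the incoming data         */
              int count,             /* maximum elements buf can hold        */
              MPI_Datatype datatype, /* type of the elements expected        */
              int source,            /* sender's rank, or MPI_ANY_SOURCE     */
              int tag,               /* expected tag, or MPI_ANY_TAG         */
              MPI_Comm comm,         /* communicator, e.g. MPI_COMM_WORLD    */
              MPI_Status *status );  /* filled with actual source, tag, etc. */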
Message Passing Interface (MPI)
Challenges
• One of the main challenges of using MPI is the need for careful programming to ensure that the
message-passing operations are properly synchronized and coordinated between the different
processes; a common pitfall is sketched below.

• Another challenge is the efficient use of network resources, as the performance of MPI applications can be
heavily dependent on the speed and reliability of the underlying network infrastructure. This
requires careful tuning of the MPI configuration and optimization of the application code to
minimize communication overhead and maximize parallelism.

• Debugging and testing MPI applications can also be challenging, as errors in message passing can be
difficult to diagnose and reproduce.
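As an illustration of the synchronization pitfall mentioned in the first
point, here is a sketch of a pairwise exchange (the function name
exchange_with_partner and the assumption of exactly two ranks are
hypothetical, not from the slides) that avoids deadlock by ordering the
calls differently on the two ranks:

void exchange_with_partner( int rank )
{
    int other = ( rank == 0 ) ? 1 : 0;
    int send_val = rank, recv_val;
    MPI_Status status;

    /* If both ranks called MPI_Recv() first, each would block waiting
       for a message the other has not yet sent: a deadlock.  Ordering
       the calls differently on the two ranks avoids this. */
    if ( rank == 0 )
    {
        MPI_Send( &send_val, 1, MPI_INT, other, 0, MPI_COMM_WORLD );
        MPI_Recv( &recv_val, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &status );
    }
    else
    {
        MPI_Recv( &recv_val, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &status );
        MPI_Send( &send_val, 1, MPI_INT, other, 0, MPI_COMM_WORLD );
    }
}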
Message Passing Interface (MPI)
Performance
• MPI is designed to be efficient and scalable, meaning it can be used to
achieve high performance on a wide variety of systems. It is also
designed to be fault tolerant, so it can be used to create reliable
distributed-memory parallel programs.

• MPI is also designed to be flexible, meaning it can be used to create
many different kinds of distributed-memory parallel programs, which
makes it a good choice for developing such software.
MPI vs OpenMP

• MPI (Message Passing Interface) is a library specification for
message passing between processes. It provides a parallel computing
environment that allows programs to be written independently of the
underlying hardware.

• OpenMP (Open Multi-Processing) is an API that supports multi-platform
shared-memory multiprocessing programming in C, C++, and Fortran on
most architectures, including Unix and Windows platforms (a minimal
example is sketched below).
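For comparison with the MPI hello program earlier, here is a minimal
OpenMP counterpart in C (assuming a compiler with OpenMP support,
enabled with an -fopenmp style flag):

#include <stdio.h>
#include <omp.h>

int main( void )
{
    /* the threads created here all share the process's memory */
    #pragma omp parallel
    {
        printf( "hello from thread %d of %d\n",
                omp_get_thread_num(), omp_get_num_threads() );
    }
    return 0;
}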
Limitations of MPI and OpenMP

• MPI is a complex library, and it can be difficult to learn and use. It is also limited by the underlying
hardware, and it can be difficult to scale to large clusters of computers.

• MPI is also limited to message passing between processes, so it is not well-suited for applications
that require frequent communication between threads on the same node.

• OpenMP is limited to shared-memory architectures, so it is not well-suited for distributed-memory
systems. It is also limited to C, C++, and Fortran, so it cannot be used to write programs in other
languages.

• OpenMP is also limited to programs that run on a single computer with multiple processors or
cores, so it cannot be used to write programs that run on multiple computers.


Advantages of MPI

• MPI offers a high degree of portability, scalability, and fault tolerance. It is well-suited for
distributed memory systems and can be used to write programs that run on multiple computers.

• MPI is designed to be efficient and reliable, and it offers a wide range of communication
mechanisms, including point-to-point, collective, and one-sided communication.

Conclusion

MPI and OpenMP are both powerful tools for parallel computing, but
they have different strengths and weaknesses. MPI is well-suited for
distributed memory systems, while OpenMP is better for shared
memory architectures.

MPI can be used to write programs that run across multiple computers,
while OpenMP is limited to a single shared-memory machine. OpenMP is
also limited to C, C++, and Fortran, while MPI can be used with many
other languages as well, making it more portable.
References
Some material used in creating these slides comes from
• MPI Programming Model: Desert Islands Analogy by Henry Neeman,
University of Oklahoma Supercomputing Center.
• An Introduction to MPI by William Gropp and Ewing Lusk, Argonne
National Laboratory.
