
Potential Approaches to Parallel Computation of Rayleigh Integrals in Measuring Acoustic Pressure and Intensity
• Problem Definition

The acoustic pressure (Pa) and intensity (W/cm²) in an acoustic field can be measured at any instant in time and space by applying the Rayleigh diffraction integral depicted below:


\int_{S} \frac{v\left(t - \frac{R}{c}\right)}{R}\, dS
Setting details aside, the integral calculation can be expressed as a single function that takes certain parameters.

Since multiple calculations of the integral are required and are independent of each other, parallelizing these calculations can prove quite beneficial in terms of speedup. The same function is repeatedly applied to various data points, so the most suitable form of parallelization is single program, multiple data (SPMD). The lack of dependencies in the algorithm, moreover, makes the task of parallelization especially straightforward.
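As a rough sketch only (the function name, parameters, and discretization below are assumptions, not taken from the source), the integral might be approximated as a sum over discrete surface elements and wrapped in a single function that is applied independently at each field point:

/* Hypothetical sketch: one call evaluates the discretized integral at a
   single field point (x, y, z) and time t. */
#include <math.h>

double rayleigh_pressure(double t, double x, double y, double z,
                         const double *src,           /* N x 3 source-element coordinates      */
                         double (*vel)(int, double),  /* velocity of element n at time tau     */
                         double dS, int N, double c)  /* element area, element count, sound speed */
{
    double p = 0.0;
    for (int n = 0; n < N; n++) {
        double dx = x - src[3*n + 0];
        double dy = y - src[3*n + 1];
        double dz = z - src[3*n + 2];
        double R  = sqrt(dx*dx + dy*dy + dz*dz);
        /* each surface element contributes v(t - R/c) / R * dS */
        p += vel(n, t - R / c) / R * dS;
    }
    return p;
}

Because each call depends only on its own field point, the calls are independent and can be distributed freely across threads or machines.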
• Open Multi-Processing (OpenMP)

Utilizing multiple threads is the simplest solution to the problem of concurrently computing the Rayleigh integral with various parameters. It is also a low-level approach to parallelization, in which the algorithm's source code is modified directly to perform the task in parallel. OpenMP implements its own specification for threads, or what have been termed lightweight processes (LWPs), by providing compiler directives and library routines as compile-time and runtime tools for parallelizing certain regions of code. There are also constructs and clauses for marking critical regions and restricting access to variables that could otherwise cause race conditions, which OpenMP does not prevent automatically.

LWPs are spawned in parallel regions by the main controlling thread and assigned a certain workload, dependent on the work-sharing construct used. In the example code below, on a 4-core CPU, 4 threads would be spawned and each thread would be assigned 2,500 iterations:

#pragma omp parallel for private(i)
for (i = 0; i < 10000; i++) {
    a[i] = i * 1000;
}

Note that the loop variable i is automatically made private to each thread; the private clause is included only to make this explicit. Currently, OpenMP interfaces exist for C, C++ and Fortran.
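Applied to the Rayleigh computation, the same work-sharing construct might look as follows (a sketch only; rayleigh_pressure(), the vel() helper, and the array names are assumptions carried over from the earlier sketch, not part of the source):

/* Hypothetical sketch: one field point per loop iteration, iterations
   shared among the spawned threads. */
#pragma omp parallel for
for (int i = 0; i < num_points; i++) {
    pressure[i] = rayleigh_pressure(t, px[i], py[i], pz[i],
                                    src, vel, dS, N, c);
}

Since each iteration writes only its own element of pressure[], no critical region or locking is needed.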

o Benefits and drawbacks [1]

The primary benefit of OpenMP is that the sequential algorithm requires only minor modifications to exploit any parallel regions. This reduces the probability of errors being introduced during parallelization. The simplicity of OpenMP's syntactic conventions is another advantage.

On the other hand, a particular drawback of OpenMP stems from the characteristics of shared-memory architectures. Since all CPUs compete for the same memory bandwidth, performance can degrade as the number of CPUs increases beyond a certain limit. OpenMP is also suited largely to symmetric multiprocessor (SMP) environments, the most common being the multi-core CPUs available today. It does not include support for distributed computing and therefore cannot exploit more than one physical machine.

[1] "Reap the Benefits of Multithreading without All the Work", MSDN Magazine, October 2005. http://msdn.microsoft.com/en-ca/magazine/cc163717.aspx
• Open Message Passing Interface (OpenMPI)

Message passing permits the exchange of information between multiple physical machines. Given the numerous protocols and details involved in exchanging information between computers, a general approach that hides these aspects from the developer is preferable. OpenMPI is an implementation of the MPI-1 and MPI-2 specifications that abstracts the particulars of the underlying architectures by providing functions such as MPI_SEND() and MPI_RECV() to send and receive information. This allows developers to focus on the task at hand, rather than concentrating on the protocols (UDP, TCP, Ethernet, etc.) or the architecture of the machine (byte order, instruction set, etc.).
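As a hedged sketch (the distribution scheme, array sizes, and the rayleigh_pressure() and vel() helpers are assumptions carried over from the earlier sketch, not part of the source), an SPMD distribution of the field points over MPI processes might look like this; a collective reduce is used instead of explicit MPI_SEND/MPI_RECV calls simply to keep the sketch short:

/* Hypothetical sketch: each MPI process computes every size-th field
   point, and the partial results are summed onto rank 0. */
#include <mpi.h>
#include <math.h>

#define NUM_POINTS 10000
#define N_SRC      1024

extern double rayleigh_pressure(double t, double x, double y, double z,
                                const double *src, double (*vel)(int, double),
                                double dS, int N, double c);

/* Placeholder surface velocity: a 1 MHz tone (assumption for illustration). */
static double vel(int n, double tau) { (void)n; return sin(6.2831853e6 * tau); }

int main(int argc, char **argv)
{
    static double px[NUM_POINTS], py[NUM_POINTS], pz[NUM_POINTS];
    static double src[3 * N_SRC];
    static double local[NUM_POINTS], pressure[NUM_POINTS];
    double t = 0.0, dS = 1.0e-6, c = 1500.0;   /* c: speed of sound in water, m/s */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* ... px, py, pz and src would be read from file or broadcast here ... */

    /* Cyclic distribution: this process handles points rank, rank+size, ... */
    for (int i = rank; i < NUM_POINTS; i += size)
        local[i] = rayleigh_pressure(t, px[i], py[i], pz[i], src, vel, dS, N_SRC, c);

    /* Untouched entries of local[] stay zero, so a sum gathers the results. */
    MPI_Reduce(local, pressure, NUM_POINTS, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}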

o Benefits and drawbacks [2]

OpenMPI is well supported by large corporations such as Cisco, IBM, and Oracle and is used in many of the machines on the Top 500 list of the world's supercomputers, while remaining open source. Its major advantage is that it is designed with high performance in mind, exploiting the maximum available bandwidth with the lowest possible latency. It also has robust support for numerous connectivity options, whether the network is built on Ethernet, InfiniBand, or serial links. Furthermore, OpenMPI is highly scalable.

There are, however, interoperability issues with other MPI implementations. In some cases, connecting to machines that are running a different MPI implementation is difficult or impossible. Parallelizing tasks with OpenMPI is also not as simple as with OpenMP and threads, which is natural, since multiple machines are involved rather than one. Another drawback is the lack of support for fault tolerance in the network.

• Parallel Virtual Machine (PVM)

PVM and OpenMPI are similar in that messages are exchanged between machines. However, the communication support in PVM is not as extensive as OpenMPI's, nor does it deliver the same level of performance; although the performance is still comparable, OpenMPI remains faster than PVM. PVM excels, though, at unifying the variety of architectures it runs on into one parallel virtual machine. In this sense, the composite nature of the underlying machines becomes transparent and irrelevant to the development of distributed software. Each physical machine runs a daemon process called pvmd, which allows that machine to join the virtual machine. Software written for PVM accesses its features through an API provided for C, C++, and Fortran by libpvm.
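As a hedged sketch (the message tags, the master/worker split, and the helpers carried over from the earlier sketches are assumptions, not part of the source), a PVM worker built on libpvm might receive a field point from the master, compute the pressure, and send the result back:

/* Hypothetical sketch of a PVM worker task. */
#include <pvm3.h>
#include <math.h>

#define TAG_WORK   1
#define TAG_RESULT 2
#define N_SRC      1024

extern double rayleigh_pressure(double t, double x, double y, double z,
                                const double *src, double (*vel)(int, double),
                                double dS, int N, double c);

/* Placeholder surface velocity (assumption for illustration). */
static double vel(int n, double tau) { (void)n; return sin(6.2831853e6 * tau); }

int main(void)
{
    static double src[3 * N_SRC];            /* source geometry, assumed distributed beforehand */
    double point[4], result;                 /* t, x, y, z of one field point */
    double dS = 1.0e-6, c = 1500.0;
    int master = pvm_parent();               /* task id of the spawning master */

    pvm_recv(master, TAG_WORK);              /* blocking receive of one work unit */
    pvm_upkdouble(point, 4, 1);              /* unpack t, x, y, z */

    result = rayleigh_pressure(point[0], point[1], point[2], point[3],
                               src, vel, dS, N_SRC, c);

    pvm_initsend(PvmDataDefault);            /* start a new send buffer */
    pvm_pkdouble(&result, 1, 1);             /* pack the computed pressure */
    pvm_send(master, TAG_RESULT);            /* return it to the master */

    pvm_exit();
    return 0;
}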

o Benefits and drawbacks [3]

If very specific communication capabilities (e.g., non-blocking send) are required that PVM does not support, OpenMPI may be a more suitable choice.

Otherwise, PVM is scalable and portable to various architectures. The added transparency of one large virtual machine aids the development process by hiding unnecessary system details. It is particularly well suited to cluster computing, where nodes are tightly coupled and different clusters may perform different tasks.

[2] "What is MPI? What is OpenMPI?". http://www.open-mpi.org/video/general/what-is-%5Bopen%5D-mpi-2up.pdf
[3] Geist, Kohl and Papadopoulos, "PVM and MPI: A Comparison of Features", Calculateurs Paralleles, Vol. 8, No. 2 (1996).

• Grid computing

Grids represent the highest-level approach and also the most flexible. A computational grid is defined as a loosely coupled network of computers that are generally dispersed over different geographical regions and need not share the same architecture. A grid's architecture is therefore heterogeneous in nature but remains transparent to developers. Just as compute cycles are shared, so too are file systems and memory, which combine to form one large address space. With the addition of middleware on top of the communication mechanism (such as MPI), which itself sits atop the operating system, grid computing abstracts to a high degree the details that would otherwise detract from the actual development of distributed software. Furthermore, given their unified nature, grids are capable of directing compute-heavy tasks toward a common goal.

o Benefits and drawbacks [4]

The greatest asset of grid computing is the robust set of features the middleware affords the developer. These include workload management, job flow, and data management. Workload management facilitates distributing the workload to specific nodes or a cluster of nodes, updating job status, and collecting output. The executable software is wrapped in the concept of a job, which is described with the Job Description Language (JDL) and specifies the type of job, the number of compute elements required, and which stream to collect output from. Data management provides functions for file input and output.
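As an illustration only (the executable and file names below are assumptions, not taken from the source), a gLite-style JDL description of such a job might look as follows:

[
  Type          = "Job";
  Executable    = "rayleigh_field";
  Arguments     = "transducer.dat grid.dat";
  StdOutput     = "rayleigh.out";
  StdError      = "rayleigh.err";
  InputSandbox  = {"rayleigh_field", "transducer.dat", "grid.dat"};
  OutputSandbox = {"rayleigh.out", "rayleigh.err"};
]

The workload management system then matches the job to suitable compute elements, runs it, and returns the files listed in the output sandbox.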

Compared to the other approaches, grid computing is more complex. However, the rewards may be greater and, possibly, more could be accomplished, since the low-level details are already in place. A potential drawback of grids is the lack of portability between different middleware, since there is no established standard for creating distributed software.

[4] "gLite 3.1 User Guide". https://edms.cern.ch/file/722398/1.3/gLite-3-UserGuide.html
