PDC Lab 2-5

The document provides an overview of parallel and distributed computing, covering concepts such as OpenMP for parallel programming in C, parallel reduction for global sum, and matrix-vector multiplication. It also discusses the Monte Carlo method for estimating π using MPI, parallel sorting algorithms, and the efficient bitonic sort. Various code examples illustrate these concepts and their implementations.

Uploaded by Anmol Agarwal

1. Introduction to Parallel and Distributed Computing

Theory

Parallel computing involves executing multiple tasks simultaneously to improve performance.


OpenMP is a widely used API for parallel programming in C.

Code:

#include <stdio.h>
#include <omp.h>

int main() {
    // Each thread in the parallel region prints its own ID.
    #pragma omp parallel
    printf("Hello from thread %d\n", omp_get_thread_num());
    return 0;
}

2. Parallel Global Sum & Matrix-Vector Multiplication (OpenMP)

Theory

Global sum is computed with a parallel reduction: each thread accumulates a private partial sum, and the partial sums are combined at the end. Matrix-vector multiplication produces a result vector whose entries are row-by-vector dot products, so rows can be computed in parallel.

Code:

#include <stdio.h>
#include <omp.h>

int main() {
    int arr[4] = {1, 2, 3, 4}, sum = 0;

    // Reduction: each thread keeps a private partial sum, which OpenMP
    // combines into the shared variable when the loop finishes.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 4; i++) sum += arr[i];
    printf("Sum: %d\n", sum);

    int mat[2][2] = {{1, 2}, {3, 4}}, vec[2] = {1, 1}, res[2] = {0};

    // Rows are independent, so the outer loop parallelizes safely.
    #pragma omp parallel for
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            res[i] += mat[i][j] * vec[j];
    printf("Matrix-Vector: %d %d\n", res[0], res[1]);

    return 0;
}

3. Summation & Matrix-Vector Multiplication (OpenMP)

Theory

Summation again uses a parallel reduction, and matrix-vector multiplication follows the row-wise
approach: each output element is one row's dot product with the vector, computed independently.

Code:

#include <stdio.h>
#include <omp.h>

int main() {
    int nums[4] = {1, 2, 3, 4}, total = 0;

    // Parallel reduction over the array elements.
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < 4; i++) total += nums[i];
    printf("Sum: %d\n", total);

    int mat[2][2] = {{1, 2}, {3, 4}}, vec[2] = {1, 1}, res[2] = {0};

    // Row-wise parallelization: each iteration writes only res[i].
    #pragma omp parallel for
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            res[i] += mat[i][j] * vec[j];
    printf("Result: %d %d\n", res[0], res[1]);

    return 0;
}

4. Parallel Computation of PI & MPI Basics


Theory

The Monte Carlo method estimates π by sampling random points in the unit square and counting the
fraction that fall inside the quarter circle, which approaches π/4. MPI distributes the sampling
among multiple processes and combines the counts with a reduction.

Code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, inside = 0, total = 10000;
    double x, y, pi;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Seed each rank differently so processes sample different points.
    srand(rank + 1);

    for (int i = 0; i < total / size; i++) {
        x = (double)rand() / RAND_MAX;
        y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1) inside++;
    }

    int global_inside;
    MPI_Reduce(&inside, &global_inside, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        // total/size truncates, so divide by the number of points
        // actually sampled across all ranks.
        pi = 4.0 * global_inside / ((total / size) * size);
        printf("Estimated PI: %f\n", pi);
    }

    MPI_Finalize();
    return 0;
}

5. Parallel Sorting using MPI & OpenMP

Theory

Sorting can be parallelized with OpenMP or MPI by dividing the data among threads or processes,
sorting each chunk locally, and then combining the sorted chunks.

Code:

#include <stdio.h>
#include <mpi.h>

// Simple bubble sort used on each local chunk.
void sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - i - 1; j++)
            if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
}

int main(int argc, char *argv[]) {
    // Assumes exactly 2 processes: 8 elements scattered 4 per rank.
    int rank, size, n = 8, a[8] = {3, 7, 4, 8, 6, 2, 1, 5}, local[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Scatter(a, 4, MPI_INT, local, 4, MPI_INT, 0, MPI_COMM_WORLD);
    sort(local, 4);  // each rank sorts its own chunk
    MPI_Gather(local, 4, MPI_INT, a, 4, MPI_INT, 0, MPI_COMM_WORLD);

    // Rank 0 combines the sorted chunks (here by re-sorting the whole array).
    if (rank == 0) { sort(a, n); for (int i = 0; i < n; i++) printf("%d ", a[i]); }

    MPI_Finalize();
    return 0;
}

6. Exploring Parallel Sorting Algorithms (MPI & OpenMP)

Theory

Bitonic sort is a parallel-friendly sorting network: it runs in O(log² n) stages of independent
compare-exchange operations, which maps well to both OpenMP threads and MPI processes, though it
requires the input size to be a power of two.

Code:

#include <stdio.h>

#define N 8  // bitonic sort requires N to be a power of two

// Merge a bitonic sequence a[low..low+cnt) into order dir (1 = ascending).
void bitonicMerge(int a[], int low, int cnt, int dir) {
    if (cnt > 1) {
        int k = cnt / 2;
        // These compare-exchanges are independent, so this loop is the
        // natural place for a #pragma omp parallel for.
        for (int i = low; i < low + k; i++)
            if ((a[i] > a[i + k]) == dir) { int t = a[i]; a[i] = a[i + k]; a[i + k] = t; }
        bitonicMerge(a, low, k, dir);
        bitonicMerge(a, low + k, k, dir);
    }
}

// Build a bitonic sequence (ascending half, descending half), then merge it.
void bitonicSort(int a[], int low, int cnt, int dir) {
    if (cnt > 1) {
        int k = cnt / 2;
        bitonicSort(a, low, k, 1);       // first half ascending
        bitonicSort(a, low + k, k, 0);   // second half descending
        bitonicMerge(a, low, cnt, dir);
    }
}

int main() {
    int a[N] = {3, 7, 4, 8, 6, 2, 1, 5};
    bitonicSort(a, 0, N, 1);
    for (int i = 0; i < N; i++) printf("%d ", a[i]);
    return 0;
}
