
Mohammed VI Polytechnic University

TP3 - OpenMP (Parallel Sections, Single, Master, Synchronization)

Imad Kissami
March 6, 2025

Exercise 1: Work Distribution with Parallel Sections


1. Write a program that initializes an array of size N with random values.

2. Use #pragma omp sections to divide the work:

• Section 1: Compute the sum of all elements.
• Section 2: Compute the maximum value.
• Section 3: Compute the standard deviation.

3. Ensure that all computations run in parallel.

Exercise 2: Ordered Execution with Single


1. Implement an OpenMP program where multiple threads generate numbers in parallel.

2. Only one thread at a time should print a number using single.

3. The numbers must be printed in ascending order.

Example Output (using 4 threads):


Thread 3 generated value: 12
Thread 1 generated value: 27
Thread 2 generated value: 34
Thread 0 generated value: 89

Exercise 3: Exclusive Execution - Master vs Single


1. Write a program where:

• A master thread initializes a matrix.
• A single thread prints the matrix.
• All threads compute the sum of all elements in parallel.

2. Compare execution time with and without OpenMP.



Exercise 4: Barrier Synchronization


1. Implement a program where multiple threads execute different stages of computation:

• Stage 1: Read input data (only one thread should do this).
• Stage 2: All threads process the data in parallel.
• Stage 3: A single thread writes the final result.

2. Use barrier to enforce correct execution order.

Exercise 5: Load Balancing with Parallel Sections


1. Implement a task scheduling mechanism using parallel sections.

2. Simulate three different workloads:

• Task A (light computation)
• Task B (moderate computation)
• Task C (heavy computation)

3. Measure the execution time and optimize the workload distribution.

Exercise 6: Critical vs Atomic for Shared Counters


1. Implement a counter that multiple threads increment simultaneously.

2. Test the program using both:

• critical section
• atomic directive

3. Compare performance and explain the differences.

Exercise 7: Producer-Consumer Problem


• A producer generates values and places them in a shared buffer.

• A consumer retrieves these values and processes them.

• The two entities must be synchronized to avoid race conditions.

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 1000000

void fill_rand(int n, double *A) {
    for (int i = 0; i < n; i++)
        A[i] = rand() % 100;
}

double Sum_array(int n, double *A) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += A[i];
    return sum;
}

int main() {
    double *A, sum, runtime;
    int flag = 0; // synchronization variable

    A = (double *)malloc(N * sizeof(double));

    runtime = omp_get_wtime();

    fill_rand(N, A);       // producer fills the array

    sum = Sum_array(N, A); // consumer computes the sum

    runtime = omp_get_wtime() - runtime;

    printf("In %lf seconds, the sum is %lf\n", runtime, sum);

    free(A);
    return 0;
}
