# Introduction to MPI


## Reductions - exercise 1 : The central limit theorem

The central limit theorem (CLT) is one of the most famous theorems in statistics and probability. Very roughly, the CLT states that if you add several independent random variables together and repeat this process many times, the resulting distribution of the sums will be close to a normal distribution (a bell-shaped curve). We propose to illustrate this using a parallel program that will generate random numbers and add them!

A normal distribution follows the famous bell-shaped Gaussian curve.

For this, we will consider processes that each manage random variables. The basic algorithm is the following: every process draws a number of random variables (buffer_count in the stub below), based on the uniform generator of C++. Once every process has generated all of its random variables, we will use a reduction operation to sum the independent variables onto one process. At the end of the operation, process 0 should be left with a table of summed elements. The rest of the stub and the runner will take care of checking the result of your computation.

To improve the quality of the result, the sampling step is repeated several times internally on every process (1000 repetitions in the stub) in order to generate more variables. You will only have to find the right call for the reduction.

### Reduction

The reduction is done using the MPI_Reduce call. The prototype of this function is:

```cpp
int MPI_Reduce(void* send_data, void* recv_data, int count,
               MPI_Datatype type, MPI_Op op, int root,
               MPI_Comm communicator);
```

Remember that root is the rank of the process on which the reduction result will be stored. The operation op is one of MPI's predefined reduction operators, such as MPI_PROD, MPI_SUM, MPI_MIN or MPI_MAX (their meanings should be obvious by now).
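As a minimal sketch of the call (separate from the exercise itself; the variable names here are illustrative), each process can contribute a single integer and let process 0 collect the sum. Note that the send and receive buffers must be distinct (or you can pass MPI_IN_PLACE as send_data on the root):

```cpp
#include <iostream>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Every process contributes its own rank; MPI_SUM adds them on root 0.
    int contribution = rank;
    int total = 0;
    MPI_Reduce(&contribution, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    // Only the root holds the result: 0 + 1 + ... + (size - 1).
    if (rank == 0)
        std::cout << "Sum of ranks: " << total << std::endl;

    MPI_Finalize();
    return 0;
}
```

Run with, say, 4 processes and the root prints 6. On non-root ranks the contents of recv_data are undefined after the call.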

This playground was created on Tech.io, our hands-on, knowledge-sharing platform for developers.
```cpp
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    constexpr int buffer_count = 5000;
    float buffer[buffer_count];
    memset(buffer, 0, sizeof(buffer));

    // Uniform sampling: generate the numbers, doing 1000 repetitions
    for (int rep = 0; rep < 1000; ++rep) {
        for (int i = 0; i < buffer_count; ++i) {
            float val = (float)rand() / RAND_MAX;
            buffer[i] += val;
        }
    }

    // Reception buffer for the reduction result.
    float reception[buffer_count];
    memset(reception, 0, sizeof(reception));

    // TODO: call MPI_Reduce to sum all the variables over all the processes
    // and store the result in reception on process 0.
    // In the end, you should have buffer_count summed variables.

    // Now we print the results
    if (rank == 0) {
        for (int i = 0; i < buffer_count; ++i)
            std::cout << reception[i] << std::endl;
    }

    MPI_Finalize();
    return 0;
}
```
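If you want to try the stub outside the playground (assuming an MPI implementation such as Open MPI or MPICH is installed, and that the file is saved as clt.cpp — the file name is an assumption), you would compile with the MPI compiler wrapper and launch several processes:

```shell
# Compile with the MPI C++ wrapper, then run on 4 processes
mpicxx clt.cpp -o clt
mpirun -np 4 ./clt
```

Until the MPI_Reduce call is filled in, process 0 will only print the zero-initialized reception buffer.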