
## Dividing the Pie

Sending congratulatory messages is all very well, but it makes few people happy other than the Programmer and the master process.

So here is an example of a very simple program that calculates π by numerical integration. This, at least, should make some high school teachers happy.

#include <stdio.h>
#include <mpi.h>
#define FALSE 0
#define TRUE  1
#define MASTER_RANK 0

double f(double a)
{
   return (4.0 / (1.0 + a*a));
}

int main ( int argc, char **argv )
{
   int n, i, pool_size, my_rank, i_am_the_master = FALSE;
   double mypi, pi, h, sum, x;

   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD, &pool_size);
   MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

   if (my_rank == MASTER_RANK) i_am_the_master = TRUE;

   if (i_am_the_master) {
      printf("Enter the number of intervals: ");
      scanf("%d", &n);
      if (n == 0) n = 100;
   }

   MPI_Bcast(&n, 1, MPI_INT, MASTER_RANK, MPI_COMM_WORLD);

   h   = 1.0 / (double) n;
   sum = 0.0;
   for (i = my_rank + 1; i <= n; i += pool_size) {
      x = h * ((double)i - 0.5);
      sum += f(x);
   }
   mypi = h * sum;

   MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, MASTER_RANK,
              MPI_COMM_WORLD);

   if (i_am_the_master) printf("\npi is approximately %.16f\n", pi);

   MPI_Finalize();
   return 0;
}


Here is how this program compiles and runs:

gustav@sp19:../MPI 21:56:02 !520 $ mpcc -o pi pi.c
gustav@sp19:../MPI 21:56:59 !521 $ cat pi.ll
# @ shell = /afs/ovpit.indiana.edu/@sys/gnu/bin/bash
# @ job_type = parallel
# @ environment = COPY_ALL; MP_EUILIB=ip; MP_INFOLEVEL=2
# @ requirements = (Adapter == "hps_ip") && (Machine != "sp20") \
&& (Machine != "sp18")
# @ min_processors = 4
# @ max_processors = 8
# @ output = pi.out
# @ error = pi.err
# @ class = test
# @ queue
poe pi << EOF
300
EOF
gustav@sp19:../MPI 21:57:08 !522 $ llsubmit pi.ll
submit: The job "sp19.107" has been submitted.
gustav@sp19:../MPI 21:57:22 !523 $ cat pi.out
Enter the number of intervals:
pi is approximately 3.1415935795157193
gustav@sp19:../MPI 21:57:59 !524 $


The synopsis of the program:

• This program evaluates π by numerically evaluating the integral of 4/(1 + x²) over the interval [0, 1], which equals π.

• The master process reads the number of intervals from standard input; this number is then broadcast to the pool of processes.
• Having received the number of intervals, each process evaluates the total area of its own share of the rectangles under the curve.
• The contributions to the total area under the curve are collected from participating processes by the master process, which at the same time adds them up, and prints the result on standard output.
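
The underlying identity, written out explicitly (standard calculus, not part of the original text): since the derivative of arctan x is 1/(1 + x²),

```latex
\int_0^1 \frac{4}{1+x^2}\,dx \;=\; 4\arctan x \,\Big|_0^1 \;=\; 4\cdot\frac{\pi}{4} \;=\; \pi .
```

The program approximates this integral by the midpoint rule with n intervals of width h = 1/n:

```latex
\pi \;\approx\; h \sum_{i=1}^{n} \frac{4}{1 + x_i^2},
\qquad x_i = h\left(i - \tfrac{1}{2}\right).
```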

Now let us discuss the program in more detail.

Assume, as before, that you are one of the processes. So first you find out about the number of processes in the pool and then about your own rank number. Then you check if you happen to have been anointed to be the Master, and if you are, then you communicate with The User:

   if (i_am_the_master) {
      printf("Enter the number of intervals: ");
      scanf("%d", &n);
      if (n == 0) n = 100;
   }


Having obtained from The User the total number of intervals, you broadcast that number to all other processes in the pool thusly:

   MPI_Bcast(&n, 1, MPI_INT, MASTER_RANK, MPI_COMM_WORLD);


Note that MPI_Bcast is a collective operation: every process in the communicator calls it with the same arguments. On the master it sends n; on every other process it receives n. Now everybody gets down to work. Every process finds the width of an interval:

   h   = 1.0 / (double) n;

Initialises its own sum to zero, and commences the following computation:
   for (i = my_rank + 1; i <= n; i += pool_size) {
      x = h * ((double)i - 0.5);
      sum += f(x);
   }

What happens here is as follows. First you find the value of x at the middle of an interval. Then you evaluate 4/(1 + x²) at that point and add it to whatever has already been accumulated in your sum. Then you jump over to another interval, which is pool_size intervals to the right, and repeat the operation. In the meantime the other processes work on their own intervals, and when all is said and done, each of you will have a portion of the total sum in your local sum. In order to convert that portion of the total sum into a portion of the total integral, you need to multiply your local sum by the width of the interval, which is h:
   mypi = h * sum;

What you have at this stage is not the real π. It is only your portion of π. All those portions have to be added together to get the real π. This is done by sending your contributions to the master process, which performs the addition. The operation that accomplishes this is the reduction operation:
   MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, MASTER_RANK,
              MPI_COMM_WORLD);

In plain English the meaning of the above is as follows:
All processes in MPI_COMM_WORLD, including the master process, send one item of type MPI_DOUBLE from their buffer called mypi to the process whose rank is MASTER_RANK, i.e., to the master process. The master process performs the MPI_SUM operation on those contributions, i.e., adds them all together, and writes the result into its own buffer called pi. The master process is the only one here that knows the final outcome of the operation.
And so, the master process is also the one that writes it on standard output.

MPI provides a number of predefined reduction operations that can be used in this context, and you can define your own operations too. The predefined ones are:

   MPI_MAX    MPI_MIN    MPI_SUM     MPI_PROD
   MPI_LAND   MPI_BAND   MPI_LOR     MPI_BOR
   MPI_LXOR   MPI_BXOR   MPI_MAXLOC  MPI_MINLOC

Zdzislaw Meglicki
2001-02-26