
Manipulating Communicators

So far we have not done any explicit manipulation of communicators. In our example program that set up a communication framework for finite difference codes (e.g., a Laplace solver), we used a powerful wrapper, MPI_Cart_create, which took care of all the details of creating a new communicator.

In the following code we're going to do this ourselves, although the new communicator will not have a topology associated with it.

The general synopsis for the code that follows is simple. We're going to have one process responsible for generating sequences of random numbers. The remaining processes, including the one responsible for I/O, will form a new communicator, within which the computations on those sequences will be carried out.
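One way to build such a server-less communicator by hand is to extract the group behind MPI_COMM_WORLD, remove the server's rank from it, and create a new communicator on the reduced group. The sketch below assumes the random-number server is rank 0 of MPI_COMM_WORLD; the identifiers (workers, server) are ours, not from the text, and this is only one of several possible constructions (MPI_Comm_split would work too).

```c
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Group world_group, worker_group;
    MPI_Comm workers = MPI_COMM_NULL;
    int server = 0;                 /* rank of the random-number server (assumption) */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Extract the group of MPI_COMM_WORLD, exclude the server's rank
       from it, and build a new communicator on what remains. */
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_excl(world_group, 1, &server, &worker_group);
    MPI_Comm_create(MPI_COMM_WORLD, worker_group, &workers);

    /* On the server, workers is MPI_COMM_NULL; on every other process
       it is a valid communicator for the pooled computation. */

    if (workers != MPI_COMM_NULL) MPI_Comm_free(&workers);
    MPI_Group_free(&worker_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}
```

Note that MPI_Comm_create is collective over MPI_COMM_WORLD, so the server must call it too, even though it receives MPI_COMM_NULL.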

Having received a sequence from the random number generator, a worker process converts it into random x and y coordinates within the square $[-1,1]\times[-1,1]$. Some of the points generated this way will fall outside the circle of radius 1 centred at $(0,0)$, and some will fall inside it. If a point lies within the circle we add 1 to the in counter; otherwise we add 1 to the out counter. The number of randomly generated points that fall inside the circle is proportional to $\pi$, whereas the number of points that fall outside it is proportional to $4 - \pi$, with the total number of points being proportional to 4, because the surface area of the square $[-1,1]\times[-1,1]$ is 4 and the surface area of a circle of radius 1 is $\pi$. Therefore $4 \times \hbox{{\tt in}} / \left(\hbox{{\tt in}} + \hbox{{\tt out}}\right) \approx \pi$.

Every process can evaluate this estimate on its own. But if the processes pool their results, we end up with many more data points, and so our evaluation of $\pi$ becomes that much more accurate: the statistical error of a Monte Carlo estimate shrinks as $1/\sqrt{N}$, where $N$ is the total number of points.
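Within the new communicator the pooling is a single reduction. The sketch below assumes workers is the server-less communicator built earlier and that in and out hold each process's local counts; the function name is ours.

```c
#include <mpi.h>
#include <stdio.h>

/* Pool every worker's counters onto rank 0 of the workers communicator
   and report the combined estimate of pi there. */
void pool_and_report(long in, long out, MPI_Comm workers)
{
    long counts[2] = { in, out };
    long totals[2];
    int rank;

    MPI_Comm_rank(workers, &rank);

    /* Element-wise sum of { in, out } across all workers, onto rank 0. */
    MPI_Reduce(counts, totals, 2, MPI_LONG, MPI_SUM, 0, workers);

    if (rank == 0) {
        double pi = 4.0 * (double)totals[0]
                        / (double)(totals[0] + totals[1]);
        printf("pooled estimate of pi = %f\n", pi);
    }
}
```

Had we used MPI_COMM_WORLD here instead, the reduction would hang: the random-number server never calls it. Restricting the collective to the workers communicator is exactly what the manual communicator construction buys us.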



 
Zdzislaw Meglicki
2001-02-26