Hello World

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
        char name[MPI_MAX_PROCESSOR_NAME];
        int length;

        MPI_Init(&argc, &argv);                /* start up the MPI environment */
        MPI_Get_processor_name(name, &length); /* name of the node we run on */
        printf("%s: hello world\n", name);
        MPI_Finalize();                        /* shut MPI down */
        return 0;
}

This is the program we saw earlier, when we discussed running MPI programs under LoadLeveler.

The SP provides an MPI compiler wrapper, mpcc, that takes care of the includes and libraries. Compile and link this program in one step with:

gustav@sp20:../LoadLeveler 17:07:04 !513 $ mpcc mpi-hello.c -o mpi-hello
gustav@sp20:../LoadLeveler 17:07:23 !514 $
and run it by submitting the following LoadLeveler script:
gustav@sp20:../LoadLeveler 19:27:47 !647 $ cat mpi-hello.ll
# @ job_type = parallel
# @ environment = COPY_ALL; MP_EUILIB=ip; MP_INFOLEVEL=3
# @ requirements = (Adapter == "hps_ip")
# @ min_processors = 4
# @ max_processors = 8
# @ class = test
# @ notification = always
# @ executable = /usr/bin/poe
# @ arguments = mpi-hello
# @ output = mpi-hello.out
# @ error = mpi-hello.err
# @ queue
gustav@sp20:../LoadLeveler 19:27:49 !648 $ llsubmit mpi-hello.ll

If you want or need to exclude certain nodes from your processor pool, add the following to the requirements directive:

( Machine != "sp18" ) && ( Machine != "sp20" )
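
Combined with the adapter requirement already in the script, the full directive would then read as follows (the two node names are just the ones used above, as an illustration):

# @ requirements = (Adapter == "hps_ip") && (Machine != "sp18") && (Machine != "sp20")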

When the job completes, you should see something like:

gustav@sp20:../LoadLeveler 19:27:49 !648 $ cat mpi-hello.out
sp21.ucs.indiana.edu: hello world
sp19.ucs.indiana.edu: hello world
sp24.ucs.indiana.edu: hello world
sp20.ucs.indiana.edu: hello world
sp22.ucs.indiana.edu: hello world
sp23.ucs.indiana.edu: hello world
sp17.ucs.indiana.edu: hello world
sp18.ucs.indiana.edu: hello world
gustav@sp20:../LoadLeveler 19:29:11 !649 $
in your output file.

All MPI programs must begin with MPI_Init(&argc, &argv) and end with MPI_Finalize(). It is not an error to insert C or Fortran statements before MPI_Init or after MPI_Finalize, but the MPI standard does not say how, or even whether, such statements are executed: on one processor, say, or on all of them. In short, a program written that way will be unpredictable and non-portable. So, don't do it.
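
The one MPI call that may legally precede MPI_Init is MPI_Initialized, which tells you whether MPI has already been started. A minimal sketch of a program that checks before initializing:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
        int flag;

        MPI_Initialized(&flag);   /* the only MPI call allowed before MPI_Init */
        if (!flag)
                MPI_Init(&argc, &argv);
        /* ... parallel work goes here ... */
        MPI_Finalize();
        return 0;
}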

All MPI functions in the C interface begin with MPI_X, where X stands for the capital letter that begins the proper name of the function, e.g., MPI_Get_processor_name. The latter is one of the functions from the chapter about Environmental Enquiries.

You seldom need to know the name of the processor your MPI process runs on. The reason the MPI Founding Fathers provided this function at all is to allow for process migration. The idea is that a program may distribute itself over a number of workstations. If any one of those workstations is requested back by its ``owner'', your parallel program can migrate the process that runs on it elsewhere. It may then check occasionally whether the workstation has become available again, and if it has, move the process back.

In our short example, we use this function simply to demonstrate that the program indeed runs on multiple CPUs.

The printf statement assumes that all processes comprising an MPI program have access to standard output on the ``MPI console'': your VDU, if you run the program interactively, or the file that LoadLeveler writes standard output to. This assumption may or may not be satisfied by the hardware and software your MPI program runs on. It is not a requirement of the MPI standard.

There are systems where only some processes can do any I/O at all, and sometimes only one process can do standard I/O.
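
A portable way to cope with such systems is to funnel output through a single process, conventionally the one with rank 0. A minimal sketch of the pattern (our SP does not require it, since every process there can print):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
        if (rank == 0)                          /* only rank 0 writes */
                printf("hello world from the rank 0 process only\n");
        MPI_Finalize();
        return 0;
}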

MPI provides means to check for that. By calling the function

MPI_Attr_get

you can inspect the values of various MPI attributes that are set dynamically when the program begins its execution. Amongst these are

MPI_HOST
which specifies the rank of the process that runs on the host machine. Some parallel computers must run off a host machine, e.g., the Connection Machine always had to be front-ended by a Sun or a VAX, and it was possible to run an MPI job in such a way that one of the processes ran on that front-end machine. That would be the host process.

MPI_IO
which specifies the rank of a process that can perform regular I/O. Every process can use this attribute to find out on its own whether it has I/O. The processes can then communicate this amongst themselves, identify the group of processes that support regular I/O, and redirect all I/O through them.
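
The value associated with MPI_IO may be MPI_ANY_SOURCE, meaning that every process can perform I/O, MPI_PROC_NULL, meaning that no process can, or the rank of a process that can. A minimal sketch of the enquiry (note that for these predefined attributes MPI_Attr_get returns a pointer to the value):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
        int *io_rank, flag;

        MPI_Init(&argc, &argv);
        MPI_Attr_get(MPI_COMM_WORLD, MPI_IO, &io_rank, &flag);
        if (flag) {                             /* attribute is defined */
                if (*io_rank == MPI_ANY_SOURCE)
                        printf("every process can perform I/O\n");
                else if (*io_rank == MPI_PROC_NULL)
                        printf("no process can perform I/O\n");
                else
                        printf("process %d can perform I/O\n", *io_rank);
        }
        MPI_Finalize();
        return 0;
}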

