Hello World

Program hellow2, which you have (hopefully) executed on the MPICH2 engine in section 5.1.5, is a very slight modification of a similar program distributed with the MPICH2 source. We are going to have a look at this program here, and I am also going to show you how to compile, link, and install it.

Here is the program:

gustav@bh1 $ cat hellow2.c
/*
 * Find about the size of the communicator, your rank within it, and the
 * name of the processor you run on. 
 *
 * %Id: hellow2.c,v 1.1 2003/09/29 15:58:12 gustav Exp %
 *
 * %Log: hellow2.c,v %
 * Revision 1.1  2003/09/29 15:58:12  gustav
 * Initial revision
 *
 *
 */

#include <stdio.h>  /* printf and BUFSIZ defined there */
#include <stdlib.h> /* exit defined there */
#include <mpi.h>    /* all MPI-2 functions defined there */

int main(int argc, char *argv[])
{
   int rank, size, length;
   char name[BUFSIZ];

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &size);
   MPI_Get_processor_name(name, &length);

   printf("%s: hello world from process %d of %d\n", name, rank, size);

   MPI_Finalize();

   exit(0);
}
gustav@bh1 $
You have already seen how this program runs in section 5.1.5.

All MPI programs must begin with MPI_Init(&argc, &argv) and end with MPI_Finalize(). It is not an error to insert non-MPI program statements in front of MPI_Init or after MPI_Finalize.
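
To make the bracketing concrete, here is a minimal sketch of my own (not part of the course materials): ordinary C statements may appear before MPI_Init and after MPI_Finalize, but every MPI call must fall between the two.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   printf("plain C before MPI_Init is allowed\n");        /* non-MPI statement */

   MPI_Init(&argc, &argv);
   /* ... all MPI calls belong here ... */
   MPI_Finalize();

   printf("plain C after MPI_Finalize is allowed too\n"); /* non-MPI statement */
   return 0;
}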

Function MPI_Init initializes the MPI system: MPI processes are spawned and ranked, communication channels are established, and the default communicator, MPI_COMM_WORLD, is created. This function must be called before any other MPI function, and it must be called only once: subsequent calls to MPI_Init produce an error.
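
The MPI-2 function MPI_Initialized lets a piece of code check whether MPI_Init has already been called before calling it itself. The helper below, ensure_mpi_initialized, is my own illustrative sketch, not something hellow2 needs:

#include <mpi.h>

/* Call MPI_Init only if nobody has called it yet: a second call
   to MPI_Init is an error. */
void ensure_mpi_initialized(int *argc, char ***argv)
{
   int already_up;
   MPI_Initialized(&already_up);
   if (!already_up)
      MPI_Init(argc, argv);
}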

At the end of an MPI program you must always call MPI_Finalize. This function cleans up the MPI machine; once it has been called, no other MPI function will work. Exiting without calling MPI_Finalize results in an error. But this function alone is not equivalent to the UNIX function exit: it shuts down the MPI machine, not the UNIX process.
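
MPI-2 also provides the mirror function MPI_Finalized, with which a cleanup routine can check whether the MPI machine is still up before attempting any MPI calls. The function report_rank_if_possible below is again my own illustrative sketch rather than part of hellow2:

#include <stdio.h>
#include <mpi.h>

/* Report our rank, but only while the MPI machine is still running:
   after MPI_Finalize no MPI function may be called, even though the
   UNIX process itself keeps going. */
void report_rank_if_possible(void)
{
   int finalized, rank;
   MPI_Finalized(&finalized);
   if (finalized) {
      printf("MPI has been finalized, but this process is still alive\n");
      return;
   }
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   printf("MPI still up, my rank is %d\n", rank);
}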

Between MPI_Init and MPI_Finalize you have an MPI program. Let's see what's there in our hellow2 example.

The first call to MPI_Comm_rank informs every process that participates in the MPI communicator about its rank number. This is how processes can find out who they are and what their role is within the pool. This function takes two arguments. The first one is the MPI communicator, and here we use the default communicator, MPI_COMM_WORLD, which the MPI engine creates at the very beginning. All processes invoked by the program belong to this communicator. The second argument is a pointer to an integer; on return, each process finds its rank number written at that location.

Why should the processes bother about their rank numbers? So that they can differentiate their actions depending on this number; otherwise they would all have to do exactly the same thing. In the case of this simple example program, each MPI process will write a somewhat different message on standard output.
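
For instance, here is a sketch of my own (not part of hellow2) in which process 0 takes on the role of a coordinator and all the other processes act as workers:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   int rank;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

   if (rank == 0)
      printf("I am the coordinator\n");          /* only rank 0 does this */
   else
      printf("I am worker number %d\n", rank);   /* everybody else does this */

   MPI_Finalize();
   return 0;
}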

The next call, to MPI_Comm_size, tells the participating processes the total number of processes in this communicator. This is also something they need to know. For example, if the program is to work on a very long array, the processes have to know how many of them there are in the pool in order to work out for themselves which portion of the array each should work on. Function MPI_Comm_size takes the MPI communicator as the first argument and returns the size of the communicator in the location pointed to by the second argument.
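
To make the long-array example concrete, here is a sketch of my own showing the arithmetic each process might perform to claim its own slice of an array of N elements (the value of N is, of course, made up):

#include <stdio.h>
#include <mpi.h>

#define N 1000000   /* length of a hypothetical global array */

int main(int argc, char *argv[])
{
   int rank, size, chunk, first, last;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &size);

   /* Block decomposition: each process takes a contiguous slice,
      and the last process absorbs the remainder if N does not
      divide evenly by the number of processes. */
   chunk = N / size;
   first = rank * chunk;
   last  = (rank == size - 1) ? N - 1 : first + chunk - 1;

   printf("process %d of %d works on elements %d through %d\n",
          rank, size, first, last);

   MPI_Finalize();
   return 0;
}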

Finally we call function MPI_Get_processor_name. We call this function only out of curiosity: it is seldom necessary for the processes themselves to know which physical processors they run on. But this function is there in the MPI standard, where it was meant to be of use in process migration. Here we simply use it to show that the MPI processes indeed run on different CPUs. We could just as well have used the standard UNIX function gethostname, but that would return the same name for every process if the MPI program were to run on a large SMP, or on a single large parallel machine like, say, a Cray X1. Function MPI_Get_processor_name, on the other hand, will in such a case still return an identifier specific to the physical processor a given MPI process runs on.
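
If you want to see the difference for yourself, the following sketch of mine prints both names side by side; gethostname comes from <unistd.h>, and MPI_MAX_PROCESSOR_NAME bounds the length of the name MPI_Get_processor_name may return:

#include <stdio.h>
#include <unistd.h>   /* gethostname */
#include <mpi.h>

int main(int argc, char *argv[])
{
   int rank, length;
   char mpi_name[MPI_MAX_PROCESSOR_NAME];
   char unix_name[256];

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Get_processor_name(mpi_name, &length);
   gethostname(unix_name, sizeof(unix_name));

   printf("rank %d: MPI says \"%s\", UNIX says \"%s\"\n",
          rank, mpi_name, unix_name);

   MPI_Finalize();
   return 0;
}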

After the MPI processes have collected all this information, each of them prints on standard output a message giving the name of the processor it runs on, its rank, and the size of the communicator.

Now, how to make and install this program? Its Makefile looks as follows:

gustav@bh1 $ cat Makefile
#
# %Id: Makefile,v 1.1 2003/09/29 16:20:16 gustav Exp %
#
# %Log: Makefile,v %
# Revision 1.1  2003/09/29 16:20:16  gustav
# Initial revision
#
#
DESTDIR = /N/B/gustav/bin
MANDIR  = /N/B/gustav/man/man1
CC = cc
CFLAGS = -I/N/hpc/mpich2/include
LIBS = -L/N/hpc/mpich2/lib
LDFLAGS = -lmpich
TARGET = hellow2

all: $(TARGET)

$(TARGET): $(TARGET).o
        $(CC) -o $@ $(TARGET).o $(LIBS) $(LDFLAGS)

$(TARGET).o: $(TARGET).c
        $(CC) $(CFLAGS) -c $(TARGET).c

install: all $(TARGET).1
        [ -d $(DESTDIR) ] || mkdirhier $(DESTDIR)
        install $(TARGET) $(DESTDIR)
        [ -d $(MANDIR) ] || mkdirhier $(MANDIR)
        install $(TARGET).1 $(MANDIR)

clean:
        rm -f *.o $(TARGET)

clobber: clean
        rcsclean
gustav@bh1 $
The CFLAGS tell the compiler to look for the MPI-2 definitions (mpi.h) in /N/hpc/mpich2/include. Similarly, we tell the loader that it should look for libraries in /N/hpc/mpich2/lib. Finally, the specific library that should be used is libmpich.a, which is what the switch -lmpich means.

Here's how we make the program:

gustav@bh1 $ make
co  RCS/Makefile,v Makefile
RCS/Makefile,v  -->  Makefile
revision 1.1
done
co  RCS/hellow2.c,v hellow2.c
RCS/hellow2.c,v  -->  hellow2.c
revision 1.1
done
cc -I/N/hpc/mpich2/include -c hellow2.c
cc -o hellow2 hellow2.o -L/N/hpc/mpich2/lib -lmpich
gustav@bh1 $
And now we install it:
gustav@bh1 $ make install
co  RCS/hellow2.1,v hellow2.1
RCS/hellow2.1,v  -->  hellow2.1
revision 1.1
done
[ -d /N/B/gustav/bin ] || mkdirhier /N/B/gustav/bin
install hellow2 /N/B/gustav/bin
[ -d /N/B/gustav/man/man1 ] || mkdirhier /N/B/gustav/man/man1
install hellow2.1 /N/B/gustav/man/man1
gustav@bh1 $
The manual page for the program describes how to run it under MPICH2:
HELLOW2(1)           I590 Programmer's Manual          HELLOW2(1)

NAME
       hellow2  -  for each MPI process print its rank number and
       name of the processor it runs on.

SYNOPSIS
       mpiexec -n <number-of-processes> hellow2

DESCRIPTION
       hellow2 prints the name of the processor, the process rank
       number  and the size of the communicator for each MPI pro-
       cess.

OPTIONS
       No hellow2 specific options

DIAGNOSTICS
       No hellow2 specific diagnostics

EXAMPLES
       $ mpdboot -n 8
       $ mpiexec -n 8 hellow2
       bc89: hello world from process 0 of 8
       bc31: hello world from process 2 of 8
       bc29: hello world from process 1 of 8
       bc33: hello world from process 3 of 8
       bc34: hello world from process 5 of 8
       bc30: hello world from process 4 of 8
       bc35: hello world from process 6 of 8
       bc32: hello world from process 7 of 8
       $ mpdallexit

AUTHOR
       The simplest MPI program possible. Author unknown.

I590/7462                  October 2003                HELLOW2(1)

You will find a script in /N/hpc/mpich2/bin called mpicc. This script knows about the location of the MPI includes and libraries and simplifies the compilation process. And so, instead of specifying the include directories and libraries explicitly, as I have done in the Makefile, you could simply compile this program as follows:

gustav@bh1 $ which mpicc
/N/hpc/mpich2/bin/mpicc
gustav@bh1 $ mpicc -o hellow2 hellow2.c
gustav@bh1 $
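
If you are curious about what mpicc does behind the scenes, MPICH's compilation scripts accept a -show switch, for example mpicc -show -o hellow2 hellow2.c, which prints the underlying compiler command instead of executing it (assuming the MPICH2 installation at your site provides this option). The command it prints should resemble the explicit cc invocation used in the Makefile above.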

