
What is in MPI?

A lot! MPI is a very comprehensive library, and MPI-2 contains nearly everything one could ask for, including some elements of fault tolerance, process migration, and so on.

The most fundamental functions in MPI, MPI_Send and MPI_Recv, provide not only for buffered sends and receives, but also for typing of transferred data. They also support the concept of a communicator. A communicator is a group of processes within which ranking, i.e., process differentiation, and communication take place. A single program may comprise several overlapping or completely separate communicators. This concept is crucial to implementations of parallel code libraries: without it, there would always be a risk of library communications interfering with other communications within the parallel program.
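For instance, a typed point-to-point exchange within the MPI_COMM_WORLD communicator may be sketched as follows (this fragment is illustrative only, not one of the Programming Examples that follow; the message tag 99 is an arbitrary choice, and the program must run on at least two processes):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;
    double data[4] = { 1.0, 2.0, 3.0, 4.0 };
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* The type argument, MPI_DOUBLE, tells MPI how to interpret,
           and on a heterogeneous network convert, the buffer. */
        MPI_Send(data, 4, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(data, 4, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, &status);
        printf("Process 1 received %f ... %f\n", data[0], data[3]);
    }

    MPI_Finalize();
    return 0;
}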

MPI provides a very rich set of collective communication functions, virtual topologies, hooks for debugging and profiling, blocking and non-blocking sends and receives, and support for heterogeneous networks. MPI-2 adds numerous clarifications; a specification of portable MPI process startup (previously every MPI implementation used its own way of starting MPI programs); new data type manipulation functions and new predefined types; support for dynamic process creation and management; support for one-sided communications, i.e., for the ability to write data directly into other processes' memories; portable high-performance I/O in the form of MPI-IO; and support for C++ and Fortran 90.
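As a taste of the collective functions, here is another minimal sketch (again my own illustration), in which every process contributes its rank and process 0 receives the sum; note that every process in the communicator must make the same MPI_Reduce call:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Collective: called by every process in MPI_COMM_WORLD. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("The sum of all ranks is %d\n", sum);

    MPI_Finalize();
    return 0;
}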

MPI is obviously a very large library. The listing below shows all 128 MPI-1 functions. MPI-2 adds at least as many again.

MPI_ABORT MPI_ADDRESS MPI_ALLGATHER MPI_ALLGATHERV
MPI_ALLREDUCE MPI_ALLTOALL MPI_ALLTOALLV MPI_ATTR_DELETE
MPI_ATTR_GET MPI_ATTR_PUT MPI_BARRIER MPI_BCAST
MPI_BSEND MPI_BSEND_INIT MPI_BUFFER_ATTACH MPI_BUFFER_DETACH
MPI_CANCEL MPI_CARTDIM_GET MPI_CART_COORDS MPI_CART_CREATE
MPI_CART_GET MPI_CART_MAP MPI_CART_RANK MPI_CART_SHIFT
MPI_CART_SUB MPI_COMM_COMPARE MPI_COMM_CREATE MPI_COMM_DUP
MPI_COMM_FREE MPI_COMM_GROUP MPI_COMM_RANK MPI_COMM_REMOTE_GROUP
MPI_COMM_REMOTE_SIZE MPI_COMM_SIZE MPI_COMM_SPLIT MPI_COMM_TEST_INTER
MPI_DIMS_CREATE MPI_ERRHANDLER_CREATE MPI_ERRHANDLER_FREE MPI_ERRHANDLER_GET
MPI_ERRHANDLER_SET MPI_ERROR_CLASS MPI_ERROR_STRING MPI_FINALIZE
MPI_GATHER MPI_GATHERV MPI_GET_COUNT MPI_GET_ELEMENTS
MPI_GET_PROCESSOR_NAME MPI_GRAPHDIMS_GET MPI_GRAPH_CREATE MPI_GRAPH_GET
MPI_GRAPH_MAP MPI_GRAPH_NEIGHBORS MPI_GRAPH_NEIGHBORS_COUNT MPI_GROUP_COMPARE
MPI_GROUP_DIFFERENCE MPI_GROUP_EXCL MPI_GROUP_FREE MPI_GROUP_INCL
MPI_GROUP_INTERSECTION MPI_GROUP_RANGE_EXCL MPI_GROUP_RANGE_INCL MPI_GROUP_RANK
MPI_GROUP_SIZE MPI_GROUP_TRANSLATE_RANKS MPI_GROUP_UNION MPI_IBSEND
MPI_INIT MPI_INITIALIZED MPI_INTERCOMM_CREATE MPI_INTERCOMM_MERGE
MPI_IPROBE MPI_IRECV MPI_IRSEND MPI_ISEND
MPI_ISSEND MPI_KEYVAL_CREATE MPI_KEYVAL_FREE MPI_OP_CREATE
MPI_OP_FREE MPI_PACK MPI_PACK_SIZE MPI_PCONTROL
MPI_PROBE MPI_RECV MPI_RECV_INIT MPI_REDUCE
MPI_REDUCE_SCATTER MPI_REQUEST_FREE MPI_RSEND MPI_RSEND_INIT
MPI_SCAN MPI_SCATTER MPI_SCATTERV MPI_SEND
MPI_SENDRECV MPI_SENDRECV_REPLACE MPI_SEND_INIT MPI_SSEND
MPI_SSEND_INIT MPI_START MPI_STARTALL MPI_TEST
MPI_TESTALL MPI_TESTANY MPI_TESTSOME MPI_TEST_CANCELLED
MPI_TOPO_TEST MPI_TYPE_COMMIT MPI_TYPE_CONTIGUOUS MPI_TYPE_EXTENT
MPI_TYPE_FREE MPI_TYPE_HINDEXED MPI_TYPE_HVECTOR MPI_TYPE_INDEXED
MPI_TYPE_LB MPI_TYPE_SIZE MPI_TYPE_STRUCT MPI_TYPE_UB
MPI_TYPE_VECTOR MPI_UNPACK MPI_WAIT MPI_WAITALL
MPI_WAITANY MPI_WAITSOME MPI_WTICK MPI_WTIME
But you don't have to panic. In a great many cases you can do all you need with just six fundamental MPI functions:
MPI_Init
Initialize MPI processes;
MPI_Comm_size
Find out the number of processes in the MPI communicator;
MPI_Comm_rank
Find out the rank of this process within the MPI communicator;
MPI_Send
Send a message;
MPI_Recv
Receive a message;
MPI_Finalize
Close down MPI processes and prepare for exit.
All the other functions, 128 minus 6, are auxiliary. They make the programmer's life much, much easier by encapsulating frequently used parallel programming procedures in convenience functions.
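To see that the six really do suffice, consider the following minimal sketch (mine, not one of the Programming Examples that follow), in which every process of rank greater than 0 reports to process 0:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, i, who;
    MPI_Status status;

    MPI_Init(&argc, &argv);                 /* initialize MPI      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes? */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which one am I?     */

    if (rank != 0) {
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        for (i = 1; i < size; i++) {
            MPI_Recv(&who, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &status);
            printf("Process 0 heard from process %d\n", who);
        }
    }

    MPI_Finalize();                         /* clean up and exit   */
    return 0;
}

With most implementations such a program compiles with mpicc and runs under mpiexec (or mpirun), e.g., mpiexec -n 4 ./a.out.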

