
MPI Analysis and Profiling Tools

Consider the following simple HPF program:

program jacobi

  implicit none

!hpf$ nosequence

  integer, parameter :: n = 40, iterations = 1000
  integer, dimension(n), parameter :: north_boundary = 1, &
       east_boundary = 40, west_boundary = 40, south_boundary = 70
  integer, dimension(n, n) :: field = 3
  logical, dimension(n, n) :: mask = .true.
  integer :: i

! Keep mask aligned with field, and give each processor a
! contiguous block of columns of both arrays.
!hpf$ align mask(:,:) with field
!hpf$ distribute (*, block) :: field

  ! Impose the fixed boundary values and use mask to exclude the
  ! boundary cells from the iteration.
  field(ubound(field, dim=1), :) = east_boundary
  mask(ubound(mask, dim=1), :) = .false.
  field(lbound(field, dim=1), :) = west_boundary
  mask(lbound(mask, dim=1), :) = .false.
  field(:, ubound(field, dim=2)) = north_boundary
  mask(:, ubound(mask, dim=2)) = .false.
  field(:, lbound(field, dim=2)) = south_boundary
  mask(:, lbound(mask, dim=2)) = .false.

  call print_matrix(field)

  ! Masked Jacobi sweeps: each interior point is replaced by the
  ! average of its four nearest neighbours, while the mask keeps
  ! the boundary values fixed.
  do i = 1, iterations
     where (mask)
        field = (eoshift(field, 1, dim=1) + eoshift(field, -1, dim=1) &
             + eoshift(field, 1, dim=2) + eoshift(field, -1, dim=2)) * 0.25
     end where
  end do

  call print_matrix(field)

contains

  subroutine print_matrix(field)
    ! Print the field so that the north boundary (the highest second
    ! index) appears at the top of the page.
    integer, dimension(:,:) :: field
    integer :: i
    write(*, '(1x)')
    do i = size(field, dim=2), 1, -1
       write(*, '(1x, 40i3)') field(:,i)
    end do
  end subroutine print_matrix

end program jacobi
This is the simple Jacobi iteration program we have worked on in P573. I have changed the size of the matrix to $40\times40$.

Compile this program on the SP as follows:

gustav@sp20:../jacobi 17:14:24 !520 $ xlhpf90 -g -o jacobi jacobi.f
** jacobi   === End of Compilation 1 ===
1501-510  Compilation successful for file jacobi.f.
gustav@sp20:../jacobi 17:14:31 !521 $
Observe the presence of the -g switch, which adds debugging information to the binary.

We will now run this program with tracing turned on. There are two ways to do that. One is to define

$ export MP_TRACELEVEL=9
in your environment and then run the program under poe. The other is to invoke poe with the -tracelevel 9 option; a sketch of such an interactive invocation follows the LoadLeveler example below. If you wish to run the program under LoadLeveler, submit the following job description file:
gustav@sp20:../jacobi 17:14:31 !521 $ cat jacobi.ll
# @ job_type = parallel
# @ environment = COPY_ALL; MP_EUILIB=us; MP_INFOLEVEL=6; MP_TRACELEVEL=9
# @ requirements = (Adapter == "hps_user")
# @ min_processors = 4
# @ max_processors = 8
# @ output = jacobi.out
# @ error = jacobi.err
# @ executable = /usr/bin/poe
# @ arguments = jacobi
# @ notification = always
# @ class = test
# @ queue
gustav@sp20:../jacobi 17:18:17 !522 $ llsubmit jacobi.ll
submit: The job "sp20.188" has been submitted.
gustav@sp20:../jacobi 17:18:29 !523 $
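If you would rather run interactively, you can produce the same trace without LoadLeveler. The command below is only a sketch: -procs and -tracelevel are genuine poe options, but the number of processes and any node allocation settings will depend on your site's configuration:

$ poe jacobi -procs 4 -tracelevel 9

This is equivalent to exporting MP_PROCS=4 and MP_TRACELEVEL=9 in your environment and then running poe jacobi.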
Observe that this time I have requested that the communication take place through user space rather than through TCP/IP. On P2SC nodes this is a more efficient way of transmitting messages between processors; on the so-called Silver Nodes, and on the new Power-3 nodes, this is apparently no longer the case.
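If you wanted to force the TCP/IP transport instead, the change, as far as POE is concerned, would be the MP_EUILIB setting in the environment line:

# @ environment = COPY_ALL; MP_EUILIB=ip; MP_INFOLEVEL=6; MP_TRACELEVEL=9

The Adapter requirement would have to change correspondingly, since hps_user refers specifically to the user space interface of the switch.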

When the job completes, apart from the usual jacobi.err and jacobi.out files, there should be a new file in your working directory, called jacobi.trc. This is the trace file.

We can now look at it with the visualisation tool, vt.

gustav@sp20:../jacobi 17:21:58 !528 $ vt -tracefile jacobi.trc &
[1] 25230
gustav@sp20:../jacobi 17:23:34 !529 $
When vt comes up it will flash a window saying:
      Postprocessing, please wait...
This may take a while, because the trace file in this case is nearly 17MB. When vt has finished postprocessing the trace file, it will say so. At this stage we can begin looking at the program.

vt will bring up two main windows. The first one contains the nowadays familiar-looking push buttons for play, step, loop, reset, and stop. There is also a speed scrollbar on the right and a Tracefile Time Control at the bottom.

The second window contains multiple push buttons for selecting various views. The view you want to select initially is Communication/Program. Press this button. The window that comes up will initially be black. Now press the play button and watch the action unfold.

You will see a number of ``thermometer'' displays. Initially these will scroll slowly, without much action at all, until eventually you should see straight lines appear, connecting various points on those ``thermometers''.

Every ``thermometer'' represents an MPI process. The fields of various colours on those ``thermometers'' represent various activities. You can point at any particular field and click the left mouse button to see what a given processor was doing at that stage. If you click on a gray field, a small window will pop up telling you:

processor 4:0: No Communications
If you click on the blue field, the pop up window will say:
processor 3:0: MPI Wait
And if you click on the pink field, the pop up window will say:
processor 5:0: MPI Immediate Receive
As the replay of the parallel program unfolds, you can see that there is a lot of MPI Wait in it. The white lines connecting the ``thermometers'' show which processes are connected by message pipes at a given stage.
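Where do all these waits and immediate receives come from? The eoshift calls on the block-distributed field force neighbouring processes to exchange boundary columns on every iteration. xlhpf90 generates that communication for us, so the following is merely a minimal hand-written sketch of such a halo exchange, not the code the compiler actually emits; the program and all names in it are hypothetical:

program halo_sketch
  implicit none
  include 'mpif.h'
  integer, parameter :: n = 40
  integer :: rank, nprocs, left, right, ierr
  integer :: requests(2), statuses(MPI_STATUS_SIZE, 2)
  integer, dimension(n) :: edge, ghost_left, ghost_right

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  ! Neighbours in the (*, block) column distribution; the end
  ! processes talk to MPI_PROC_NULL, which turns the calls into no-ops.
  left  = rank - 1
  right = rank + 1
  if (left < 0) left = MPI_PROC_NULL
  if (right > nprocs - 1) right = MPI_PROC_NULL

  edge = rank   ! stand-in for a boundary column of the local block

  ! Post immediate receives for the ghost columns: this is what vt
  ! reports as "MPI Immediate Receive".
  call MPI_IRECV(ghost_left, n, MPI_INTEGER, left, 0, &
       MPI_COMM_WORLD, requests(1), ierr)
  call MPI_IRECV(ghost_right, n, MPI_INTEGER, right, 1, &
       MPI_COMM_WORLD, requests(2), ierr)

  ! Ship our own edge columns to the neighbours.
  call MPI_SEND(edge, n, MPI_INTEGER, right, 0, MPI_COMM_WORLD, ierr)
  call MPI_SEND(edge, n, MPI_INTEGER, left, 1, MPI_COMM_WORLD, ierr)

  ! Block until both ghosts have arrived: this is where the "MPI Wait"
  ! time accumulates while a process sits idle.
  call MPI_WAITALL(2, requests, statuses, ierr)

  call MPI_FINALIZE(ierr)
end program halo_sketch

The pre-posted receives let the sends complete without deadlock, but a process that finishes its local arithmetic early has nothing to do in MPI_WAITALL until its neighbours' columns arrive, which is why so much of the trace is spent in MPI Wait.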

If you right-click on the Interprocessor Communication window you will get another menu. There you can select various options, such as Search, Parameters, and Configuration. Go to Parameters. Here you will see what it is that the colours in the ``thermometers'' correspond to. The default is Communication. But if you left-click on the keySpectrum window, you can change that to Random, Fade, Monochrome, Discrete, Continuous, and CPU Load.

Now go back to the VT View Selector window and select another view, for example, Message Status Matrix. Rewind the trace and replay it.

The Message Status Matrix is an $n\times n$ matrix, where n is the number of processes. When there is a communication from, say, process 3 to process 5, the square that corresponds to position (3,5) in the matrix lights up (in my case it actually goes black). You can change that by right-clicking on the matrix window and selecting Parameters in the little pop-up menu.

Observe that for this program messages appear initially over the whole body of the matrix, but eventually they all fill just the last column of the matrix. When you get to this stage, stop the time scroll, and then left-click on any of the lit-up squares. You should see a message in a pop-up window similar to this one:

17:18:39.048969608
Messages sent from 1 to 7:1 
MessageLength=800
CummLength=800
The fact that it is only the last column that is filled means that all messages are gathered by process rank 7, which in this case must have been made responsible for coordinating the work of the Jacobi iterator.
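One plausible source of this traffic is the final print_matrix call: the whole distributed field has to be collected by a single process before it can be printed. Hand-coded in MPI, that kind of collection might look like the sketch below; it is hypothetical (HPF hides the real mechanism from us) and it assumes that the number of processes divides n evenly:

program gather_sketch
  implicit none
  include 'mpif.h'
  integer, parameter :: n = 40
  integer :: rank, nprocs, ncols, ierr
  integer, allocatable :: local(:,:), whole(:,:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  ncols = n / nprocs                  ! assume nprocs divides n
  allocate (local(n, ncols))
  local = rank                        ! stand-in for the local block
  if (rank == nprocs - 1) then
     allocate (whole(n, n))           ! only the collector needs the whole field
  else
     allocate (whole(1, 1))           ! dummy buffer on the other ranks
  end if
  ! Everybody sends its block of columns to the last rank, filling
  ! the last column of the Message Status Matrix.
  call MPI_GATHER(local, n*ncols, MPI_INTEGER, &
                  whole, n*ncols, MPI_INTEGER, &
                  nprocs - 1, MPI_COMM_WORLD, ierr)
  call MPI_FINALIZE(ierr)
end program gather_sketch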

Another push button in the VT View Selector window, called Connectivity Graph, displays similar information, but this time processes are represented by dots placed on the perimeter of a circle. Arcs connecting the dots appear as messages begin to fly between processes. As processes perform various actions, e.g., No Communications or MPI Blocking Send, the dots that correspond to those processes change colours. You can stop the trace replay at any stage and left-click on any of the dots to see what it has been up to at that instant.

You can also select Source Code in the VT View Selector panel. As the replay proceeds, the source lines currently being executed are highlighted. This is where the -g switch comes in: it is the debugging information that lets vt map trace events back onto lines of jacobi.f.



 