
Exercises

1. You should try all of this out from your own account. Try asking for a larger number of nodes; remember to change both the PBS directive -l nodes= and the NODES= variable in the shell script so that the two agree, as in the sketch below.
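A minimal sketch of the two lines that have to be kept in step, assuming a PBS script along the lines of the one from the preceding section (the surrounding lines of your actual script will differ, and 16 is just an example value):

#PBS -l nodes=16          # ask PBS for 16 nodes (example value)
# ... the rest of the PBS directives and the script body ...
NODES=16                  # must match the -l nodes= request above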
2. There is another MPI program in /N/B/gustav/bin, called cpi. Copy this program to your $HOME/bin and modify the script above to execute cpi instead of hellow2. What does this program do?
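The mechanics might look as follows. This is only a sketch; it assumes your copy of the batch script runs hellow2 through a plain mpiexec line like the one shown interactively further down:

cp /N/B/gustav/bin/cpi $HOME/bin
# in your copy of the PBS script, change the line that runs hellow2, e.g.
#   mpiexec -n 8 hellow2
# into
#   mpiexec -n 8 cpi

Submit the modified script with qsub as before and read its output to answer the question.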
3. You can start up, manipulate, and close the MPICH2 engine interactively from the head node too, but only if you're quick; otherwise the skulker that runs on the computational nodes will kill your MPD processes there. Try the following from your account:
gustav@bh1 $ cd
gustav@bh1 $ pwd
/N/B/gustav
gustav@bh1 $ ls -l mpd.hosts
-rw-r--r--    1 gustav   ucs            77 Oct  1 16:48 mpd.hosts
gustav@bh1 $ cat mpd.hosts
bc55-myri0
bc54-myri0
bc53-myri0
bc49-myri0
bc48-myri0
bc47-myri0
bc46-myri0
gustav@bh1 $ mpdboot
gustav@bh1 $
Observe that we have started mpdboot without any options. By default mpdboot checks whether there is a file called mpd.hosts in your working directory. If such a file exists, mpdboot reads host names from it and tries to spawn mpds on those hosts.
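For the record, the same ring can be requested with explicit options rather than the defaults. With the standard MPICH2 mpdboot switches (check mpdboot --help on your installation), -f names the hosts file and -n gives the total number of mpds to start, counting the one on the head node:

mpdboot -n 8 -f mpd.hosts

The session then continues: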
gustav@bh1 $ mpdtrace -l
bh1_49212
bc47_34173
bc48_33601
bc46_34106
bc53_34145
bc49_34746
bc55_34766
bc54_35437
gustav@bh1 $ mpdringtest 1000
time for 1000 loops = 1.432543993 seconds
gustav@bh1 $ mpdrun -l -n 8 hostname
0: bh1
2: bc48
5: bc49
1: bc47
6: bc55
3: bc46
7: bc54
4: bc53
gustav@bh1 $ mpiexec -n 8 hellow2
bh1: hello world from process 0 of 8
bc48: hello world from process 2 of 8
bc47: hello world from process 1 of 8
bc46: hello world from process 3 of 8
bc49: hello world from process 5 of 8
bc53: hello world from process 4 of 8
bc55: hello world from process 6 of 8
bc54: hello world from process 7 of 8
gustav@bh1 $ mpdallexit
gustav@bh1 $
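If you end up doing this often, the interactive steps can be collected into a small helper script. The sketch below is hypothetical and built only from the commands shown above; the count of 8 corresponds to the seven hosts in mpd.hosts plus the head node:

#!/bin/bash
# ring.sh -- hypothetical helper: boot the MPD ring, run a program, close the ring
set -e
cd "$HOME"                # mpdboot looks for mpd.hosts in the working directory
mpdboot                   # spawn mpds on the hosts listed in mpd.hosts
trap mpdallexit EXIT      # close the ring even if the run fails
mpdtrace -l               # list the mpds that came up
mpiexec -n 8 "$@"         # run whatever program was named on the command line

Invoked as ring.sh hellow2 it reproduces the session above. Remember that the skulker may still kill the remote mpds if your program runs for too long.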

Please contact me if any of the above doesn't work for you.



Zdzislaw Meglicki
2004-04-29