
Shared Memory Paradigm

The shared memory paradigm is essentially restricted to shared memory systems, SMPs, although it can be simulated on distributed memory systems too, albeit not without some cost. The basic idea is that a normal UNIX program spawns threads. Each thread is given a separate CPU to run on, if one is available. The threads communicate with one another by writing data to shared memory and then reading it back.
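To make this concrete, below is a minimal sketch of the idea using POSIX threads (pthreads), one common UNIX threading interface. The worker function and the results array are just illustrative names, and error checking is omitted:

    /* A minimal sketch of the shared memory paradigm with POSIX threads:
       each thread writes its result into an array in shared memory, and
       the main thread reads the results back after joining.
       Compile with, e.g., cc -pthread prog.c (flags vary by compiler). */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    static int results[NTHREADS];   /* shared memory, visible to all threads */

    static void *worker(void *arg)
    {
        int id = *(int *) arg;
        results[id] = id * id;      /* communicate by writing to shared memory */
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NTHREADS];
        int ids[NTHREADS];

        for (int i = 0; i < NTHREADS; i++) {
            ids[i] = i;
            pthread_create(&threads[i], NULL, worker, &ids[i]);
        }
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);  /* wait, then read shared results */

        for (int i = 0; i < NTHREADS; i++)
            printf("thread %d produced %d\n", i, results[i]);
        return 0;
    }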

Recall that when a traditional fork is called under UNIX, it copies the whole process into a separate memory location, and the two processes, in principle, do not share memory. But threads do.
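A few lines of C make the difference visible; in this sketch the child's write to its copy of x never reaches the parent:

    /* A small demonstration that a forked child gets its own copy of
       the parent's memory: the child's write to x is not seen by the
       parent, which still prints 1. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int x = 1;
        pid_t pid = fork();

        if (pid == 0) {         /* child: writes to its private copy of x */
            x = 2;
            exit(0);
        }
        waitpid(pid, NULL, 0);  /* parent: wait for the child to finish */
        printf("x = %d\n", x);  /* prints 1: the child's write stayed private */
        return 0;
    }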

Some operating systems, e.g., Dynix, used to let forked processes share memory too.

Communicating through shared memory can be very fast, but it can also be quite troublesome. To begin with, what if two processes both try to write to the same memory location at the same time? The result of such an operation would be unpredictable. The shared memory paradigm therefore introduces the concept of a memory lock. A process that wants to write to a shared memory location, or to read from it, can lock it, so that no other process can write to it at the same time.
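Here is a minimal sketch of such a lock, using a POSIX mutex to protect a shared counter; without the lock, concurrent increments could be lost:

    /* A memory lock in action: a POSIX mutex serializes access to a
       shared counter so that concurrent increments are not lost. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                    /* the shared location */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);     /* no other thread may enter here */
            counter++;                     /* read-modify-write, now safe    */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 200000 with the lock */
        return 0;
    }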

But this solution causes other problems, of which the possibility of a deadlock is the most severe. A deadlock can occur if process A locks a memory location, fails to unlock it (or merely postpones doing so), and then tries to access a second location that has just been locked in the same way by process B. Process B, in turn, waits for A's location to be unlocked before it will release the one A is waiting for. Both processes end up waiting forever. This is a deadlock.

Deadlocks can be avoided if certain rules are followed, for example, requiring every process to acquire locks in the same fixed order; but in practice they happen nevertheless.
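The two-lock scenario described above can be written down in a few lines. The following sketch, again using POSIX mutexes, will hang with unlucky scheduling; making both threads take lock_a before lock_b, i.e., following the fixed-order rule, removes the circular wait:

    /* A sketch of the deadlock described above.  Thread A locks lock_a
       and then wants lock_b; thread B locks lock_b and then wants lock_a.
       If each grabs its first lock before the other grabs its second,
       both wait forever. */
    #include <pthread.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *process_a(void *arg)
    {
        pthread_mutex_lock(&lock_a);    /* A holds a ...       */
        pthread_mutex_lock(&lock_b);    /* ... and waits for b */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    static void *process_b(void *arg)
    {
        pthread_mutex_lock(&lock_b);    /* B holds b ...       */
        pthread_mutex_lock(&lock_a);    /* ... and waits for a */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t ta, tb;
        pthread_create(&ta, NULL, process_a, NULL);
        pthread_create(&tb, NULL, process_b, NULL);
        pthread_join(ta, NULL);     /* with unlucky timing, never returns */
        pthread_join(tb, NULL);
        return 0;
    }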

Message passing programming can be implemented very efficiently on top of shared memory. In this case chunks of shared memory are used to store messages: sending a message is implemented as writing to such a chunk, and receiving it as reading from the chunk. If this is done right, deadlocks can be avoided.
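As an illustration, here is a minimal sketch of a one-slot mailbox built from shared memory, a mutex, and a condition variable; the names mailbox, send_msg, and recv_msg are made up for this example and are not part of any standard message passing API:

    /* "Message passing" built on shared memory: the message lives in a
       shared one-slot mailbox, sending is writing to the slot, and
       receiving is reading from it. */
    #include <pthread.h>
    #include <stdio.h>

    static struct {
        int value;              /* the "message" lives in shared memory */
        int full;               /* 1 if a message is waiting            */
        pthread_mutex_t lock;
        pthread_cond_t  changed;
    } mailbox = { 0, 0, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };

    static void send_msg(int value)        /* "send" = write to the chunk */
    {
        pthread_mutex_lock(&mailbox.lock);
        while (mailbox.full)               /* wait for the slot to empty  */
            pthread_cond_wait(&mailbox.changed, &mailbox.lock);
        mailbox.value = value;
        mailbox.full = 1;
        pthread_cond_signal(&mailbox.changed);
        pthread_mutex_unlock(&mailbox.lock);
    }

    static int recv_msg(void)              /* "receive" = read from it    */
    {
        pthread_mutex_lock(&mailbox.lock);
        while (!mailbox.full)              /* wait for a message          */
            pthread_cond_wait(&mailbox.changed, &mailbox.lock);
        int value = mailbox.value;
        mailbox.full = 0;
        pthread_cond_signal(&mailbox.changed);
        pthread_mutex_unlock(&mailbox.lock);
        return value;
    }

    static void *producer(void *arg)
    {
        for (int i = 1; i <= 3; i++)
            send_msg(i);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        for (int i = 0; i < 3; i++)
            printf("received %d\n", recv_msg());
        pthread_join(t, NULL);
        return 0;
    }

Because send_msg blocks only until the slot is free, and recv_msg only until a message arrives, the waits cannot form a cycle; this is the sense in which a correctly built message layer avoids deadlocks.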

Data parallel programming can be implemented on top of shared memory too, with the multiple processors of an SMP each taking care of a separate chunk of the arrays that reside in shared memory. This makes SMPs very good platforms for Fortran 90 programs.

Various utilities exist that simplify shared memory programming, so that you don't have to spawn threads explicitly. For example, OpenMP lets programmers parallelize loop execution by inserting compiler directives in front of the loops. This helps with parallelizing legacy applications written in sequential languages such as C and Fortran 77.
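Below is a minimal sketch of such a directive in C; the flag that enables OpenMP varies by compiler (e.g., -fopenmp with GCC):

    /* One OpenMP directive in front of the loop asks the compiler to
       split the iterations among the available processors; the rest of
       the program is ordinary sequential C. */
    #include <stdio.h>

    #define N 1000000

    static double a[N], b[N], c[N];     /* arrays in (shared) memory */

    int main(void)
    {
        for (int i = 0; i < N; i++) {   /* sequential initialization */
            b[i] = i;
            c[i] = 2.0 * i;
        }

        #pragma omp parallel for        /* iterations divided among threads */
        for (int i = 0; i < N; i++)
            a[i] = b[i] + c[i];

        printf("a[N-1] = %f\n", a[N - 1]);
        return 0;
    }

Without the OpenMP flag the directive is ignored and the program runs sequentially, which is precisely what makes this approach attractive for legacy code.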

