The Queen's University of Belfast

Parallel Computer Centre
Parallel Processing?
- Traditional computers solve a problem by sequentially obeying a set of instructions
- Parallel Computers execute more than one instruction simultaneously
- low level
- statement-level
Forall i=1,1000 a(i) = b(i) * c(i)
Forall i=1,1000 evaluate(i)
- high level - queries to a database running on different processors
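A statement-level Forall like the one above can be sketched in Python with a process pool; the body of `evaluate` here is a hypothetical stand-in for one loop iteration, not code from the notes:

```python
from multiprocessing import Pool

def evaluate(args):
    # Hypothetical loop body: one independent iteration of
    # a(i) = b(i) * c(i)
    b_i, c_i = args
    return b_i * c_i

if __name__ == "__main__":
    b = list(range(1, 1001))
    c = list(range(1, 1001))
    # Each iteration is independent of the others, so the pool may
    # execute iterations on different processors simultaneously
    with Pool() as pool:
        a = pool.map(evaluate, zip(b, c))
```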
Theoretical Parallelisation
- Programs are either
- inherently sequential
- potentially parallel
- Reduce execution time by
- using a faster processor
- moving to a parallel architecture
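The notes do not state it, but the standard way to quantify the sequential/parallel split is Amdahl's law: if a fraction p of a program is potentially parallel, the speed-up on n processors is bounded by 1 / ((1 - p) + p/n). A quick sketch:

```python
def amdahl_speedup(p, n):
    # p: fraction of the run time that is potentially parallel
    # n: number of processors applied to that fraction
    # The inherently sequential fraction (1 - p) is unaffected by n
    return 1.0 / ((1.0 - p) + p / n)
```

With p = 0.9 the inherently sequential tenth dominates: even infinitely many processors cannot push the speed-up past 10.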
Granularity
- Granularity refers to the number of sections (or grains) into which a program (or its data) is broken
- Does maximum parallelisation mean maximum speed-up, i.e. if n processors are available should the program be split into n tasks?
- effort to decompose problem
- communication overheads - passing data between processes and synchronising processes; communication is disproportionately slow compared to computation (usually by at least an order of magnitude)
- Trade off grain size against efficiency and execution time. Reducing the grain size may reduce execution time but be less efficient.
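The trade-off can be illustrated with a toy cost model (the model and its numbers are illustrative assumptions, not measurements): every grain pays a fixed communication overhead, so when communication is an order of magnitude slower than computation, splitting the work into one grain per processor beats one grain per data element:

```python
import math

def run_time(total_work, n_grains, n_procs, comm_overhead):
    # Toy cost model: grains are spread evenly over the processors
    # and every grain pays a fixed communication overhead
    grain_work = total_work / n_grains
    grains_per_proc = math.ceil(n_grains / n_procs)
    return grains_per_proc * (grain_work + comm_overhead)
```

For 1000 units of work on 4 processors with an overhead of 10 per grain, 4 coarse grains finish in 260 time units, while 1000 fine grains (maximum parallelisation) take 2750.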
Load Balancing
- Different sized tasks
- With one task per processor, the speed of execution is limited by the longest task
- Best efficiency is obtained using evenly sized tasks
- Programmer bears much of the responsibility for load balancing but tools are available to perform automatic load balancing
- Multiple tasks may be placed on each processor in an attempt to keep the processors busy
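One simple placement strategy (a standard greedy heuristic, not something prescribed by the notes) is to assign each task, largest first, to whichever processor currently has the least work:

```python
import heapq

def balance(task_times, n_procs):
    # Greedy "longest task first" placement: pop the least-loaded
    # processor from a min-heap and give it the next-largest task
    loads = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(loads)
    assignment = {p: [] for p in range(n_procs)}
    for t in sorted(task_times, reverse=True):
        load, p = heapq.heappop(loads)
        assignment[p].append(t)
        heapq.heappush(loads, (load + t, p))
    return assignment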

Message Passing
- Message Passing is one of the most common forms of transmitting data from one processor to another
- Characterised by
- Multiple programs usually on different processors
- Processes exchange messages to synchronise activities and to pass data
- Each process is identified by a unique tag
- A message is sent to a particular process or broadcast to a group of processes
- The source and destination addresses are included in the message
- Communication pattern is determined by the application programmer
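The characteristics above can be sketched with Python's multiprocessing queues (the message fields and the worker's doubling logic are illustrative assumptions):

```python
from multiprocessing import Process, Queue

def worker(rank, inbox, outbox):
    # rank is this process's unique identifier; each message carries
    # its source and destination addresses
    msg = inbox.get()                       # blocking receive
    outbox.put({"source": rank,
                "dest": msg["source"],
                "data": msg["data"] * 2})   # send a reply back

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(1, inbox, outbox))
    p.start()
    inbox.put({"source": 0, "dest": 1, "data": 21})
    reply = outbox.get()
    p.join()
```

The communication pattern - who sends what to whom, and when - is entirely the programmer's choice, exactly as the notes say.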
A quote from an expert
"Message Passing is the assembly language of the 1990's"
Joel Williamson, CONVEX Computer Corp.
Race Conditions
Non-determinacy
- Two parallel processes may "race" to supply data to a third process

- The outcome is unpredictable unless the programmer includes a mechanism for handling such circumstances. One technique would be to tag each message with an identification code.
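The tagging technique can be sketched like this (the random sleep merely simulates variable compute time; the names are illustrative):

```python
from multiprocessing import Process, Queue
import random
import time

def producer(tag, q):
    # Variable delay: which message arrives first is non-deterministic
    time.sleep(random.random() * 0.01)
    q.put({"tag": tag, "data": tag * 10})

if __name__ == "__main__":
    q = Queue()
    procs = [Process(target=producer, args=(t, q)) for t in (1, 2)]
    for p in procs:
        p.start()
    # Arrival order is unpredictable, but the tag in each message
    # identifies the sender regardless of who won the race
    results = {}
    for _ in procs:
        msg = q.get()
        results[msg["tag"]] = msg["data"]
    for p in procs:
        p.join()
```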
Deadlock
- Program locks up due to sequencing problems
- Consider the situation below where process A is waiting for data from process B, which is waiting for data from process C, which is waiting for data from process D, which is waiting for data from process A

- In the example above, data was intended to move cyclically from process to process. However, if the procedure that receives data blocks until the data arrives, every process will wait for data and no process will ever send any
- This is called deadlock
- Solution
- Use a non-blocking form of receive
- Always pair a receive with a send
- Use a master/controller process
- One process sends before receiving
- .........
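One way to apply the last two solutions is an odd/even pairing round a ring of processes (a sketch under assumed names, connecting neighbours with pipes): even ranks send first, odd ranks receive first, so the blocking receives can never form a complete cycle.

```python
from multiprocessing import Process, Pipe, Queue

def ring_step(rank, send_conn, recv_conn, results):
    # If every process did a blocking recv() first, none would ever
    # send and all would deadlock; alternating the send/receive
    # order by rank breaks the cycle
    if rank % 2 == 0:
        send_conn.send(rank)
        got = recv_conn.recv()
    else:
        got = recv_conn.recv()
        send_conn.send(rank)
    results.put((rank, got))

if __name__ == "__main__":
    size = 4  # an even ring, so the odd/even pairing works throughout
    pipes = [Pipe() for _ in range(size)]  # pipes[i] links i -> (i+1) % size
    results = Queue()
    procs = [Process(target=ring_step,
                     args=(r, pipes[r][0], pipes[(r - 1) % size][1], results))
             for r in range(size)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```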
All documents are the responsibility of, and copyright, © their authors and do not represent the views of The Parallel Computer Centre, nor of The Queen's University of Belfast.
Maintained by Alan Rea, email A.Rea@qub.ac.uk