Tuesday, July 29, 2008

Challenges in Parallel Computing

This article is a draft and is likely to be updated soon.

Parallel programming is not just the problem of choosing whether to code with threads, message passing, or some other tool. Having said that, applying the 80-20 rule, most junior programmers and most high-level applications probably only need to make that choice. In general, though, every member of the software team must participate in parallelization and must consider the overall picture, which spans a plethora of issues:
  1. Understanding the hardware: An understanding of the parallel computer architecture is necessary for mapping and distributing computational tasks efficiently. A simplified classification of parallel architectures is shared-memory machines (UMA/NUMA) versus distributed systems; a typical application may have to run on a combination of both.
  2. Mapping and distribution onto the hardware: Both the mapping of computational tasks onto processors and the placement of data onto memory elements must be considered. The application has to be divided into components and subcomponents, which are then distributed over the hardware, either statically or dynamically (a small static-distribution sketch follows this list).
  3. Concurrency implementation using threading and shared memory, message passing, or something else: Different kinds of concurrency implementation suit different applications and different levels of software and hardware. A small, fine-grained software component running within a single die may use threading and shared memory for performance, while message passing may be preferable when performance is less critical or when the computation runs on physically distributed computers (see the message-passing sketch after this list).
  4. Infrastructure level: A library of concurrent data structures with a well-defined interface. This library obviously has to be built on whichever concurrency implementation was chosen (a thread-safe queue sketch follows this list).
  5. ... still to do (I am sure there are more things)
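To make the mapping point concrete, here is a minimal C++ sketch of a static block distribution on a shared-memory machine (the data size and fallback thread count are arbitrary, not from any particular application): it queries the available hardware parallelism and gives each thread a contiguous slice of the data and its own output slot, so no synchronization is needed during the computation.

#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // Ask the runtime how much hardware parallelism is available.
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;  // fall back if the query is unsupported

    std::vector<double> data(1000000, 1.0);
    std::vector<double> partial(workers, 0.0);  // one slot per worker, nothing shared
    std::vector<std::thread> pool;

    // Static block distribution: each thread gets a contiguous slice of the data.
    std::size_t chunk = data.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end   = (w + 1 == workers) ? data.size() : begin + chunk;
        pool.emplace_back([&, w, begin, end] {
            partial[w] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto& t : pool) t.join();

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "sum = " << total << " using " << workers << " threads\n";
}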
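For comparison, the same reduction in a message-passing style. This sketch assumes an MPI implementation is available (compiled with an MPI wrapper such as mpicxx and launched with mpirun); each process owns only its slice of the data, no memory is shared, and only the partial sums travel between processes.

#include <mpi.h>
#include <iostream>
#include <numeric>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each process holds only its own piece of the (conceptual) global array.
    const long global_n = 1000000;
    long local_n = global_n / size + (rank < global_n % size ? 1 : 0);
    std::vector<double> local(local_n, 1.0);

    double local_sum  = std::accumulate(local.begin(), local.end(), 0.0);
    double global_sum = 0.0;

    // Partial sums are combined by sending messages to rank 0.
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::cout << "sum = " << global_sum << " across " << size << " processes\n";

    MPI_Finalize();
}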
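At the infrastructure level, the kind of well-defined interface I have in mind looks something like the following thread-safe queue. This is only a sketch for a shared-memory implementation, built on a mutex and a condition variable; a real library would also need bounded capacity, shutdown, and perhaps lock-free variants.

#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>

// A minimal concurrent FIFO queue: every operation takes the lock,
// and consumers can either block or poll.
template <typename T>
class ConcurrentQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            items_.push(std::move(value));
        }
        not_empty_.notify_one();  // wake one waiting consumer
    }

    // Block until an item is available, then return it.
    T wait_and_pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !items_.empty(); });
        T value = std::move(items_.front());
        items_.pop();
        return value;
    }

    // Non-blocking variant: returns std::nullopt if the queue is empty.
    std::optional<T> try_pop() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (items_.empty()) return std::nullopt;
        T value = std::move(items_.front());
        items_.pop();
        return value;
    }

private:
    std::queue<T> items_;
    std::mutex mutex_;
    std::condition_variable not_empty_;
};

The point of such an interface is that callers simply push work and call wait_and_pop or try_pop, without knowing anything about the locking (or lock-free machinery) inside.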
In summary, I consider the following the important lessons of parallelization:
  • This is an important issue: parallelization is a new way of programming, and the whole team, across the hierarchy, must be involved.
  • There is no gun that kills all birds. Parallelization tools and methods come in different shades, each useful in different situations.
  • Simplify the implementation as much as possible. For example, tools that are simpler to use (easier to debug, implement, and reason about) should be preferred when performance is not critical.