Friday, March 4, 2011

Parallel Programming - First Pass

With my time dominated by preparing for PhD qualifying exams (i.e., quals), I have been even more slack than usual about writing regular posts.  Nonetheless, let's talk a little about parallel programming.  In one sense, the parallel paradigm is the future of computer science, even if I remain highly skeptical about what the specifics of that future will be.  But even though its role in general computing remains unclear, the usefulness of parallel computing in specific domains is not in doubt.  This post will serve as an overview of several concepts in parallel programming.

First, let's distinguish between concurrent and parallel execution.  Concurrent execution means that multiple tasks have the potential to execute simultaneously.  Parallel execution is when that potential is realized.  Concurrent execution is possible on a single core; parallel execution is not.
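
As a rough illustration, here is a minimal sketch in C with pthreads (the function and thread names are mine): the two threads below are concurrent by construction, but they execute in parallel only if the hardware provides more than one core.

#include <pthread.h>
#include <stdio.h>

static void *count(void *arg)
{
    const char *name = (const char *)arg;
    for (int i = 0; i < 3; i++)
        printf("%s: %d\n", name, i);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    /* Both threads are eligible to run at the same time (concurrency). */
    pthread_create(&a, NULL, count, "thread A");
    pthread_create(&b, NULL, count, "thread B");
    /* On one core the scheduler interleaves them; on two or more cores
     * they may truly run simultaneously (parallelism). */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Compile with gcc -pthread.  On a single-core machine the output can still interleave, courtesy of the scheduler, but the threads never run at the same instant.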

Synchronization is the main question when writing concurrent code.  Synchronization imposes a specific ordering on what would otherwise be independent execution.  It comes in two common flavors: exclusion and notification.  Exclusion covers mutexes, spinlocks, and other constructs that guarantee only one thread at a time performs a specific set of operations.  With notification, threads share information about each other's progress, for example, that every thread has reached a specific point (e.g., a barrier).
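
Here is a sketch of both flavors using POSIX primitives, a mutex for exclusion and a barrier for notification (barriers require POSIX barrier support, e.g., Linux; the thread count and names are illustrative):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_barrier_t barrier;
static long total = 0;

static void *worker(void *arg)
{
    long id = (long)arg;

    /* Exclusion: only one thread at a time updates the shared total. */
    pthread_mutex_lock(&lock);
    total += id;
    pthread_mutex_unlock(&lock);

    /* Notification: no thread proceeds until all threads have arrived. */
    pthread_barrier_wait(&barrier);

    /* After the barrier, every update above is guaranteed complete. */
    if (id == 0)
        printf("total after barrier: %ld\n", total);
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}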

An ongoing quest in synchronization research is transactional memory (TM).  TM provides the ability to make a set of memory updates atomically.  Processors can make simple updates atomic (see Compiler Intrinsics), yet a series of updates requires the explicit exclusion guarantee provided by spinlocks, etc.  TM attaches the exclusion to the memory addresses themselves, rather than to the abstract object protected by the spinlock, and allows an arbitrary set of accesses to be encapsulated in one atomic operation.  However, TM is not presently feasible.
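
The following sketch shows the gap TM aims to close.  A single-word update can use a compiler intrinsic (here GCC's __sync_fetch_and_add), but moving a value between two counters involves two updates that must appear atomic together, so today we fall back on a lock; the variable names and the commented transaction syntax are my own illustration, not a shipping API.

#include <pthread.h>

static long counter = 0;
static long from = 100, to = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void single_update(void)
{
    /* One memory word: the processor can do this atomically. */
    __sync_fetch_and_add(&counter, 1);
}

void transfer(long amount)
{
    /* Two memory words: no single instruction covers both, so we
     * exclude all other threads for the duration. */
    pthread_mutex_lock(&lock);
    from -= amount;
    to += amount;
    pthread_mutex_unlock(&lock);

    /* With TM, conceptually:
     *   atomic { from -= amount; to += amount; }
     * The exclusion attaches to the addresses actually touched,
     * not to a separate lock object. */
}

int main(void)
{
    single_update();
    transfer(25);
    return 0;
}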

Parallel patterns arise from the observation that parallel programs and algorithms can be classified into several distinct groups (i.e., patterns).  An assembly line is a parallel operation that fits the "pipelined" pattern.  By recognizing the pattern, the programmer can avoid certain common errors.  With the pipeline, the programmer recognizes that the data passes through discrete stages.
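
To make the pipeline concrete, here is a minimal two-stage sketch: stage 1 produces items and stage 2 consumes them through a one-slot hand-off.  A real pipeline would use a deeper queue and more stages; the names and sizes here are illustrative.

#include <pthread.h>
#include <stdio.h>

#define NITEMS 5

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t slot_free = PTHREAD_COND_INITIALIZER;
static pthread_cond_t slot_full = PTHREAD_COND_INITIALIZER;
static int slot;
static int full = 0;

static void *stage1(void *arg)
{
    for (int i = 0; i < NITEMS; i++) {
        pthread_mutex_lock(&lock);
        while (full)                      /* wait for stage 2 to drain */
            pthread_cond_wait(&slot_free, &lock);
        slot = i * i;                     /* stage 1's work */
        full = 1;
        pthread_cond_signal(&slot_full);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *stage2(void *arg)
{
    for (int i = 0; i < NITEMS; i++) {
        pthread_mutex_lock(&lock);
        while (!full)                     /* wait for stage 1 to fill */
            pthread_cond_wait(&slot_full, &lock);
        printf("stage 2 received %d\n", slot);  /* stage 2's work */
        full = 0;
        pthread_cond_signal(&slot_free);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, stage1, NULL);
    pthread_create(&t2, NULL, stage2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

The pattern buys correctness by construction: once the programmer sees the stages, the only shared state is the hand-off between them, which is exactly where the synchronization goes.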

Well, that's my prelude to what will likely be many more posts on parallel programming.
