Saturday, March 26, 2011

Functional vs Imperative Programming

I was recently provided a link to another "programming" blog.  The blog, Existential Type, is written by a professor from my undergrad institution, Carnegie Mellon.  In it, he appears to be chronicling his efforts to teach functional programming to freshman computer science students.  I have been deeply engrossed in reading the posts so far, and they have revived memories of past debates over language choice and related issues.  You may look forward to several future posts delving deeper into this topic.  But today is Saturday and I have more research to do.

Monday, March 7, 2011

A Rare Error

In the course of my research, I encountered the following error and was unable to find any reference to a solution.  However, this error is likely confined to programmers emitting their own types via reflection, or perhaps to users of a buggy compiler.

PEVerify - "call to .ctor only allowed to initialize this pointer ..."

This error signifies that a constructor (.ctor) must call either another constructor of its own type or a constructor of the type it is extending; instead, it is directly calling a different routine.  The following code is a rough sketch of the error condition.

// C#-flavored pseudocode: a direct call like Object() below is not legal
// C# source, but the equivalent IL can be emitted via reflection.
class baseT  // implicitly derives from Object
{
    baseT() { }  // implicitly calls Object's constructor
}

class derT : baseT
{
    derT()
    {
        Object();  // ERROR: calls Object's constructor, bypassing baseT's
    }
}

For my work, changing the call from Object() to baseT() fixed the error.

Friday, March 4, 2011

Parallel Programming - First Pass

With my time dominated by preparing to take PhD qualifying exams (i.e., quals), I have been even more slack than usual about preparing regular posts.  Nonetheless, let's talk a little about parallel programming.  In one sense, the parallel paradigm is the future of computer science, even if I remain highly skeptical about what the specifics of this computing will be.  But even if its role in general-purpose computing remains uncertain, the usefulness of parallel computing in specific domains is not in doubt.  This post will serve as an overview of several concepts in parallel programming.

First, let us distinguish between concurrent and parallel execution.  Concurrent execution has the potential to proceed simultaneously; parallel execution is when this potential is realized.  Concurrent execution is possible on a single core; parallel execution is not.

Synchronization is the central question when writing concurrent code.  Synchronization imposes a specific ordering on what would otherwise be independent execution.  There are two common flavors: exclusion and notification.  Exclusion consists of mutexes, spinlocks, and other constructs that guarantee that only a single instance of concurrent execution performs a specific set of operations at a time.  With notification, concurrent executions share information about each other's progress, for example, that every instance has reached a specific point (e.g., a barrier).
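Both flavors can be sketched with Python's threading primitives, used here purely as a stand-in for whichever threading library one actually works with: the lock provides exclusion around a shared counter, and the barrier provides notification that every thread has finished.

```python
import threading

NUM_THREADS = 4
INCREMENTS = 10000

counter = 0
lock = threading.Lock()                   # exclusion
barrier = threading.Barrier(NUM_THREADS)  # notification

def worker(results, idx):
    global counter
    # Exclusion: only one thread at a time may update the shared counter.
    for _ in range(INCREMENTS):
        with lock:
            counter += 1
    # Notification: block until every thread has finished incrementing.
    barrier.wait()
    # After the barrier, every thread observes the final count.
    results[idx] = counter

results = [None] * NUM_THREADS
threads = [threading.Thread(target=worker, args=(results, i))
           for i in range(NUM_THREADS)]
for t in threads: t.start()
for t in threads: t.join()

print(counter)  # 40000: the lock made each increment appear atomic
```

Without the lock, the read-modify-write of the counter could interleave across threads and lose updates; without the barrier, a thread could record the count before the others had finished.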

An ongoing quest in synchronization research is transactional memory (TM).  TM provides the ability to make a set of memory updates atomically.  Processors provide the ability to make simple, single-location updates atomic (see Compiler Intrinsics), yet a series of updates requires the explicit exclusion guarantee provided by spinlocks, etc.  TM attaches the exclusion to the memory addresses themselves, rather than to the abstract object protected by the spinlock, and allows an arbitrary set of accesses to be encapsulated in one atomic operation.  However, TM is not presently feasible.
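The multi-update problem TM targets can be sketched as follows (a hypothetical two-account transfer, with Python's lock standing in for the exclusion that a TM system would express as a single atomic region over both locations):

```python
import threading

# Two accounts whose combined balance must remain invariant.
accounts = {"a": 100, "b": 100}
lock = threading.Lock()

def transfer(src, dst, amount):
    # The two writes must take effect together: a reader (or another
    # transfer) must never observe one update without the other.  A lock
    # provides this exclusion; TM would instead mark the pair of accesses
    # as one atomic transaction on the addresses themselves.
    with lock:
        accounts[src] -= amount
        accounts[dst] += amount

threads = [threading.Thread(target=transfer, args=("a", "b", 1))
           for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()

print(accounts["a"] + accounts["b"])  # 200: the invariant is preserved
```

A single atomic instruction can update one location, but no instruction updates two arbitrary addresses at once; hence the need for either the lock above or a transactional mechanism.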

Parallel patterns arise from the observation that parallel programs and algorithms can be classified into several distinct groups (i.e., patterns).  An assembly line is a parallel operation that fits the "pipelined" pattern.  By recognizing the pattern, the programmer can avoid certain common errors.  With the pipeline, the programmer recognizes that the data is to be passed through discrete stages.
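A minimal sketch of the pipelined pattern, with a queue carrying data between two stages; the stages themselves (squaring, then summing) are arbitrary placeholders chosen for illustration:

```python
import threading
import queue

# Two-stage pipeline: stage 1 squares items, stage 2 sums them.
# A sentinel (None) marks the end of the stream.
stage1_to_stage2 = queue.Queue()
total = []

def stage1(items):
    for x in items:
        stage1_to_stage2.put(x * x)
    stage1_to_stage2.put(None)  # sentinel: no more work

def stage2():
    acc = 0
    while True:
        item = stage1_to_stage2.get()
        if item is None:
            break
        acc += item
    total.append(acc)

t1 = threading.Thread(target=stage1, args=(range(1, 5),))
t2 = threading.Thread(target=stage2)
t1.start(); t2.start()
t1.join(); t2.join()

print(total[0])  # 1 + 4 + 9 + 16 = 30
```

The pattern keeps each stage's state private: the only shared object is the queue, which sidesteps the common error of stages scribbling on each other's intermediate data.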

Well, that's my prelude to what will likely be many more posts on parallel programming.