Thursday, December 29, 2016

Book Review: The Art of Readable Code

I really enjoyed reading The Art of Readable Code.  I always enjoy reading books on style, both English writing as well as source code.  The earlier chapters were the highlight for me, as they focused on the basics of style and were demonstrated in languages with which I am familiar.  Some of the later chapters instead gave examples particular to languages that I do not use, such as illustrating style pitfalls in JavaScript.  The book also had value in showing several approaches to a problem and repeatedly refactoring the code to better meet style and readability goals.

While the topic is style, the book is really about fundamentally good practices, which may be implemented in one of several ways (e.g., where to put braces, camel case, etc.) that are collectively termed style.  Between this and the examples within the text, I want to start requiring it of my students.  I want them to read it and see why style actually matters.  Or maybe we will just have to wait until they experience why it matters and suffer for it.

Thursday, December 8, 2016

Practical TSX

I previously speculated about how Intel's TSX is implemented; however, I did not have access to any machines supporting TSX until this year.  I still have not done much testing personally, but I did direct two students who explored and measured this support.  As a reminder, TSX is an implementation of hardware transactional memory, which can simplify concurrency designs by avoiding the need for locks.  Being a hardware implementation, it has certain fixed costs and constraints.

Mario Dehesa-Azuara and Nick Stanley completed a six-week project in the spring, and the summary below is taken from their final project report.  Also, being students, their report may not be available indefinitely, so this link may be broken at some future date.
First, they reviewed the best practices for writing TSX-enabled code.  In particular, there is a problem in that the TSX path and the fallback path (the fallback path is required to ensure that the code can make progress even with aborts) must be mutually exclusive.  This can require additional operations versus a pure, transactional approach.

Second, they measured the cost of concurrent data structure updates.  Their work noted that writing a transactional implementation was significantly easier than a fine-grained locking approach.  However, their measurements revealed some counter-intuitive results.  For example, an AVL tree is a self-balancing data structure.  The self-balanced nature is a benefit in that fewer memory accesses should be required.  Yet the rotations required to maintain this balance actually increase the set of locations accessed and therefore result in a high rate of aborts.

To understand this, we must turn to the actual implementation.  We know that TSX can only track a limited number of memory locations (at most, the size of the L1 data cache).  As soon as any transactional memory location (i.e., cache line) cannot be stored in the L1, the transaction must abort.  Thus, limiting the size of the read and write sets of a transaction is vital for it to commit.  In Mario's and Nick's experiments, they observed that after 5 million insertions into an AVL tree, transactions were at a 50% failure rate (regardless of the tested number of threads).  In contrast, a treap, with its probabilistic balancing, has relatively constant failure rates that depend on the number of threads (and not the total insertions).

Third, using TSX has an inherent cost that is significantly higher than other atomic operations.  The standard advice still holds: a single atomic update should use the appropriate atomic instruction.  What if you need to perform several of these operations?  Again, we turn to the final report.  The measurements show that simple transactional operations on consecutive memory locations will be faster than the equivalent atomic operations on those locations when you access at least 6 locations per "transaction".  Nonetheless, if the program must obey another constraint, such as updating all or none of the elements, then locks or transactions are required.

It is important to remember that a major benefit of transactional memory is in design and implementation effort, not in the actual runtime of the program.

Friday, October 14, 2016

Conference Attendance Teaching and Learning Summit 2016 - Keynote

Critical Thinking: Why is it so hard to teach? - Dr Daniel T. Willingham

Critical thinking is intertwined with content knowledge.  We saw a sequence of four examples (if vowel, then even number; if alcohol, then 21; if gin, then haddock; if entering, then cholera vaccine).  For each example, there is a claim about a set of cards, "if X then Y", and given four cards, one must verify the claim.  When the problems were formulated as permissions, the success rate was high.  Each problem is technically P -> Q, but having just completed a semester of logic has no impact on the results.

Scientific reasoning is taught in two pieces: scientific concepts and scientific method.  So consider designing a learning experiment.  The group is split into intervention and control.  How do you know that the random sample is valid?  Background knowledge is required to determine the appropriateness of the split.

Critical thinking occurs from learning at the deep level.  The surface story is, say, "tumors and rays".  The deep question is whether it is modus ponens, Newton's third law, etc.  However, memory is focused on the surface facts, and recall is based on those components.

Why not teach the deep structure immediately?  Abstractions are hard to understand.  Instead, learners have to see lots of surface structures all overlaying the same deep structure.

Sometimes failures in critical thinking are actually failures in basic knowledge.  Furthermore, there are also innate biases, such as words refer to objects and attributes, and the world is full of agents and purposes.

Takeaway 1: Most of critical thinking is domain-specific.
Takeaway 2: In each domain, faculty should identify what they consider the important critical thinking skills.
Takeaway 3: Select content with an eye toward teaching these skills.  Teach the critical thinking in the context of the content.
Takeaway 4: Critical thinking is a curricular issue.  These skills require more than 1 semester to acquire.
Takeaway 5: Certain foundational concepts may run counter to the mind's biases.  Students have functional knowledge that has worked so far.  For example, "equals sign means put answer here".

Q. How do domain skills translate in interdisciplinary work?
A. Don't know.  Probably one needs to know enough of the skills in the home domain to be able to explore the other domain.

Q. If critical thinking is domain specific, how specific are domains?
A. Domains are nested.  Proper application requires domain knowledge.  Moving from cognitive psychology to social psychology leaves [the speaker] less skilled, but still better than average.  In clinical psychology, they share a common basis, but he has limited ability to apply it.

Tuesday, September 6, 2016

When is it signaled?

Signals are a mechanism for notifying a process of a simple event.  Most programmers and programs can ignore them, as the default behaviors are reasonable and taking steps to handle them would greatly increase the program complexity.  But when teaching future computer scientists, we want the programmers to know about these mechanisms and therefore properly understand the functioning of the system.

In working with signals, developing programmers are often exposed to their first dose of concurrency: the idea that execution can happen in arbitrary, interleaved orders except where the program takes action.  With signals, a program can do several things:

  • Provide and install a handler for one or more signals
  • Block the receipt of one or more signals
  • Send a signal to one or more processes

We are going to consider the problem of sending and receiving a signal.  After a signal is sent, when does the other process receive it?  Students make the assumption that when the sending function (i.e., kill) has returned, the signal has been sent *and received*.  However, I have found no text that explicitly guarantees this condition.  Instead, I prepared a simple program (source at the end of this post) to test this condition.

As signals are communicated through the operating system, we need a different mechanism for measuring simultaneity, in this case shared memory.  The experiment program creates and sets up a small region of shared memory between two processes.  Next, it waits until both processes are running in a known state (i.e., a barrier).  Then one (the parent) signals the other (the child), while measuring how long the signal takes to send and to be received.  Finally, the experiment is run a million times.

On my current Ubuntu box with a Skylake processor, the experimental measurements show that 80% of the time, the child survives for an average of 40 cycles after kill returns.  The maximum time is almost 1200 cycles.  Assuming that the cores' clocks have minimal skew, this effectively means that the child can continue to run even after the sending function has returned.

Source code follows: (compiled with gcc version 4.8.3, with gcc -O3 -lrt)
#include <sys/mman.h>
#include <sys/stat.h> /* For mode constants */
#include <sys/wait.h> /* For wait() */
#include <fcntl.h> /* For O_* constants */
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <stdint.h>

#include "rdtsc.h"

static const int ITER = 1000 * 1000;

int main(int argc, char** argv)
{
    int shm_fd = shm_open("squig", O_CREAT | O_RDWR, (S_IREAD | S_IWRITE));
    int count = 0, i;
    uint64_t zero = 0, min = ~0x0, max = 0, sum = 0;
    uint64_t minN = ~0x0, maxN = 0, sumN = 0;
    write(shm_fd, &zero, 8);  // Give the shared file "space"
    void* msh = mmap(NULL, 4 * 1024, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
    volatile uint64_t* tsc = msh;

    for (i = 0; i < ITER; i++)
    {
        *tsc = 0;
        pid_t chd = fork();
        if (chd == 0)
        {
            // Child: repeatedly publish the current timestamp
            // Give the compiler something to think about
            while (zero < ~0x0) { *tsc = rdtsc(); zero++; }
        }
        else
        {
            // Wait for the child
            while (*tsc == 0) {}
            uint64_t st = *tsc; // When is it for the child?
            kill(chd, SIGINT); // Send SIGINT to child
            uint64_t k = rdtsc(); // Returned from kill
            wait(NULL); // Reap
            uint64_t delta = 0;
            uint64_t en = *tsc; // When did the child end?
            // k > en implies that kill returned after the child terminated
            if (k > en)
            {
                count++;
                delta = k - en;
                if (delta < minN) minN = delta;
                if (delta > maxN) maxN = delta;
                sumN += delta;
            }
            else
            {
                delta = en - k;
                if (delta < min) min = delta;
                if (delta > max) max = delta;
                sum += delta;
            }
        }
    }

    printf("Min: %lx, Max: %lx, Avg: %lx\n", min, max, (sum / (ITER - count)));
    printf("Min: %lx, Max: %lx, Avg: %lx\n", minN, maxN, (sumN / count));
    printf("Percent Parent After: %lf\n", (count / (double)ITER));
    return 0;
}

Thursday, June 30, 2016

Computer Scientists and Computer Usage

This post is built on a discussion I had with Dr. Thomas Benson and the recent blog post by Professor Janet Davis, I've got a POSSE.  Less directly, Professor Mark Guzdial has written several recent blog posts about parents wanting CS in high school, most recently this (where the point is raised that not everyone knows what this is).

What is it to be a computer scientist?  Is it just software development?  When I was first looking at colleges and I knew I wanted to program, I saw three majors that seemed appropriate: Computer Science, computer programming, and game development.  And from my uninformed perspective, Computer Science (CS) initially seemed the least applicable.  This was before Wikipedia, so how would one know what CS is?

Michael Hewner's PhD defense was a study of how undergrads perceive the field: as they learn and progress through the curriculum, do their views change?  I know my views changed; for example, functional programming was a complete unknown before matriculating.  By the completion of my undergraduate degree, I perceived that Computer Science has three pillars: systems, theory, and application.  I still view CS primarily through a programmer's lens and not a big-tent view.

Indirectly, the Economist wrote about Programming Boot Camps, where college grads go back to get training in programming; however, "Critics also argue that no crash course can compare with a computer-science degree. They contend that three months’ study of algorithms and data structures is barely enough to get an entry-level job."  I both agree and disagree.  There is a further transformation of the economy coming whereby workers will increasingly need some programming skill.  One PL (programming language) researcher recently estimated that Excel is the most common programming language.  Can the worker using Excel do more, or do it faster, with VB or Python or ...?

Are there nuances to the study of Computer Science in which students can focus further?  Clearly there are with the mere presence of electives.  Georgia Tech even groups similar electives into "threads".  Besides just electives, under a programmer-centric view of CS, software development involves more than just the strict programming aspect.  Even if CS is the "programming", a written program requires software engineering to maintain and manage its development and designers to prepare the UI (etc).

Thus CS might evolve toward having a pre-CS major: after the first two(?) years, students then declare Computer Science or Software Engineering or Human Computer Interaction or ....  This is both less and more than Georgia Tech's threads, but an approach with similarities.  In common, this development of the CS major(s) balances the concern of whether students have the fundamentals to succeed in areas related to their actual major (i.e., programming, software development, and design), while allowing greater depth and specialization.

This step is a maturing of the discipline.  No longer just one major, but a school of majors.  CMU offers a BS in CS, along with three other interdisciplinary majors.  Georgia Tech offers CS (with 8-choose-2 threads) and Computational Media.  Rose-Hulman offers both CS and Software Engineering.  (I am only citing the programs that I know about, not an exhaustive list.)

Friday, May 6, 2016

Tail Duplication with LLVM

Let me start by saying that tail duplication exists in LLVM, but only as a backend, machine code optimization.  I am going to detail several engineering points that were required to support this operation strictly in the LLVM intermediate representation (IR).

To demonstrate tail duplication, I will take an extremely simple example of returning whether a number is even.  (This is just a demonstration; an optimizing compiler can reduce this example to two inlined x86 instructions.)  In the following code, there are four basic blocks: A, B, C, and D.  Block A initializes the value and executes the conditional instruction with control flow to both blocks B and C.  Blocks B and C update the value and then fall through to block D, which returns the value updated in either B or C.  This control flow is the classic diamond pattern, named for the shape it makes in the control flow graph (CFG).

bool isEven(int x)
{
    bool returnValue = false;    // A
    if (x % 2 == 0)
        returnValue = true;      // B
    else
        returnValue = false;     // C
    return returnValue;          // D
}

The idea is to duplicate a tail block and then merge each duplicate into its predecessor.  This (often) increases the code size, but provides a larger block of instructions for optimization and scheduling, and is a vital step in superblock scheduling.  In the above example, tail duplication would duplicate block D into D' and then merge D with B and D' with C.

The second thing that you should know is SSA form.  This is a way of representing the instructions such that every value comes from a static single assignment (SSA).  If every variable is only assigned to once, how do loops work?  How did the above example work?  A special assignment is added, called a PHI node.  A PHI node selects from multiple values depending on which edge in the CFG was followed.  SSA makes it easy to identify where a value is created and every place that it is used.
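For the isEven example above, the IR (sketched by hand here, not actual compiler output) would contain a PHI node in block D that selects the value defined along whichever edge was taken:

```
define i1 @isEven(i32 %x) {
A:
  %rem = srem i32 %x, 2
  %cmp = icmp eq i32 %rem, 0
  br i1 %cmp, label %B, label %C
B:                                       ; returnValue = true
  br label %D
C:                                       ; returnValue = false
  br label %D
D:
  %ret = phi i1 [ true, %B ], [ false, %C ]
  ret i1 %ret
}
```

After tail duplication, D would be merged into B and C, and this PHI node disappears: each merged block simply returns its own constant.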

Tail duplication follows two basic steps that LLVM provides support to accomplish:
  1. Clone the basic block
  2. Merge cloned blocks with the predecessors

Let's start by calling CloneBasicBlock.  The first problem is that the cloned block is a perfect clone, in that it uses all of the old instructions' values.  All of them.  It is like moving across the country and discovering that your lamp is still plugged in at your old place, somehow.  Tackling this problem requires the code to fix all of these use-def chains, so that the new uses depend on the new definitions in the cloned block.  Another function, CloneFunctionInto, has a loop that fixes all of the instructions in the cloned function, which will serve here too.

The second problem with cloning basic blocks is that new definitions have been introduced into the program, which may now conflict with the existing definitions.  There is a simple solution to merging these values: PHI nodes.  Of course, these PHI nodes have nothing to do with the old PHI nodes already present.  I spent a long time thinking about how I would update the existing PHIs, until I realized that they were all going to be cloned and merged, so I would create a completely new set, as the one definition had become many and the many uses had become one.

Next each basic block is to be merged with its one predecessor.  LLVM provides MergeBlockIntoPredecessor; however, the IR kept failing in odd ways after using this function.  What I discovered is that the basic block was no longer well formed.  The cloned block has only one predecessor, but its PHINodes still have multiple predecessors.  Therefore, the merge step would take the first value for each PHINode (which would be correct if the block was well formed), but in my case I was in the process of reforming the IR.  Therefore, I had to fix the PHINodes to only have one incoming value.

I thought what I would do is iterate through the incoming paths into each PHI instruction and delete any that do not match the single predecessor.  Unfortunately, when the incoming value is deleted, the iterators were invalidated (not an uncommon problem when iterating).  So in the end, the tail duplication I was writing had to do what the MergeBlockIntoPredecessor already did, select one value for the PHINode and update the uses accordingly.

With that development complete, now I can move on to the interesting question of whether there are cases when doing tail duplication would be a good idea and provide some benefit to my instrumentation.

Monday, March 21, 2016

PhD Defense - Simple DRAM and Virtual Memory Abstractions to Enable Highly Efficient Memory Subsystems

Vivek Seshadri gave his PhD defense today, covering how different memory resources have different granularities of access and therefore need different management techniques.  These techniques come out of understanding what the hardware could do, without necessarily identifying common features in existing applications that would require or benefit from them.

Page Overlays / Overlay-on-Write: provide the ability to assign physical addresses at sub-page granularities (call them overlays).  This reduces the cost of sparse copy-on-writes.  In effect, assign a fresh sub-page unit and copy to that location.  On every access, check the overlay table in parallel to determine whether to use the normal translation or go to the overlay location.

Gather-Scatter DRAM: provide support for only requesting a subset of cachelines.  First, shuffle the data in a cacheline so that the same subset of multiple cache lines will map to different chips in DRAM.  Second, introduce additional logic (just a few gates) in DRAM that will compute a modified address, where the default pattern (stride 1) is the normal, un-modified access.  

RowClone + BuddyDRAM: can DRAM speed up memcpy (and other bulk memory operations)?  First, by opening one row after another, the bitline takes the initial value and then writes it into the other row.  More complex is opening three rows simultaneously, which results in a bit-wise majority across the rows: final = (C & (A | B)) | (~C & (A & B)).  By controlling C, bulk bitwise operations are possible.  Using this technique, the system can exceed the memory bandwidth for these operations.

DirtyBlock Index: the problem is that if the source row is dirty, it needs to be written back before the previous techniques can be used.  DBI provides a faster lookup mechanism to determine whether, and where, there are any dirty cache lines.

These techniques are interesting, but as the candidate noted, they are in effect solutions in search of a problem.  And with DRAM being commodity hardware, it is difficult to envision these techniques being adopted without further work.