Wednesday, October 16, 2019

Conference Attendance - MICRO 52 - Day 2/3

These are rough notes from the other two keynotes.

Keynote: Bill Dally on Domain-Specific Accelerators

Moore's Law is over.  Sequential performance is increasing at 3% per year.  And cost per transistor is steady or increasing.

Most of the power is spent moving data around, so simple ISAs such as RISC are actually power-inefficient compared with specialized operations.  With special data types and operations, the hardware can be designed so that something taking tens to hundreds of cycles is done in one.  Memory bandwidth can still be the bottleneck, though, as "bits are bits".

Genome matching, via the Smith-Waterman algorithm, can be done in a single cycle for many bases (10), whereas a CPU would need 35 ALU operations and 15 loads/stores.  The specialized hardware spends 3.1 pJ (10% of it in memory), versus 81 nJ for the CPU.
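
For reference, here is a minimal Python sketch of the Smith-Waterman recurrence (the scoring values are my own illustrative choices, not from the talk); the keynote's point is that specialized hardware evaluates many of these cells in a single cycle:

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
        # H[i][j] = best local-alignment score ending at a[i-1], b[j-1]
        H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        best = 0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best

    print(smith_waterman("ACACACTA", "AGCACACA"))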

Communication is expensive in power, so be small and be local: 5 pJ/word for L1, 50 pJ/word for the LLC, and 640 pJ/word for DRAM.  Most of this power goes into driving the wires.

Conventionally, sparse matrices need fewer than 1% non-zero elements to be worth using, due to the overhead of pointers, etc.  However, special-purpose hardware can overcome this overhead.
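
To see where that overhead comes from, here is a toy compressed sparse row (CSR) layout in Python (my own example, not from the talk): every non-zero value drags along a column index, plus per-row pointers.

    # Dense 4x4 matrix with only three non-zero entries
    dense = [[0, 5, 0, 0],
             [0, 0, 0, 0],
             [3, 0, 0, 0],
             [0, 0, 7, 0]]

    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)       # the payload
                col_idx.append(j)      # index overhead per non-zero
        row_ptr.append(len(values))    # pointer overhead per row

    print(values, col_idx, row_ptr)    # [5, 3, 7] [1, 0, 2] [0, 1, 1, 2, 3]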

A tensor core performs D = AB + C, so how do we execute this as an instruction?  On a GPU, it costs about 30 pJ to fetch, decode, and fetch operands for an instruction.  Specialized instructions can then operate nearly as efficiently as specialized hardware, but with that per-instruction overhead, which on a GPU works out to roughly 20% of the power.
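
Functionally, one tensor-core instruction is just a small-tile fused matrix multiply-add; a sketch of the semantics in Python/NumPy (the 4x4 tile size here is illustrative):

    import numpy as np

    M = 4                 # illustrative tile size
    A = np.random.rand(M, M)
    B = np.random.rand(M, M)
    C = np.random.rand(M, M)

    # What a single tensor-core op computes, amortizing the
    # fetch/decode cost over the whole tile
    D = A @ B + C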

Keynote: An Invisible Woman: The Inside Story Behind the VLSI Microelectronic Computing Revolution in Silicon Valley

Conjecture: Almost all people are blind to innovations, especially innovations by 'others' whom they did not expect to innovate.  (And 'others' here covers almost all people.)

Basically, we rarely notice any innovation ourselves, so innovations are ascribed to the perceived likely cause (cf. the Matthew effect or the Matilda effect).  Credit for innovations is highly visible, and many awards are given to people with certain reputations rather than to the specific innovator.

Monday, October 14, 2019

Conference Attendance - MICRO 52 - Day 1

I am in Columbus, Ohio for MICRO 52.  A third of the attendees drove from other "midwestern" universities; I am one of them.

Keynote: Rejuvenating Computer Architecture Research with Open-Source Hardware

Moore's Law is irrelevant now, as the cost per transistor has held steady since the 28nm technology node.  The cost of any deployment depends on the development cost; only at very large scales is the cost per transistor dominant.  Given that, how can we reduce the cost of hardware development?
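
A toy cost model (the numbers are entirely made up) shows why development cost dominates except at very large scales:

    nre = 50e6        # fixed development (NRE) cost, dollars
    per_chip = 20.0   # marginal manufacturing cost per chip, dollars

    for volume in (10_000, 1_000_000, 100_000_000):
        total = nre + per_chip * volume
        share = 100 * per_chip * volume / total
        print(f"{volume:>11,} units: {share:5.1f}% of total cost is per-chip")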

There was a Cambrian explosion of (RISC) ISAs from the mid-1980s on, with a great diversity of ISAs being created and competing.  Then the Intel Pentium came out, which combined the CISC ISA with translation into RISC micro-ops.  This extinction event destroyed most of those ISAs.

Why does the instruction set architecture (ISA) matter?  It is the dominant interface in the system, defining the interaction between software and hardware.  But ISAs are currently proprietary, and tied to the fortunes of the company.  Many ISAs have come and gone.  And then each SoC (system on a chip) gets custom ISAs for each accelerator.

So there is now the RISC-V ISA that is open for use and development (which I wrote about here).  The RISC-V foundation was formed in 2015 to be the neutral guardian of the specification and formal model.  Based on this specification, there are both open-source and commercial implementations of the hardware as well as the software ecosystem.

ComputeDRAM: In-Memory Compute Using Off-the-Shelf DRAMs
DRAM is designed around commands being sent in a specific order with appropriate timings.  The oddity is that if specific commands and timings are used that violate the normal usage, the DRAM module can perform certain operations, such as AND and OR, using three specially prepared rows (two sources and a destination).
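
As I understand it (this is my reading of how such in-DRAM operations work, not a detail the talk dwelled on), simultaneously activating three rows produces a bitwise majority through charge sharing, so presetting one row to all 0s or all 1s yields the AND or OR of the other two.  The logical effect, sketched in Python:

    def majority3(a, b, c):
        # Bitwise majority of three rows of equal width
        return (a & b) | (a & c) | (b & c)

    row_a, row_b = 0b1100, 0b1010
    print(bin(majority3(row_a, row_b, 0b0000)))  # all-0 row -> AND: 0b1000
    print(bin(majority3(row_a, row_b, 0b1111)))  # all-1 row -> OR:  0b1110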

Hybrid Skiplist: Combining the Best of Near-Data-Processing and Lock-Free Algorithms
This is a student research competition work that I want to highlight.  The work takes skip lists, multi-level linked lists that support more efficient traversals, which have been implemented both on near-data-processing (NDP) systems and as lock-free data structures.  The performance of the two implementations is comparable, but we should be able to do better.  The observation is that the lock-free version gains by having the long, frequently accessed links in the cache, while NDP keeps the data items close to memory.  Therefore, let's combine the two approaches: the algorithm uses the lock-free approach for the long links and leaves the rest in NDP.  A dynamic scheme then adapts which nodes are in the long list, promoting them while demoting less frequently accessed elements.
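
For context, a minimal skip-list search in Python (a toy, not the paper's implementation): the upper levels are the long, frequently traversed links that the lock-free side keeps cache-resident, while the bottom level holds the data items that NDP keeps near memory.

    class Node:
        def __init__(self, key, height):
            self.key = key
            self.next = [None] * height   # one forward link per level

    def contains(head, key):
        node = head
        # Descend from the top ("long-link") level to the bottom (data) level
        for level in reversed(range(len(head.next))):
            while node.next[level] is not None and node.next[level].key < key:
                node = node.next[level]
        node = node.next[0]
        return node is not None and node.key == key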

Applying Deep Learning to the Cache Replacement Problem
Let's apply machine learning to cache replacement.  Offline, an ML model can perform better than the best replacement schemes, but it requires lots of space, more than the cache itself.  Current algorithms (such as Hawkeye) use just the current PC, whereas the machine learning model includes history, so perhaps history has value.  Analyzing that history further, they noticed that the history information is neither complete nor does it need to be ordered.  If it does not need to be ordered, then the history is a feature set (i.e., a bitvector) rather than a full list, and the history feature gives an index into a table of predictors for whether a line's usage is cache friendly.
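
A rough sketch of that idea in Python (the table size, history length, and hashing are my own assumptions, not the paper's parameters): the recent PC history is collapsed into an unordered feature, which indexes a table of counters predicting whether a line will be cache friendly.

    TABLE_SIZE = 4096
    predictors = [0] * TABLE_SIZE      # small saturating counters
    HISTORY_LEN = 8

    def feature_index(recent_pcs):
        # Unordered feature: OR per-PC hash bits into a bitvector,
        # rather than keeping an ordered history list
        bits = 0
        for pc in recent_pcs[-HISTORY_LEN:]:
            bits |= 1 << (hash(pc) % 64)
        return hash(bits) % TABLE_SIZE

    def predict_cache_friendly(recent_pcs):
        return predictors[feature_index(recent_pcs)] >= 0

    def train(recent_pcs, was_reused):
        i = feature_index(recent_pcs)
        delta = 1 if was_reused else -1
        predictors[i] = max(-4, min(3, predictors[i] + delta))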

NVBit: A Dynamic Binary Instrumentation Framework for NVIDIA GPUs
This is a release of a Pin-like tool, but for GPUs.  Using the framework, you can write specific instrumentation to be applied to CUDA kernels.  The framework analyzes the kernel to find the instrumentation points and then recompiles / JITs the code to integrate the requested instrumentation into the kernel, without requiring the kernel's source code.  Instrumentation such as counting or tracing the specific instructions executed lets you build a simulator or error checker on top.

Saturday, September 28, 2019

Repost: Active Learning is Better even if Students Don't Like It

Active learning is a set of techniques that require the student to take an active role in their learning during lecture.  Research strongly supports that students learn more when the lecture utilizes these techniques, and I have measured this effect in my own courses.  However, this research shows that students like lectures using these techniques less, even though they are learning more.  I have also informally observed this, such as students who say at the end of the first lecture, "If you are going to require me to participate in lecture, I will not return".  Unfortunately, the present educational model relies on student evaluations (which primarily measure what students like) to evaluate the quality of instruction.  Perversely, this model therefore encourages suboptimal teaching and learning.

The paper then recommends that professors take time at the beginning of the semester to demonstrate the benefits and gain buy-in from the students, and then continue to do so.  Students want to learn, so they will support this pedagogy.  And many students will recognize the value with time, if they give it that time.

Thursday, September 12, 2019

Thesis Proposal: Theoretical Foundations for Modern Multiprocessor Hardware

Naama Ben-David gave her proposal this morning on Theoretical Foundations for Modern Multiprocessor Hardware.

Is there a theoretical foundation for why exponential backoff is a good design?  Exponential backoff is an algorithm that was developed through practice rather than theory.
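
As a reminder of the algorithm in question, a minimal Python sketch of exponential backoff with randomized jitter (the constants are arbitrary, and try_acquire is any callable that returns True on success):

    import random
    import time

    def acquire_with_backoff(try_acquire, base=0.001, cap=1.0):
        # Retry a contended operation, doubling the randomized wait
        # (the self-delay) after each failed attempt
        delay = base
        while not try_acquire():
            time.sleep(random.uniform(0, delay))
            delay = min(cap, delay * 2)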

To develop such a foundation, we need a model of time; however, requests are asynchronous and do not follow a single time source.  To address this, model time with adversarial scheduling.  Thus, when performing a request, there are three sources of delay:
  • self-delay: backoff, sleep, local computation
  • system-delay: interrupts, context switches
  • contention-delay: delay caused by contention
Given this model, the adversary can, to a limited degree, decide when a request that has passed from self-delay into system-delay then moves to contention-delay and is ultimately completed.

In Ben-David and Blelloch '17, this model was applied and the work was measured for different approaches:
  • With no backoff, there is Ω(n^3) work.
  • Exponential backoff reduces this to a Θ(n^2 log n) bound on work.
  • The paper also proposes a new algorithm that achieves O(n^2) work with high probability.
The second phase of work is developing simple and efficient algorithms for systems that have non-volatile memory (NVRAM).  With NVRAM, on a crash or system failure, the contents in memory persist across reboot (or other restore).  This permits the system to restore the running program(s) to a finer degree than happens from auto-saves or other current techniques.  However, systems also have caches, which are not persistent.  Caches are presently managed by hardware and make decisions as to when to write contents back to memory.  Algorithms must work with the caches to ensure that results are safely in memory at selected points of execution.  There are a variety of approaches for how to select these points.

The third phase of work is modeling RDMA (remote direct memory access) systems.  Can there be a model of the different parts of such a system: memory, NIC (network interface card), and CPU?  Then explore the contention as well as possible failures in the system.

One scheme is for every process to also be able to send messages on behalf of its shared-memory neighbors, so that even if a process fails, it can still (through its neighbors) participate in algorithms such as consensus.

As this is a proposal, ongoing work will build instantiations of these algorithms to measure their practical performance.

Monday, April 8, 2019

Presentation: The Quest for Energy Proportionality in Mobile & Embedded Systems

This is a summary of the presentation on "The Quest for Energy Proportionality in Mobile & Embedded Systems" by Lin Zhong.

We want mobile and other systems to be energy efficient, and in particular to use energy in proportion to the intensity of the required operation.  However, processor architectures have only limited regions where these are in proportion, given certain physical and engineering constraints on the design.  ARM's big.LITTLE gives a greater range of efficiency by placing two ISA-compatible cores onto the same chip; however, it is constrained by the need to keep the cores cache coherent.

Recent TI SoC boards also contain another ARM core, running the Thumb ISA for energy efficiency.  This additional core was hidden behind a TI driver (originally to support MP3 playback), but it was recently exposed, allowing further designs to utilize it as part of the computation.  This core, however, is not cache coherent with the main core on the board.

So Linux was extended to be deployed onto both cores (compiled for the different ISAs), while maintaining the data structures, etc., in the common, shared memory space.  The application can then run on and migrate between the cores, based on application hints about the required intensity of operations.  On migration, one core domain is put to sleep and releases the memory to the other core.  This design avoids synchronization between the two domains, which simplifies the code, and the concurrency demands in the mobile space are low.  It is also a rare demonstration of software-managed cache coherence.

DVFS provides about a 4x range in power, and big.LITTLE another 5x.  The hidden Thumb core supports an additional 10x reduction in power for low-intensity tasks, such as mobile sensing.  Together (roughly a 200x range), this covers a significant part of the energy / computation space.

However, this does not cover the entire space of computation.  At the lowest end, there is still an energy-intensive ADC (analog-to-digital conversion) component, equivalent to tens of thousands of gates.  Many computations, though, could be pushed into the analog domain, which saves power both by computing a simpler result for digital consumption and by allowing the computation to be performed on lower-quality input (tolerating noise), which reduces the energy demand.

Saturday, March 23, 2019

Repost: Code Smells ... Is concurrency natural?

Writing parallel code is not considered easy, but it can be a natural approach to some problems for novices.  When a beginner wants something to happen twice concurrently, the reasonable thing would be to do what worked once a second time.  Instead, this may conflict with other constructs of the language, such as main() or having to create threads.  See more here.

Thursday, March 7, 2019

Talk: Concurrent Data Structures for Non-Volatile Memory

Today, Michal Friedman gave a talk on Concurrent Data Structures for Non-Volatile Memory.

Future systems will contain non-volatile memory: memory that exhibits normal DRAM characteristics but maintains its contents even across power failures.  In current systems, caches update memory on either evictions or flushes.  Flushes, however, impose overhead, due both to the memory access time and to overriding the write-back nature of most caches.

Linearizability is one correctness condition for concurrency, governing how operations may be observed.  It can be extended to durable linearizability for a durable system: data is flushed before it becomes globally visible (initialization), prior operations it depends on are flushed (dependence), and operations are persisted before they complete (completion).  A further extension is required to know when a sequence of operations is complete, beyond just taking snapshots of the memory state.
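
To make those three rules concrete, here is a sequential toy in Python (not the paper's lock-free queue) with a stand-in persist() for the cache-line flush and fence that a real implementation would issue:

    def persist(obj):
        # Stand-in for flushing obj's cache lines and fencing
        # (e.g., CLWB + SFENCE); purely illustrative here
        pass

    class Node:
        def __init__(self, value):
            self.value = value
            self.next = None

    class DurableQueue:
        def __init__(self):
            self.head = self.tail = Node(None)   # sentinel

        def enqueue(self, value):
            node = Node(value)
            persist(node)        # initialization: durable before visible
            persist(self.tail)   # dependence: flush state we rely on
            self.tail.next = node
            persist(self.tail)   # completion: persist the link ...
            self.tail = node
            persist(self)        # ... and the tail update before returning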

The talk presented relaxed, durable, and log versions of a lock-free queue that extend Michael and Scott's baseline queue implementation.  Each version provides stronger guarantees: relaxed is the existing queue augmented with a sync operation to snapshot state, durable preserves the data structure across failures, and log additionally identifies the specific state.  The main guarantee is that the data structure stays consistent under any set of thread crashes, which is stronger than the lock-free guarantee.

This is done by extending the prior lock-free versions with memory flushes of key state, so that later updates which see still-volatile state will flush that state before completing their own operations.  This meets durable linearizability.  It can be extended further by also keeping a log of operations that is updated and persisted before the operations themselves execute.  These logs are per-thread, so they can remain unordered and individually stateful.

The relaxed version implements sync by creating a special object that indicates a snapshot is occurring.  If other concurrent operations find this object, they take over the snapshot and finish persisting the state before completing their own operations.  Thus a snapshot does not block other operations, but it still takes effect at that point in the sequence of operations.

Based on performance measurements, the relaxed version performs similarly to the baseline implementation, while the durable and log-based implementations run slower than the relaxed version but similarly to each other.

Finally, TSO provides a guarantee that stores will reach the cache in the desired order, which avoids needing flushes between writes.

Saturday, March 2, 2019

Conference Attendance - SIGCSE 2019 - Day 2.5

Continuing at SIGCSE, here are several more paper talks that I attended on Friday.  Most of the value at SIGCSE comes from the friendly conversations with other attendees.  From 5 to 11 pm, I was in the hotel lobby talking with faculty and students: discussing research ideas, telling our stories from teaching, and generally renewing friendships within the field.

On the Effect of Question Ordering on Performance and Confidence in Computer Science Examinations
On the exams, students were offered a bonus if they could predict their score to within 10%.  Does the order of questions (easy to hard, or hard to easy) have any impact on their estimated or actual performance on an exam?  Students overpredicted their scores by more than 10%.  As a whole, the hard-to-easy students did worse, but this result was not statistically significant.  A small improvement was seen for women when the exam started with the hardest problem.

I wonder whether students were biased in their predictions by the reward.  Ultimately, the authors gave the reward to all students regardless of the quality of their prediction.

The Relationship between Prerequisite Proficiency and Student Performance in an Upper-Division Computing Course
We have prerequisites to ensure that students are prepared for the later course, here an upper-level data structures class.  Students started with, on average, 57% of the expected prerequisite knowledge and finished the course having improved on that knowledge by 8%.  There is a correlation between prerequisite score and final score.  Across the several prerequisites, some knowledge concepts correlate more strongly than others; assembly is a surprising example of a concept that does.  Students benefit from interventions that address these deficiencies early in the term.

Afterward, we discussed that this work did not explore which prerequisite knowledge correlated only weakly with student learning.  How might we better understand which prerequisites actually support the learning in a course?  Furthermore, can we better understand the general background of students in the course, such as class standing or general experience?

Visualizing Classic Synchronization Problems
The tool visualizes three classic synchronization problems: dining philosophers, bounded producer-consumer, and readers-writers.  Each one has a window displaying the operations, as well as multiple algorithmic strategies.  With these visualizations, do students learn better, and do they find the problems more engaging than when reading about them in the textbook?  While not statistically significant, the control group exhibited better recall, although the visualization group had higher engagement.  That said, the control group also had higher course grades, so the difference in learning may actually stem from unrelated factors.

Friday, March 1, 2019

Conference Attendance: SIGCSE 2019 - Day 1.5

Back at SIGCSE again, this one the 50th to be held.  Much of my time is spent dashing about and renewing friendships.  That said, I made it to several sessions.  I've included at least one author and linked to their paper.

Day 2 begins with the keynote from Mark Guzdial.

"The study of computers and all the phenomena associated with them." (Perlis, Newell, and Simon, 1967).  The early uses of Computer Science were proposing its inclusion in education to support all of education (1960s).  For example, given the equation "x = x0 + v*t + 1/2 a * t^2", we can also teach it as a algorithm / program.  The program then shows the causal relation of the components.  Benefiting the learning of other fields by integrating computer science.

Do we have computing for all?  Most high school students have no access to computing classes, and even when they do, most do not take them.

Computing is a 21st century literacy.  What is the core literacy that everyone needs?  C.f. K-8 Learning Trajectories Derived from Research Literature: Sequence, Repetition, Conditionals.  Our goal is not teaching Computer Science, but rather supporting learning.

For example, let's learn about acoustics.  Mark explains the straight physics, then brings up a program (in a block-based language) that can display the sound reaching the microphone.  So the learning came from the program, demonstration, and prediction, not from writing and understanding the code itself: taking data and helping to build narratives.

We need to build more, try more, and innovate.  To meet our mission, "to provide a global forum for educators to discuss research and practice related to the learning and teaching of computing at all levels."

Now for the papers from day 1:

Lisa Yan - The PyramidSnapshot Challenge

The core problem is that we only view student work through the completed snapshots.  They extended Eclipse with a plugin to record every compilation, giving about 130,000 snapshots from 2600 students.  For those snapshots, they needed an automated approach to classify the intermediate states.  They tried autograders and abstract syntax trees, but those could not capture the full space.  But the output is an image, so why not try image classification?  Of the 138,531 snapshots, they generated 27,220 images.  Lisa then manually labeled 12,000 of those images into 16 labels that are effectively four milestones in development.  A neural network classifier then classified the remaining images.  Plotting the milestones using a spectrum of colors (blue being the start, red being perfect): good students quickly reach the complete milestones; struggling students are often stuck in the early debugging stages; tinkering students (~73rd percentile on exams) take a lot of time, but mostly spend it on the later milestones.  From these, we can review assignments and whether students move through the declared milestones, or whether a different assignment structure is required.

For the following three papers, I served as the session chair.

Tyler Greer - On the Effects of Active Learning Environments in Computing Education

A replication study on the impact of using an active-learning classroom versus a traditional room.  The same instructor taught the same course, but in different classrooms and with different lecture styles (traditional versus peer instruction).  The most significant factor was the use of active learning versus traditional lecture, with no clear impact from the type of room used.

Yayjin Ham, Brandon Myers - Supporting Guided Inquiry with Cooperative Learning in Computer Organization

Taking a computer organization course taught with peer instruction and guided inquiry, can the peer instruction be traded for cooperative learning to further emphasize engagement and learning?  The structure is exploration of a model (program, documentation), then concept invention (building an understanding), then application (applying the learned concepts to a new problem), with reflection on the learning at the end of each "lecture".  In back-to-back semesters, they measured the learning gains from this intervention, and also surveyed other secondary items (such as engagement and peer support).  However, the students in the intervention group did worse, although most of the difference is explained by prior GPA.  And across the other survey points, students in the intervention group rated the course lower.  The materials used are available online.

Aman, et al - POGIL in Computer Science: Faculty Motivation and Challenges

Faculty try implementing POGIL in the classroom, starting with training, then implementation in the classroom, and continued innovation.  Faculty want to see more motivation, better retention of the material, and students staying in the course (as well as in the program).  Students show a mismatch between their learning and their perceived learning.  There are many challenges and concerns from faculty about the costs of adoption.