
Monday, October 14, 2019

Conference Attendance - MICRO 52 - Day 1

I am in Columbus, Ohio for MICRO 52.  A third of the attendees drove from other "midwestern" universities; I am one of them.

Keynote: Rejuvenating Computer Architecture Research with Open-Source Hardware

Moore's Law is irrelevant now, as the cost per transistor has held steady since the 28nm technology node.  The cost of any deployment depends on the development cost; only at very large scales is the cost per transistor dominant.  Given that, how can we reduce the cost of hardware development?

There was a Cambrian explosion of (RISC) ISAs from the mid-1980s on, with a great diversity of ISAs being created and competing.  Then the Intel Pentium came out, which combined the CISC ISA with translation into RISC-like micro-ops.  This extinction event destroyed most of those ISAs.

Why does the instruction set architecture (ISA) matter?  It is the dominant interface in the system, defining the interaction between software and hardware.  But ISAs are currently proprietary and tied to the fortunes of their companies.  Many ISAs have come and gone.  And then each SoC (system on a chip) gets custom ISAs for each of its accelerators.

So there is now the RISC-V ISA that is open for use and development (which I wrote about here).  The RISC-V foundation was formed in 2015 to be the neutral guardian of the specification and formal model.  Based on this specification, there are both open-source and commercial implementations of the hardware as well as the software ecosystem.

ComputeDRAM: In-Memory Compute Using Off-the-Shelf DRAMs
DRAM is designed around commands being sent in a specific order with appropriate timings.  The oddity is that if specific commands and timings are used that violate the normal usage, then the DRAM module can perform certain operations, such as AND and OR, using three specially prepared rows (two sources and a destination).
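As a rough illustration of the trick (my own reconstruction from the talk, with a hypothetical controller interface; the exact command sequences and timing parameters are in the paper), the key is an ACTIVATE-PRECHARGE-ACTIVATE sequence whose gaps are far below the datasheet minimums:

```cpp
#include <cstdio>

// Hypothetical low-level hook into the memory controller; not a real driver API.
enum class Cmd { ACT, PRE };

struct DramController {
    // Issue a DDR command, then wait only `delay_cycles` before the next one.
    void issue(Cmd c, int row, int delay_cycles) {
        std::printf("%s row=%d delay=%d\n",
                    c == Cmd::ACT ? "ACT" : "PRE", row, delay_cycles);
    }
};

// Sketch of an in-DRAM logical OR over specially prepared rows.  The
// deliberately violated timings leave multiple rows open at once, so their
// cells share charge on the bitlines and the majority value wins.
void in_dram_or(DramController& mc, int row_src1, int row_src2) {
    // Beforehand: copy the two operands, plus a constant all-ones row,
    // into the three participating rows.
    mc.issue(Cmd::ACT, row_src1, /*delay_cycles=*/1);  // activation cut short
    mc.issue(Cmd::PRE, 0,        /*delay_cycles=*/1);  // precharge cut short
    mc.issue(Cmd::ACT, row_src2, /*delay_cycles=*/0);  // opens while rows are live
    // Majority of three cells: a constant-1 row yields OR; constant-0 yields AND.
}
```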

Hybrid Skiplist: Combining the Best of Near-Data-Processing and Lock-Free Algorithms
This is a student research competition work that I want to highlight.  The work takes skip-lists, multi-level linked lists that support more efficient traversals, which have been implemented both on near-data processing (NDP) systems and as lock-free algorithms.  The performance of the two implementations is comparable, but we should be able to do better.  The observation is that the lock-free version gains by having the long, frequently-accessed links in the cache, while NDP keeps the data items close to memory.  Therefore, let's combine the two approaches so the algorithm uses the lock-free approach on the long links and leaves the rest in NDP.  A dynamic scheme then adapts which nodes are in the long list, promoting them while demoting less frequently accessed elements.
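A minimal sketch of how the hybrid layout might look (my own reconstruction, not the authors' code; the NDP handle and the tower height are assumptions): the upper-level "long links" are walked lock-free in host caches, while each node's payload stays in NDP memory:

```cpp
#include <atomic>
#include <cstdint>

// Sketch of a hybrid skip-list node, assuming some NDP offload interface.
struct HybridNode {
    uint64_t key;
    std::atomic<HybridNode*> next[4];  // tower of forward pointers (height 4 assumed)
    uint32_t ndp_handle;               // hypothetical handle to the payload in NDP memory
    std::atomic<uint32_t> hotness;     // access counter driving promotion/demotion
};

// Lock-free descent over the cached upper levels; in the full design, the
// bottom level would be handed off to the NDP unit instead.
HybridNode* find_ge(HybridNode* head, uint64_t key, int top_level) {
    HybridNode* cur = head;
    for (int lvl = top_level; lvl >= 0; --lvl) {
        HybridNode* nxt;
        while ((nxt = cur->next[lvl].load(std::memory_order_acquire)) != nullptr &&
               nxt->key < key) {
            cur = nxt;
        }
    }
    cur->hotness.fetch_add(1, std::memory_order_relaxed);  // feeds the dynamic adaptor
    return cur->next[0].load(std::memory_order_acquire);
}
```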

Applying Deep Learning to the Cache Replacement Problem
Let's apply machine learning to cache replacement.  Offline, an ML model can perform better than the best replacement schemes, but it requires lots of space, more than the cache itself.  Current algorithms (such as Hawkeye) use just the current PC, whereas the machine learning model includes history, so perhaps history has value.  Analyzing that history further, they noticed that the history information does not need to be complete, nor does it have to be ordered.  If it does not need to be ordered, then the history can be a feature set (i.e., a bitvector) rather than a full list, and the history feature gives an index into a table of predictors for whether a line is cache-friendly.
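A sketch of the resulting mechanism as I understood it (the sizes, hash, and thresholds are my placeholders, not the paper's): recent PCs are folded into an unordered feature bitvector, which indexes a table of saturating counters predicting whether the line is cache-friendly:

```cpp
#include <cstdint>

// Illustrative predictor with made-up sizes: a 16-bit history feature
// indexing 3-bit saturating counters.
struct HistoryPredictor {
    uint16_t history = 0;           // unordered set of recent-PC hash bits
    uint8_t  table[1 << 16] = {};   // one counter per history pattern

    void observe_pc(uint64_t pc) {
        history |= uint16_t(1u << (pc % 16));  // set membership; order is discarded
    }
    bool predict_cache_friendly() const {
        return table[history] >= 4;            // counter past the midpoint
    }
    void train(bool line_was_reused) {
        uint8_t& c = table[history];
        if (line_was_reused) { if (c < 7) ++c; }
        else                 { if (c > 0) --c; }
    }
};
```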

NVBit: A Dynamic Binary Instrumentation Framework for NVIDIA GPUs
This is a release of a Pin-like tool, but for GPUs.  Using the framework, you can write specific instrumentation to be applied to CUDA kernels.  The framework analyzes a kernel to find the requested instrumentation points and then recompiles / JITs the code to integrate the instrumentation into the kernel, without requiring the kernel's source code.  Instrumentation such as counting or tracing the specific instructions executed can then be used to build a simulator or error checker.
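To give the flavor, here is a condensed instruction-counting tool, loosely based on the instr_count example that ships with NVBit (abbreviated and from memory, so consult the NVBit release for the exact API):

```cpp
#include "nvbit.h"        // NVBit instrumentation API
#include "nvbit_tool.h"

__managed__ uint64_t counter = 0;

// Device function injected before every instruction of the target kernel.
extern "C" __device__ __noinline__ void count_instr(int pred, uint64_t pcounter) {
    if (pred) {
        atomicAdd((unsigned long long*)pcounter, 1ULL);
    }
}
NVBIT_EXPORT_FUNC(count_instr);

// Host side: walk the kernel's instructions and insert the calls;
// no kernel source code is needed.
void instrument(CUcontext ctx, CUfunction func) {
    for (Instr* i : nvbit_get_instrs(ctx, func)) {
        nvbit_insert_call(i, "count_instr", IPOINT_BEFORE);
        nvbit_add_call_arg_guard_pred_val(i);              // pass the guard predicate
        nvbit_add_call_arg_const_val64(i, (uint64_t)&counter);
    }
}
```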

Friday, September 14, 2018

Is C low level?

A recent ACM article, C is Not a Low-Level Language, argues that for all of our impressions that C is close to hardware, it is not actually "low-level".  The argument is as follows: C was written for the PDP-11 architecture, and at the time it was a low-level language.  As architectures have evolved, C's machine model has diverged from the hardware, which has forced processor designs to add new features to attain good performance with C, such as superscalar execution for ILP and extensive branch prediction.
Processors must also maintain caches to hide the memory latency, which requires significant logic to maintain coherence and the illusion that memory is shared between the threads of a process.  Furthermore, the compiler is also called upon to find optimization opportunities, which may be unsound and definitely require programmer-years to implement.

The author repeatedly contrasts C with GPUs, while noting that they require very specific problems, or come "at the expense of requiring explicitly parallel programs".  If you were not keeping track, a GPU requires thousands of threads to match a CPU's performance.  The author calls for "a processor designed purely for speed, not for a compromise between speed and C support, [which] would likely support large numbers of threads, have wide vector units, and have a much simpler memory model".  That generally sounds like the GPU design.

I appreciate the callouts to C's shortcomings, which it certainly has.  The notion that C has driven processor design is odd, yet it does reflect the fact that processors are designed to run current programs fast.  And with programs being written either in C or in a language built on C, many programs are forced into particular patterns.  I even spent some time in my PhD studies considering a version of this problem: how do you design a new "widget" for the architecture when no programs are written to use widgets?

All to say, I think C is a low-level language, and while its distance from hardware may be growing, there is nothing else beneath it.  This is a gap that needs to be addressed, preferably by a language with explicit parallel support.

Monday, May 1, 2017

Book Review: Multicore and GPU Programming: An Integrated Approach

I wanted to like this book, Multicore and GPU Programming: An Integrated Approach, and be able to use it in the classroom.  I teach a class where we cover OpenMP, MPI, Cilk, and CUDA.  Yet I would not use this book, and I have told the publisher as much; they acknowledged my reasoning.

First, the author elects to use QtThreads as the base parallel programming approach.  He notes that he had considered using pthreads or C++11 thread support, and comments that he rejected C++11 threads for the book as the support was incomplete at the time.  Pthreads is left without comment.  That may be, but I have heard of and used those options, while QtThreads is something I had never considered.

Second, the book serves as an admirable API reference.  For one or a set of function calls, significant space is dedicated to how that call works, with illustrative examples.  Structured Parallel Programming also covers many APIs, yet it maintains a feel of being about parallel programming in general rather than about the calls specifically.  However, that work covers different API sets, so the two are not directly comparable.

Third, and this issue is really more for the editors than the author, the typesetting on each page is poor.  There is significant white space left bordering the text on each page.  Furthermore, the code wraps, often not because of long code but because of long comments and comments on the same line as the code.  I understand that in programming these are stylistic choices; however, the line wraps from long comments leave the text looking unprofessional.  I must assume that the author wrote the code separately and then provided it for inclusion in the book, but the editor failed to make allowances for typesetting.

In conclusion, I wanted to like and use the book.  Whenever I speak with the publisher, they always direct me to it; I just have to hope for something else to come along.  Use it as a reference perhaps, but I am cautious in my recommendation.

(This book was provided free by the publisher to review for possible use in the classroom.)

Monday, January 18, 2016

Conference Attendance HiPEAC - Day 1 - MULTIPROG

It is, once again, conference time.  For North Americans, this might seem rather early, as I am writing from Prague, Czech Republic (or at least I was when I started 12 hours ago).  I am attending HiPEAC, which is the premier European computer architecture conference.  HiPEAC is a dual-track conference.  Throughout the three days there is the paper track, where papers accepted to TACO (such as mine) are presented, and simultaneously there are workshops.  For the first day, I am starting with the MULTIPROG workshop, which covers Programmability and Architectures for Heterogeneous Multicores.

Let's start with the keynote, given by David Kaeli of Northeastern University, which covered:
- Concurrent execution of compute kernels
- Scheduling of kernels, deadlines
- Sharing / access to host memory (i.e., RAM)

The current model of using a GPGPU is that it runs one computation kernel; however, there are many problems that would better decompose into several separate kernels.  It would also be valuable to have further examples of these problems (i.e., benchmarks).  Now, whenever you try running multiple anything on a computational resource, there is a runtime scheduling problem: which kernel should run next to best complete the overall problem?  A follow-on research question explores this in a cloud-based environment, where the GPU may be shared across entirely independent compute kernels.  This requires the kernels to be tagged with IDs to ensure that their memory is kept separate.  All of this sounds as if we need an OS for the GPU.
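For concreteness, here is a minimal CUDA sketch (my own illustration, not from the talk) of launching two independent kernels into separate streams so the hardware is free to execute them concurrently:

```cpp
__global__ void kernel_a(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

__global__ void kernel_b(float* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    // Separate streams: the runtime may overlap the two kernels if resources
    // allow; launching both into the default stream would serialize them.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    kernel_a<<<n / 256, 256, 0, s1>>>(x, n);
    kernel_b<<<n / 256, 256, 0, s2>>>(y, n);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Deciding which stream's work runs first, and keeping the kernels' memory separate when they belong to different tenants, is exactly the OS-like layer the keynote was pointing at.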

Following the late-morning break, we heard next from MECCA (MEeting the Challenges in Computer Architecture), on the 3Ps: parallelism, power, and performance.  Consider parallel program annotations that describe the concurrency: the runtime manages the caches using the annotations to track the flow of data, transferring data before it is required and with the appropriate coherence states, and to learn when a block is dead and can be evicted from the cache.
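To make that concrete, annotations along these lines (a purely hypothetical syntax of mine, not MECCA's actual notation) could tell the runtime what a task reads, writes, and has finished with:

```cpp
// Hypothetical annotation sketch; not MECCA's real syntax.
// "in" lets the runtime prefetch A and B in a shared/read coherence state
// before the task runs; "out" requests exclusive state for C; "dead" marks
// B as evictable once the task completes.
#pragma task in(A[0:N], B[0:N]) out(C[0:N]) dead(B)
void stage(const float* A, const float* B, float* C, int N) {
    for (int i = 0; i < N; ++i) {
        C[i] = A[i] + B[i];  // B is consumed for the last time here
    }
}
```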

Then there was lunch, resting from my flights, and then networking, especially the part where I stood by my poster and discussed my research for three hours.  Now to rest for day 2.