Tuesday, November 10, 2015

CSE Distinguished Lecture - Professor David Patterson - Instruction Sets want to be Free: The Case for RISC-V

As with every other talk on computer architecture, we first need to revisit history.  Only by knowing where we came from can we envision where to go.

History of ISA:
The IBM System/360 was proposed to unify IBM's diverse lines of mainframes.  Over time, ISAs slowly added more instructions to support more features (see below).

The Intel 8086 was a crash ISA design program to cover for Intel's original replacement ISA, which was delayed.  IBM then wanted to adopt the Motorola 68000, but that chip was also late, so the IBM PC used the 8088.

In the 1980s, researchers found that if compiled code used only simple instructions, programs ran faster than when they used the full instruction set.  Why not design a simple ISA?

RISC (Reduced Instruction Set Computing) was that ISA.  The processor is simpler and faster.  Secretly, all processors are now RISC internally; for example, Intel and AMD processors translate their x86 instructions into an internal RISC-like form.

Perhaps several simple instructions could execute together, simplifying the architecture further; the compiler finds these groupings ahead of time rather than spending time and energy while the program runs.  This ISA style is VLIW (very long instruction word), where many simple instructions are merged into one long instruction.

Open ISA:
Computer architecture is reaching limits such that future processor gains will increasingly come from custom, dedicated logic.  Yet IP issues limit the ability to do research on ISAs, so researchers are forced to write simulators that may or may not mimic actual hardware behavior.

Proprietary ISAs continue to grow, at roughly 2 instructions per month.  This growth provides a marketing reason to purchase the new cores, beyond the architectural improvements alone.

Instead, let's develop a modular ISA using many of the ideas from existing designs.  For example, atomic instructions for both fetch-and-op and load-linked / store-conditional, or a compressed instruction format so common instructions can use 16 bits rather than 32 (as in ARM's Thumb).
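The load-linked / store-conditional idea can be modeled in a few lines: the load returns a value plus a version tag, and the conditional store succeeds only if no other store has bumped the tag in between, so the caller retries on failure.  A toy Python sketch of these semantics (the `Cell` class and its method names are illustrative, not any real hardware or library API):

```python
class Cell:
    """Memory word with a version tag, modeling LL/SC semantics."""
    def __init__(self, value):
        self.value = value
        self.version = 0

    def load_linked(self):
        # Return the value together with the current version tag.
        return self.value, self.version

    def store_conditional(self, new_value, version):
        # Fail if any successful store happened since the load.
        if self.version != version:
            return False
        self.value = new_value
        self.version += 1
        return True

def atomic_add(cell, n):
    """Fetch-and-add built from the LL/SC pair, retrying on conflict."""
    while True:
        value, version = cell.load_linked()
        if cell.store_conditional(value + n, version):
            return value

cell = Cell(0)
atomic_add(cell, 5)
atomic_add(cell, 5)
print(cell.value)  # 10
```

The sketch also shows why an ISA might offer both styles: fetch-and-op is a single instruction, while LL/SC composes into arbitrary read-modify-write sequences at the cost of a retry loop.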

RISC-V has support for the standard open-source software: compilers (gcc, LLVM), Linux, etc.  It also provides synthesizable core designs, simulators, etc.

Tuesday, November 3, 2015

PhD Defense Samantha Lo - Design and Evaluation of Virtual Network Migration Mechanisms on Shared Substrate

PhD Candidate Samantha Lo defended her dissertation work today.

Virtual networks (VNs) may require migration due to maintenance, resource balancing, or hardware failures.  Migration occurs when the assignment of virtual to physical network resources changes.  VN assignment has two aspects: policy of where to assign, and mechanism of how to do so.  Similar questions exist for migration or changing the assignment.

This dissertation work focuses on that last piece: the mechanism of how to change the assignment.  By the time virtual network nodes are migrated, the policy aspect has already identified that a migration should occur and where the nodes should now be placed.

Chapter 3 explored scheduling algorithms in a simulated environment, where layers 2 and 3 can be changed.  The goal is to determine a migration schedule that will minimize the overhead / disruption of the migration and the time required to perform the migration.  For example, Local Minimum Cost First (LMCF) selects one node at a time to migrate.  In contrast, Maximal Independent Set tries to identify multiple nodes to move at once to reduce the time to migrate.
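The independent-set approach can be sketched greedily: walk the nodes and pick each one that does not conflict with any node already chosen, so the chosen set can migrate in parallel.  A toy Python sketch, where the conflict relation (e.g., two virtual nodes that cannot safely move in the same round) is a hypothetical input, not the dissertation's actual model:

```python
def maximal_independent_set(nodes, conflicts):
    """Greedily pick nodes that can migrate together.

    nodes     -- iterable of node identifiers
    conflicts -- set of frozenset pairs that must not migrate in the
                 same round (hypothetical conflict relation)
    """
    chosen = []
    for node in nodes:
        # Keep the node only if it conflicts with nothing chosen so far.
        if all(frozenset((node, other)) not in conflicts for other in chosen):
            chosen.append(node)
    return chosen

nodes = ["a", "b", "c", "d"]
conflicts = {frozenset(("a", "b")), frozenset(("b", "c"))}
print(maximal_independent_set(nodes, conflicts))  # ['a', 'c', 'd']
```

A greedy pass like this yields a maximal (not necessarily maximum) independent set, which matches the stated goal: move as many nodes as possible per round to shorten the total migration time.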

Chapter 4 explored an actual implementation in PlanetLab, where there is access to layer 3.  Virtual networks were placed within PlanetLab.  When a network migrated, it experienced up to 10% packet loss.  However, if the gateways of the VNs were synchronized to migrate closer together in time, the loss was lessened.

Chapter 5 addressed the performance issues raised in the previous work through transport- and application-layer changes.  When a VN migrates, the new location may have different physical characteristics.  Analysis of the TCP traffic showed that on migration, the packet transmission rate dropped dramatically as the window size fell.  How can this be avoided?

1) Controller notifies the applications to delay packet transmission to avoid packet loss.
2) Gateway pauses and buffers traffic.

Under the latter scheme, the gateway fools the sender into thinking that the TCP connection is still working while traffic is actually being buffered.  Furthermore, the network also uses Split TCP, such that each "->" in user->gateway->gateway->user is a separate connection.  Split TCP hides the end-to-end RTT from the user, which potentially permits the gateway to send data faster on resume.
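The pause-and-buffer behavior can be sketched in a few lines: while paused, the gateway queues packets instead of forwarding (or dropping) them, and on resume it drains the queue in order.  A toy Python model (the `MigratingGateway` class and its method names are illustrative, not the dissertation's implementation):

```python
from collections import deque

class MigratingGateway:
    """Toy model of a gateway that buffers traffic during VN migration.

    Because the sender's Split TCP connection terminates at the gateway,
    the sender keeps seeing ACKs while packets sit in the buffer instead
    of being lost in transit.
    """
    def __init__(self):
        self.paused = False
        self.buffer = deque()
        self.delivered = []

    def pause(self):
        # Controller signals that migration is about to begin.
        self.paused = True

    def resume(self):
        # Migration finished: drain buffered packets in arrival order.
        self.paused = False
        while self.buffer:
            self.delivered.append(self.buffer.popleft())

    def forward(self, packet):
        if self.paused:
            self.buffer.append(packet)   # hold rather than drop
        else:
            self.delivered.append(packet)

gw = MigratingGateway()
gw.forward("p1")
gw.pause()
gw.forward("p2")
gw.forward("p3")
gw.resume()
print(gw.delivered)  # ['p1', 'p2', 'p3']
```

The key property the sketch preserves is that no packet is lost or reordered across the pause, which is what keeps the sender's TCP window from collapsing.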

After the command to pause data transmission is sent, the system must wait a short time before actually migrating the network.  Otherwise, packets still in flight will be lost as the network migrates, and TCP will attempt to retransmit them using its exponential backoff.  That backoff can then delay the resumption of data transmission after migration, imposing additional overhead.  By placing a delay between pausing and migrating, the network is quiesced and resumes more quickly.