

  1. Haskell to Hardware and Other Dreams Stephen A. Edwards Richard Townsend Martha A. Kim Lianne Lairmore Kuangya Zhai Columbia University Synchron, Bamberg, Germany, December 7, 2016

  2. Popular Science, November 1969

  3. Where Is My Jetpack? Popular Science, November 1969

  4. Where The Heck Is My 10 GHz Processor?

  5. Moore’s Law: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.” In practice, closer to every 24 months. Gordon Moore, Cramming More Components onto Integrated Circuits, Electronics 38(8), April 19, 1965.

  6. Four Decades of Microprocessors Later... Source: https://www.karlrupp.net/2015/06/40-years-of-microprocessor-trend-data/

  7. What Happened in 2005?

                   Pentium 4   Core 2 Duo   Xeon E5
     Year          2000        2006         2012
     Cores         1           2            8
     Transistors   42 M        291 M        2.3 G
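
     These counts still track Moore's doubling closely. As a quick check (my
     arithmetic, not from the slides), the doubling period implied by each
     pair of data points:

         -- Doubling period implied by two (year, transistor-count) points.
         doublingYears :: (Double, Double) -> (Double, Double) -> Double
         doublingYears (y0, n0) (y1, n1) = (y1 - y0) / logBase 2 (n1 / n0)

         main :: IO ()
         main = do
           print (doublingYears (2000, 42e6)  (2006, 291e6))  -- ~2.15 years
           print (doublingYears (2006, 291e6) (2012, 2.3e9))  -- ~2.01 years

     Transistor counts kept doubling roughly every two years; what stopped
     around 2005 was the single core getting faster.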

  8. The Cray-2: Immersed in Fluorinert. 1985. ECL. 150 kW.

  9. Heat Flux in IBM Mainframes: A Familiar Trend Schmidt. Liquid Cooling is Back . Electronics Cooling. August 2005.

  10. Liquid Cooled Apple Power Mac G5. 2004. CMOS. 1.2 kW.

  11. Dally: Calculation Cheap; Communication Costly
      “Chips are power limited and most power is spent moving data.”
      On a 20 mm chip:
        64b floating-point FPU (0.1 mm², 1.5 GHz):  50 pJ/op
        64b word over a 1 mm on-chip channel:       25 pJ/word
        64b word sent 10 mm across the chip:        250 pJ, 4 cycles
        64b word over an off-chip channel:          1 nJ/word
      Performance = Parallelism. Efficiency = Locality.
      Bill Dally’s 2009 DAC Keynote, The End of Denial Architecture.
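
     The imbalance is stark when expressed as ratios (my arithmetic, using
     the slide's figures):

         -- Energy figures from Dally's keynote, in picojoules.
         fpuOp, onChip1mm, acrossChip10mm, offChipWord :: Double
         fpuOp          = 50    -- one 64-bit floating-point operation
         onChip1mm      = 25    -- moving a 64-bit word 1 mm on chip
         acrossChip10mm = 250   -- moving a 64-bit word 10 mm across chip
         offChipWord    = 1000  -- moving a 64-bit word off chip

         main :: IO ()
         main = mapM_ print
           [ onChip1mm      / fpuOp   --  0.5: a 1 mm hop is half an FPU op
           , acrossChip10mm / fpuOp   --  5.0: a cross-chip word is 5 FPU ops
           , offChipWord    / fpuOp ] -- 20.0: an off-chip word is 20 FPU ops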

  12. Parallelism for Performance; Locality for Efficiency Dally: “Single-thread processors are in denial about these two facts” We need different programming paradigms and different architectures on which to run them.

  13. Dark Silicon

  14. Related Work

  15. System-level Synthesis Data Model: Xilinx’s Vivado (was xPilot, AutoESL)
      SSDM (System-level Synthesis Data Model):
      - Hierarchical netlist of concurrent processes and communication channels
      - Each leaf process contains a sequential program represented by an
        extended LLVM IR with hardware-specific semantics: port/IO interfaces,
        bit-vector manipulations, cycle-level notations
      Hardware-specific SSDM semantics:
      - Process port/interface semantics. FIFO: FifoRead() / FifoWrite();
        Buffer: BuffRead() / BuffWrite(); Memory: MemRead() / MemWrite()
      - Bit-vector manipulation: bit extraction / concatenation / insertion;
        bit-width attributes for every operation and every value
      - Cycle-level notation. Clock: waitClockEvent()
      SystemC input; classical high-level synthesis for processes.
      Jason Cong et al., ISARS 2005.
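
     As a loose analogy in Haskell (my sketch, not SSDM's API: producer,
     consumer, and the Chan plumbing are all hypothetical stand-ins), a
     netlist of leaf processes joined by FIFO channels looks like this:

         import Control.Concurrent (forkIO)
         import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
         import Control.Monad (replicateM)

         -- Two sequential leaf processes connected by one FIFO channel.
         producer :: Chan Int -> IO ()
         producer out = mapM_ (writeChan out) [1 .. 10]  -- like FifoWrite()

         consumer :: Chan Int -> IO ()
         consumer inp = do
           xs <- replicateM 10 (readChan inp)            -- like FifoRead()
           print (sum xs)                                -- prints 55

         main :: IO ()
         main = do
           fifo <- newChan
           _ <- forkIO (producer fifo)
           consumer fifo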

  16. Taylor and Swanson’s Conservation Cores
      [Figure: C-core generation. Basic blocks BB0-BB2 and the inter-BB CFG
      are compiled to stylized Verilog (a state machine plus a datapath of
      adders, multipliers, loads, and stores), then run through a CAD flow
      (Synopsys IC Compiler, place & route, clock-tree synthesis), yielding a
      c-core of 0.01 mm² in 45 nm TSMC that runs at 1.4 GHz.]
      Custom datapaths and controllers for loop kernels; uses the existing
      memory hierarchy.
      Swanson, Taylor, et al. Conservation Cores. ASPLOS 2010.

  17. Bacon et al.’s Liquid Metal
      [Fig. 2: Block-level diagram of DES and a Lime code snippet]
      JITting Lime (Java-like, side-effect-free, streaming) to FPGAs.
      Huang, Hormati, Bacon, and Rabbah, Liquid Metal, ECOOP 2008.
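
     Lime composes side-effect-free stages into stream pipelines. A loose
     Haskell analogue (my sketch, not Lime syntax; stage names are invented):

         -- Pure stages composed into a dataflow pipeline, the shape a
         -- streaming compiler can map onto FPGA hardware.
         stage1 :: [Int] -> [Int]
         stage1 = map (* 3)          -- a side-effect-free transform stage

         stage2 :: [Int] -> [Int]
         stage2 = filter even        -- another stage in the stream

         pipeline :: [Int] -> [Int]
         pipeline = stage2 . stage1  -- stages wired together, as Lime
                                     -- wires tasks into a pipeline

         main :: IO ()
         main = print (pipeline [1 .. 10])  -- [6,12,18,24,30]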

  18. Goldstein et al.’s Phoenix
      Example C program:

          int squares()
          {
            int i = 0, sum = 0;
            for (; i < 10; i++)
              sum += i * i;
            return sum;
          }

      [Figure 3: the C program and its representation comprising three
      hyperblocks, each shown as a numbered rectangle. Dotted lines represent
      predicate values; the figure omits the token edges used for memory
      synchronization.]
      [Figure 8: memory access network and implementation of the value and
      token forwarding network. A LOAD produces a data value consumed by
      another node; a STORE may depend on the LOAD via a token edge (dashed).
      Tokens travel to the root of the tree, a load-store queue (LSQ).]
      C to asynchronous logic, monolithic memory.
      Budiu, Venkataramani, Chelcea and Goldstein, Spatial Computation,
      ASPLOS 2004.
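
     For comparison with this talk's theme, the same kernel in Haskell is a
     one-liner:

         -- The squares() kernel from the Phoenix example, in Haskell.
         squares :: Int
         squares = sum [i * i | i <- [0 .. 9]]   -- 285

         main :: IO ()
         main = print squares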

  19. Ghica et al.’s Geometry of Synthesis
      [Figure 1: In-place map schematic and implementation, built from
      handshake components for commands (SEQ, WHILE, ASG), expressions, and
      variables.]
      Algol-like imperative language to handshake circuits.
      Ghica, Smith, and Singh. Geometry of Synthesis IV, ICFP 2011.
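
     The figure's running example is an in-place map. In Haskell terms (a
     sketch of the semantics, not Ghica's source language):

         import Control.Monad (forM_)
         import Data.Array.ST (newListArray, readArray, writeArray,
                               runSTUArray)
         import Data.Array.Unboxed (elems)

         -- In-place map over a mutable array: a WHILE-style loop that reads
         -- each cell, applies f, and assigns (ASG) the result back.
         inPlaceMap :: (Int -> Int) -> [Int] -> [Int]
         inPlaceMap f xs = elems (runSTUArray (do
           arr <- newListArray (0, length xs - 1) xs
           forM_ [0 .. length xs - 1] (\i -> do
             v <- readArray arr i
             writeArray arr i (f v))
           return arr))

         main :: IO ()
         main = print (inPlaceMap (+ 1) [1, 2, 3])  -- [2,3,4]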

  20. Greaves and Singh’s Kiwi

          public static void SendDeviceID() {
            int deviceID = 0x76;
            for (int i = 7; i > 0; i--) {
              scl = false;
              sda_out = (deviceID & 64) != 0;  // Set the i-th bit of the device ID
              Kiwi.Pause();
              scl = true;
              Kiwi.Pause();                    // Pulse SCL
              scl = false;
              deviceID = deviceID << 1;
              Kiwi.Pause();
            }
          }

      C# with a concurrency library, compiled to FPGAs; each Kiwi.Pause()
      marks a clock-cycle boundary.
      Greaves and Singh. Kiwi, FCCM 2008.

  21. Arvind, Hoe, et al.’s Bluespec
      GCD Mod Rule:  Gcd(a, b) if (a ≥ b) ∧ (b ≠ 0) → Gcd(a − b, b)
      GCD Flip Rule: Gcd(a, b) if a < b → Gcd(b, a)
      [Figure 1.3: Circuit for computing Gcd(a, b) from Example 1, built from
      the π (enable) and δ (next-state) logic of the Mod and Flip rules.]
      Guarded commands and functions to synchronous logic.
      Hoe and Arvind, Term Rewriting, VLSI 1999.
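
     The two rules translate directly into guarded equations; a Haskell
     rendering (mine, not Bluespec notation) of one rewrite step and its
     fixpoint:

         -- The two GCD rewrite rules as guarded equations. Repeatedly
         -- applying step until no rule fires leaves the GCD in a.
         step :: (Int, Int) -> (Int, Int)
         step (a, b)
           | a >= b && b /= 0 = (a - b, b)  -- Mod rule
           | a < b            = (b, a)      -- Flip rule
           | otherwise        = (a, b)      -- no rule enabled: b == 0

         gcd' :: Int -> Int -> Int
         gcd' a b = fst (until (\s -> step s == s) step (a, b))

         main :: IO ()
         main = print (gcd' 12 18)  -- 6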
