  1. Constructible sheaves and their cohomology for asynchronous logic and computation
     14 January 2010
     Michael Robinson

  2. Acknowledgements
     - This is a preliminary report on progress in a larger project on applied sheaf theory; more substantial results are to come!
     - It's joint work with Robert Ghrist (Penn) and Yasu Hiraoka (Hiroshima).
     - The focus here is on logic, but this is part of the AFOSR MURI on Information Dynamics in Networks (PI: Rob Calderbank, Princeton).

  3. Logic gates
     [figure: gate symbols for OR, AND, NAND, NOR and NOT; a "bubble" indicates negation. A NOT gate is shown with input 1 and output 0.]

  4. Logic gates
     A change occurs...
     [figure: the same gates; the NOT gate's input has changed to 0, but its output is still 0.]

  5. Logic gates
     Propagation delay varies from device to device... eventually the change reaches the output.
     [figure: after the delay, the NOT gate's output changes to 1.]
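To make the gate behaviour above concrete, here is a minimal Python sketch (the function names and the discrete-time encoding are mine, not from the talk) of the five gates, plus a NOT gate whose output only changes one propagation delay after its input does.

```python
# Ideal (zero-delay) gate primitives on 0/1 values.
def NOT(a):     return 1 - a
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NAND(a, b): return 1 - (a & b)
def NOR(a, b):  return 1 - (a | b)

def delayed_not(input_trace, delay):
    """Samples of a NOT gate's output, appearing `delay` steps late.

    Before the first delayed value arrives, the gate holds the inverse of the
    initial input (its steady state before the change).
    """
    steady = NOT(input_trace[0])
    shifted = input_trace[:len(input_trace) - delay]
    return [steady] * delay + [NOT(x) for x in shifted]

# Input flips from 0 to 1 at t=2; the output flips from 1 to 0 only at t=2+delay.
trace = [0, 0, 1, 1, 1, 1]
print(delayed_not(trace, delay=2))   # -> [1, 1, 1, 1, 0, 0]
```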

  6. Problem: time-bound logic
     - Propagation delays along connections and within gates!
     - Feedback – can hold state
     - Race conditions:
       - Hazards
       - Glitches
       - Oscillations
       - Lock-ups

  7. Example of time-bound logic
     This is an E flip-flop circuit, a basic memory element. It's initially storing the value 0.
     [figure: circuit with Data and Enable inputs and an Output; internal wire values shown.]

  8. Example of time-bound logic
     If we change the Data input to 1, nothing exciting happens...
     [figure: same circuit, Data now 1.]

  9. Example of time-bound logic
     Pulsing the Enable input to 1 causes the Data input to be "read" and "stored"...
     [figure: same circuit, Enable pulsed to 1.]

  10. Example of time-bound logic
      ... but it takes time... (t = 1)
      [figure: internal wire values beginning to change.]

  11. Example of time-bound logic
      ... but it takes time... (t = 2). The circuit can be de-enabled at this time.
      [figure: the change continues to propagate.]

  12. Example of time-bound logic
      ... but it takes time... (t = 3)
      [figure: the change continues to propagate.]

  13. Example of time-bound logic
      ... and the circuit will hold the new value! Data is now ignored.
      [figure: the latch now holds the new value regardless of Data.]
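As a companion to the circuit walk-through above, here is a small behavioural sketch in Python (my own simplification, not code from the talk; the actual circuit is gate-level and takes several propagation delays): while Enable is high the stored value follows Data, and while Enable is low Data is ignored.

```python
# Behavioural model of an enable-gated memory element (assumption: one value
# stored, updated instantaneously whenever Enable is high).

def latch_step(stored, data, enable):
    """Return the new stored value after one step."""
    return data if enable else stored

# Replay the slide sequence: initially storing 0, Data goes to 1, Enable pulses.
stored = 0
for data, enable in [(0, 0), (1, 0), (1, 1), (1, 0), (0, 0)]:
    stored = latch_step(stored, data, enable)
    print(f"Data={data} Enable={enable} -> stored {stored}")
# The final lines show the latch holding 1 even after Data returns to 0.
```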

  14. Synchronous design
      - Can avoid race conditions by polling only after transients have died out
      - Unavoidable limitation: the whole system is limited by its slowest circuit
      - Synchronous solution: circuits poll their inputs only at specific points in time, set by a global clock
      - But...
        - The biggest single drain of power in modern CPUs is the clock
        - Clock distribution and skew are a major problem
        - Correcting clock skew requires additional circuitry and power

  15.–21. Example logic timeline (synchronous)
      [figure sequence, one slide per step: a memory holding Var 1 = A and Var 2 = B, a BUS, a computation unit and an Output, all driven by a global Clock. Successive slides show A read from memory, then B read, then A+B computed and written to the Output, with each step waiting for the next clock period.]

  22.–30. Example logic timeline (asynchronous)
      [figure sequence: the same memory/BUS/computation/Output system, but with no clock. Transfers are coordinated by handshake signals labelled Mem TX, CPU Ack, CPU TX and Done. Successive slides show A and then B read from memory, A+B computed, and the result written to the Output as soon as each handshake completes.]
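The slides label the handshake signals Mem TX, CPU Ack, CPU TX and Done but do not spell out the protocol, so the following Python sketch is only an assumption about how such a request/acknowledge exchange could be organised; every name in it is illustrative.

```python
# Hypothetical asynchronous transfer, modelled as a stream of handshake events.

def memory_read(var):
    data = {"Var 1": "A", "Var 2": "B"}[var]   # hypothetical memory contents
    yield ("Mem TX", data)                     # memory asserts "data valid"
    yield ("CPU Ack", None)                    # CPU acknowledges receipt

def compute_and_write(a, b):
    result = f"{a}+{b}"                        # symbolic A+B, as in the timeline
    yield ("CPU TX", result)                   # CPU asserts "result valid"
    yield ("Done", None)                       # bus/output acknowledges

# No clock appears anywhere: each step begins as soon as the previous
# acknowledgement arrives, however long that previous step actually took.
for event in [*memory_read("Var 1"), *memory_read("Var 2"),
              *compute_and_write("A", "B")]:
    print(event)
```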

  31. Asynchronous design
      - Typical of older bus architectures and of networks
      - Potential for significant power savings, space on die, and speed in certain areas
      - Potential for better distribution of computation
      - Design elegance: fewer transistors needed, less to break
      - Network communication becomes more natural, especially when latency is highly variable

  32. Problems!
      - Asynchronous circuits are hard to design!
      - If you mistake a transient for the "final answer" of a circuit, you're faced with:
        - Hazards (uncertainties in output value)
        - Glitches (very short pulses, which might confuse the underlying electronic technology)
        - Lock-ups (finite state machines getting stuck in a state they cannot exit)
      - Generally, all are the result of race conditions

  33. Example of a glitch
      A race condition between A and B causes a glitch! The glitch is one propagation delay wide.
      [figure: input signals A and B and the resulting output signal C, with a brief spurious pulse on C.]
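The exact gate arrangement on this slide is not recoverable from the text, so the following sketch reproduces the standard version of the phenomenon it describes: C = A AND B, where B is an inverted copy of A arriving one propagation delay late, produces a spurious pulse exactly one delay wide when A rises.

```python
# Race-condition glitch: B should always be NOT(A), but it lags by one step.
A = [0, 0, 0, 1, 1, 1, 1, 1]
B = [1] + [1 - a for a in A[:-1]]     # inverted A, delayed by one sample
C = [a & b for a, b in zip(A, B)]     # AND gate (zero delay, for clarity)

print("A:", A)
print("B:", B)
print("C:", C)   # C: [0, 0, 0, 1, 0, 0, 0, 0]  <- one-sample glitch at the race
```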

  34. Limitations in current methods
      - Traditional asynchronous design requires either:
        - Very careful and exhaustive reasoning (time-dependent theorem provers, concurrency theory), or
        - Detailed high-fidelity simulation (at a sampling rate determined by the "GCD" of the propagation speeds)
      - Bookkeeping is difficult, but essential
      - Difficult to test in stages, especially when testing the response of circuitry to glitches
      - Exhaustive simulation is essentially impossible for large designs (e.g. CPUs)

  35. Sheaf theory in logic circuits
      - Provides some computational and conceptual tools
      - It's primarily a bookkeeping mechanism: building up local models (gates and wires) into global ones (computational units)
      - The primary tool for this local-to-global transition is called cohomology
      - Sheaf cohomology organizes the computations effectively, and extracts lots of information! (A small worked example follows this list.)
      - Hierarchical design can be examined by local sheaf cohomology and sheaf direct image functors
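As a hint of how cohomology extracts global information, here is a tiny Čech computation of my own (not one taken from the talk): the constant sheaf on a circle covered by two arcs whose overlap has two components. H^0 recovers the single global section, and H^1 detects the loop.

```python
import numpy as np

# Cech complex for the constant sheaf R on a circle covered by two arcs U and V
# whose overlap U ∩ V has two components W1 and W2.
# C^0 = values on (U, V);  C^1 = values on (W1, W2);  d(u, v) = (v - u, v - u).
d = np.array([[-1.0, 1.0],
              [-1.0, 1.0]])

rank = np.linalg.matrix_rank(d)
h0 = 2 - rank        # dim ker d = dim C^0 - rank d
h1 = 2 - rank        # dim C^1 - dim im d (no higher cochains, so all of C^1 are cocycles)
print("dim H^0 =", h0, " dim H^1 =", h1)   # 1 and 1: one global section, one loop
```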

  36. Past work
      A decidedly non-exhaustive list of some highlights:
      - Sheaves over categories of interacting objects: Bacławski, Goguen (1970s)
      - Concurrency & sheaf theory (not cohomological): Lilius (1993), van Glabbeek (2006)
      - Constructible sheaves: Rota, Schapira, MacPherson (1960s)
      - Quantum graphs (original motivating example): Gutkin, Smilansky (2001), Kuchment (2003)
      - Our focus is more strongly on cohomology

  37. Sheaves: definition
      A sheaf on a topological space X consists of:
      - A contravariant functor F from Open(X) to some subcategory of Set; this is a "sheaf of sets"
      - F(U), for U open, is called the space of sections over U
      - The inclusion map U ⊆ V is sent to a restriction map F(V) → F(U); usually it is the restriction of functions
      - Given a point p ∈ X, the direct limit of F(U) over all open U satisfying p ∈ U is called the stalk at p; it's a generalization of the germ of a smooth function
      - And a gluing rule... (a toy encoding follows this list)
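To fix ideas, here is a toy encoding of a presheaf in Python (the space, the value set and all names are my own choices, not the talk's): sections are functions on an open set, restriction is literal restriction of functions, and the stalk at a point can be read off from the smallest open set containing it.

```python
from itertools import product

# A presheaf of sets on the two-point space X = {p, q}; its open sets are below.
opens = [frozenset(), frozenset({"p"}), frozenset({"q"}), frozenset({"p", "q"})]

def sections(U):
    """F(U): all functions U -> {0, 1}, the 'space of sections over U'."""
    return [dict(zip(sorted(U), bits)) for bits in product([0, 1], repeat=len(U))]

def restrict(section, V):
    """The restriction map F(U) -> F(V) for V ⊆ U: restriction of functions."""
    return {x: section[x] for x in V}

# The stalk at p is the direct limit of F(U) over opens containing p; here the
# smallest open containing p is {p}, so the stalk is just F({p}).
print(sections(frozenset({"p"})))                     # [{'p': 0}, {'p': 1}]
print(restrict({"p": 1, "q": 0}, frozenset({"p"})))   # {'p': 1}
```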

  38. Sheaves: gluing
      The gluing rule: if U and V are open sets, then two sections defined on U and V that agree on U ∩ V come from a unique section defined on U ∪ V.

  39.–41. Sheaves: gluing
      [figure, built up over three slides: a base topological space X, open sets U and V inside it, and the spaces of sections F(U) and F(V) that F assigns to them.]
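Continuing the toy encoding above (again an assumption, not code from the talk), the gluing rule becomes a concrete check: two sections glue exactly when they agree on the overlap, and the glued section on U ∪ V is unique.

```python
def glue(s_U, s_V, U, V):
    """Glue sections s_U ∈ F(U) and s_V ∈ F(V) if they agree on U ∩ V."""
    overlap = U & V
    if any(s_U[x] != s_V[x] for x in overlap):
        return None                   # they disagree on U ∩ V: no glued section
    return {**s_U, **s_V}             # the unique section on U ∪ V

U, V = frozenset({"p"}), frozenset({"q"})
print(glue({"p": 1}, {"q": 0}, U, V))   # {'p': 1, 'q': 0} -- a section over all of X
```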
