Compiler Construction — Mayer Goldberg, Ben-Gurion University


  1. Compiler Construction
Mayer Goldberg, Ben-Gurion University
Wednesday 22nd January, 2020

  2. Chapter 10 Goals
☞ Asynchronous Computing
▶ Coroutines, Threads & processes
▶ Context switching & tail-position
▶ The two-thread architecture

  3. Asynchronous Computing
▶ α- — the prefix “not”, “un-”
▶ σῠν- — the prefix “with”, “together”
▶ χρόνος — the Greek god of time; time
▶ συνχρόνος — in time, in order, in phase, in step
▶ Synchronous computing means sequential computing
▶ Asynchronous computing means non-sequential computing, or computing in parallel

  4. Asynchronous Computing (continued)
▶ True asynchronous computing requires truly independent computing mechanisms that can operate concurrently: more than one ALU, core, processor, etc.
▶ Asynchronous computing is often simulated or augmented through interleaving computation

  5. Asynchronous Computing (continued)
How to interleave computation
▶ We begin with several, independent computations: COMPUTATION A, COMPUTATION B, COMPUTATION C

  6. Asynchronous Computing (continued)
How to interleave computation (cont)
▶ Each computation is split into small, sequential chunks:
A1 A2 A3 A4 A5 A6 A7 A8 A9 A10
B1 B2 B3 B4 B5 B6 B7 B8 B9 B10
C1 C2 C3 C4 C5 C6 C7 C8 C9 C10

  7. Asynchronous Computing (continued)
How to interleave computation (cont)
▶ The chunks of the different computations are interleaved, and performed in sequence: A1 B1 C1 A2 B2 C2 A3 B3 C3 ⋯, creating an illusion of true, parallel computation
☞ Often interleaved and true asynchronous computing are combined, to create a user-experience of greater asynchronous capabilities than the hardware can offer
☞ The transition between chunks of different computations is known as task-switching, or context-switching
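The chunking-and-interleaving scheme above can be sketched in a few lines of Python (an illustration, not part of the deck's Scheme material): each generator yields once per chunk, and a round-robin loop performs the chunks in sequence.

```python
# Sketch: interleave three computations by splitting each into chunks.
# Every step between yields is one small, sequential chunk.
def computation(name, chunks):
    for i in range(1, chunks + 1):
        yield f"{name}{i}"          # one chunk of this computation

tasks = [computation(c, 3) for c in "ABC"]
schedule = []
while tasks:                        # round-robin over the live tasks
    for task in list(tasks):
        try:
            schedule.append(next(task))
        except StopIteration:
            tasks.remove(task)

# schedule is now A1 B1 C1 A2 B2 C2 A3 B3 C3
```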

  8. Task Switching
There are different ways to perform task-switching:
① Coöperative multitasking
▶ Under coöperative multitasking, each task must relinquish control voluntarily and explicitly, by means of specific operators
▶ If it fails to do so, either by design or because of a bug, the task will continue indefinitely, often hanging the system, or until it terminates
▶ Coöperative multitasking was used in Mac OS, the operating system used on Macintosh computers before the advent of OSX

  9. Task Switching
There are different ways to perform task-switching:
② Pre-emptive multitasking
▶ Special hardware (e.g., a timer-interrupt facility) is used to wrest control from the current task and pass it onto another
▶ No coöperation is required of the current task
▶ Generally, this kind of task-switching cannot be prevented
▶ This is how task-switching is implemented on modern operating systems running on modern hardware
▶ The opposite of pre-emptive multitasking is non-pre-emptive multitasking, also known as coöperative multitasking…

  10. Task Switching
There are different ways to perform task-switching:
③ Multitasking through instrumentation
▶ Instrumentation means that the code is processed/transformed/manipulated by the compiler, prior to execution, so as to relinquish control explicitly
▶ As with preemption, instrumented code cannot affect or prevent task-switching
☞ The technique we shall present in this chapter is a form of task-switching instrumentation:
▶ We show how to do it manually
▶ Since the transformation is algorithmic, it can be done automatically

  11. Chapter 10 Goals
🗹 Asynchronous Computing
☞ Coroutines, Threads & processes
▶ Context switching & tail-position
▶ The two-thread architecture

  12. Asynchronous Tasks
Asynchronous tasks are distinguished by the kinds of information shared among different tasks:
▶ Coroutines: All coroutines share the same stack and the same heap
▶ Threads: Each thread has its own stack, but all threads share the same heap
▶ Processes: Each process has its own stack & heap
☞ In this presentation, we focus on threads

  13. Chapter 10 Goals
🗹 Asynchronous Computing
🗹 Coroutines, Threads & processes
☞ Context switching & tail-position
▶ The two-thread architecture

  14. Context-Switching in Threads & Tail-Position
▶ Each thread may consist of procedures calling each other, and stacking up activation frames
▶ Since each thread comes with its own stack, the activation frames of one thread never intermix with those of other threads
▶ Rather than implement several stacks, we require that all non-builtin calls (i.e., calls to user-defined procedures) be in tail-position, so they either do not require the stack, or they use continuations to model the stack: we use the CPS-transformation to convert all user-defined code into tail-position
▶ By applying the CPS-transformation, the thread-specific stack is implemented using a continuation
▶ Because of lexical scope, one thread cannot access the continuations (i.e., stack) of another thread, which is precisely how we know that we’ve implemented threads rather than coroutines
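As a quick illustration of the CPS idea (a Python sketch with a hypothetical fact_cps, not from the slides): every piece of pending work moves into the continuation k, so the recursive call ends up in tail position and k plays the role of the thread's private stack. Note that Python, unlike the Scheme target, does not eliminate tail calls, so this sketch still consumes Python stack.

```python
# Sketch: factorial in continuation-passing style.
# The pending multiplication "n * _" is folded into the continuation k,
# leaving the recursive call in tail position.
def fact_cps(n, k):
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

result = fact_cps(5, lambda r: r)   # identity continuation
```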

  15. Chapter 10 Goals
🗹 Asynchronous Computing
🗹 Coroutines, Threads & processes
🗹 Context switching & tail-position
☞ The two-thread architecture
▶ Instrumenting code for the 2-thread architecture
▶ Racing & termination
▶ Detecting circularity
▶ Prioritization
▶ Threads & Types in Ocaml

  16. The two-thread architecture
▶ We present a very simple, 2-thread architecture, that we are tempted to call thread-passing style (TPS)
▶ The name “2-thread architecture” refers to the API, and is not meant to imply that it is limited to two threads: in fact, any number of threads can be interleaved
▶ A thread is implemented as a procedure that takes a single argument, and invokes it after some simple, atomic operation
▶ Each procedure & continuation takes a thread as an additional argument, the thread t: the thread performs some simple, atomic operation, after which the thread t is applied (in tail-position) to another thread that continues the original computation
▶ The thread variable t is always used as a parameter
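A minimal Python rendering of thread-passing style might look as follows (names and behaviours are assumptions for illustration): each thread-procedure performs one atomic operation and then applies t, in tail position, to the thread continuing the work.

```python
# Minimal sketch of thread-passing style.
# A thread is a procedure of one argument t; after one atomic operation
# it applies t to the thread that continues the computation.
trace = []

def step1(t):
    trace.append("step1")          # the simple, atomic operation
    t(step2)                       # pass control via t (tail position)

def step2(t):
    trace.append("step2")
    t(lambda t: None)              # a thread that does nothing and stops

def run(thread):                   # trivial t: run the handed-over thread
    thread(run)

run(step1)
```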

  17. Chapter 10 Goals
🗹 Asynchronous Computing
🗹 Coroutines, Threads & processes
🗹 Context switching & tail-position
☞ The two-thread architecture
☞ Instrumenting code for the two-thread architecture
▶ Racing & termination
▶ Detecting circularity
▶ Prioritization
▶ Threads & Types in Ocaml

  18. Instrumenting code
We now present some simple examples of code that is instrumented to run as a thread:
▶ The code is just the instrumented code
▶ It does not come with any code to invoke it
▶ Later, after we are proficient in converting Scheme source code to instrumented code, we shall consider complete examples, i.e., schedule threads to run concurrently

  19. Instrumenting code (continued)
Example 1: (lambda (x) (f (g x)))
We start by converting the code to CPS:
(lambda (x k)
  (g$ x
      (lambda (rog)
        (f$ rog k))))

  20. Instrumenting code (continued)
Example 1: (lambda (x) (f (g x)))
We now instrument the code to run as a thread:
(lambda (x k t)
  (t (lambda (t)
       (g$ x
           (lambda (rog t)
             (t (lambda (t)
                  (f$ rog k t))))
           t))))
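To see the instrumented shape in action, here is a Python rendering of Example 1 with stand-in definitions for f$ and g$ and a trivial scheduler that simply runs each chunk it is handed (all names and behaviours here are assumptions, not from the slides):

```python
# Python rendering of the instrumented Example 1.
# Each procedure takes a continuation k and a scheduler t; control is
# handed to t before every user-defined call, exactly as in the slide.
def g_cps(x, k, t):                 # stand-in for g$: double the argument
    t(lambda t: k(x * 2, t))

def f_cps(x, k, t):                 # stand-in for f$: add one
    t(lambda t: k(x + 1, t))

def run_now(chunk):                 # trivial scheduler: run the chunk now
    chunk(run_now)

result = []
def example1(x, k, t):              # mirrors (lambda (x k t) ...)
    t(lambda t: g_cps(x,
                      lambda rog, t: t(lambda t: f_cps(rog, k, t)),
                      t))

example1(20, lambda v, t: result.append(v), run_now)
# result now holds f(g(20)) = 20*2 + 1
```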

  21. Instrumenting code (continued)
Example 2: (lambda (x y) (f (g x) (h y)))
▶ As before, we convert the code to CPS
▶ We choose, arbitrarily, to start with the application (g x)
(lambda (x y k)
  (g$ x
      (lambda (rog)
        (h$ y
            (lambda (roh)
              (f$ rog roh k))))))

  22. Instrumenting code (continued)
Example 2: (lambda (x y) (f (g x) (h y)))
We instrument the code to take and pass control onto another thread:
(lambda (x y k t)
  (t (lambda (t)
       (g$ x
           (lambda (rog t)
             (t (lambda (t)
                  (h$ y
                      (lambda (roh t)
                        (t (lambda (t)
                             (f$ rog roh k t))))
                      t))))
           t))))

  23. Instrumenting code (continued)
Example 3:
(lambda (x)
  (if (w? x)
      (f x)
      (g (h x))))
We start by converting the code to CPS:
(lambda (x k)
  (w?$ x
       (lambda (row?)
         (if row?
             (f$ x k)
             (h$ x
                 (lambda (roh)
                   (g$ roh k)))))))
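The examples above show only the instrumented code; as slide 18 notes, a complete example also needs a driver that schedules threads. A minimal Python sketch of such a round-robin driver (hypothetical names, not from the slides): here t enqueues the next chunk instead of running it, so chunks from the two threads are interleaved in FIFO order.

```python
# Hypothetical round-robin driver for thread-passing style.
# t relinquishes control by queueing the next chunk; the driver loop
# then runs chunks in FIFO order, interleaving the two threads.
from collections import deque

queue = deque()
log = []

def t(chunk):                       # relinquish control: queue the chunk
    queue.append(chunk)

def counter(name, n):               # a tiny TPS thread, one log per chunk
    def step(i=0):
        if i < n:
            log.append(f"{name}{i}")
            t(lambda: step(i + 1))  # yield after each atomic step
    return step

queue.append(counter("A", 3))
queue.append(counter("B", 3))
while queue:                        # the driver loop
    queue.popleft()()

# log interleaves the two counters: A0 B0 A1 B1 A2 B2
```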
