CS502: Compiler Design Code Optimization Manas Thakur Fall 2020 - PowerPoint PPT Presentation



  1. CS502: Compiler Design Code Optimization Manas Thakur Fall 2020

  2. Fast. Faster. Fastest?
     [Diagram: the compiler pipeline. Front end: character stream -> Lexical Analyzer -> token stream -> Syntax Analyzer -> syntax tree -> Semantic Analyzer -> syntax tree -> Intermediate Code Generator -> intermediate representation. Back end: intermediate representation -> Machine-Independent Code Optimizer -> intermediate representation -> Code Generator -> target machine code -> Machine-Dependent Code Optimizer -> target machine code. The Symbol Table is shared by all phases.]
     Manas Thakur CS502: Compiler Design 2

  3. Role of Code Optimizer
     ● Make the program better: time, memory, energy, ...
     ● No guarantees in this land!
       – Will a particular optimization improve something for sure?
       – Will performing an optimization affect something else?
       – In what order should I perform the optimizations?
       – At what “scope” should I perform a certain optimization?
       – Is the optimizer fast enough?
     ● Can an optimized program be optimized further?

  4. Full employment theorem for compiler writers
     ● Statement: There is no fully optimizing compiler.
     ● Assume it exists:
       – such that it transforms any program P into the smallest program Opt(P) that has the same behaviour as P.
       – The halting problem comes to the rescue:
         ● The smallest program that never halts is: L1: goto L1
       – Thus, a fully optimizing compiler could solve the halting problem, by checking whether a given program optimizes to L1: goto L1!
       – But the halting problem is undecidable.
       – Hence, a fully optimizing compiler cannot exist!
     ● Therefore we talk just about an optimizing compiler,
       – and keep working without worrying about future prospects!

  5. How to perform optimizations?
     ● Analysis:
       – Go over the program
       – Identify some (potentially useful) properties
     ● Transformation:
       – Use the information computed by the analysis to transform the program,
         ● without affecting the semantics.
     ● An example that we have (not literally) seen:
       – Compute liveness information
       – Delete assignments to variables that are dead

  6. Classifying optimizations
     ● Based on scope:
       – Local to basic blocks
       – Intraprocedural
       – Interprocedural
     ● Based on positioning:
       – High-level (transform source code or high-level IR)
       – Low-level (transform mid/low-level IR)
     ● Based on (in)dependence w.r.t. target machine:
       – Machine independent (general enough)
       – Machine dependent (specific to the architecture)

  7. May versus Must information
     ● Consider the program:
         if (c) {
           a = ...
           b = ...
         } else {
           a = ...
           c = ...
         }
     ● Which variables may be assigned? a, b, c
     ● Which variables must be assigned? a
     ● May analysis:
       – the computed information may hold in at least one execution of the program.
     ● Must analysis:
       – the computed information must hold every time the program is executed.
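The union/intersection reading of may versus must can be sketched in a few lines of Python. This is only an illustration of the idea on the slide: each branch is modelled as the set of variables it assigns, which is my own encoding, not the course's notation.

```python
# Sketch: may- vs must-assigned variables for the if/else on this slide.
then_branch = {"a", "b"}   # a = ...; b = ...
else_branch = {"a", "c"}   # a = ...; c = ...

# May analysis: holds in at least one execution -> union over the branches.
may_assigned = then_branch | else_branch
# Must analysis: holds in every execution -> intersection over the branches.
must_assigned = then_branch & else_branch

print(sorted(may_assigned))   # ['a', 'b', 'c']
print(sorted(must_assigned))  # ['a']
```

The same union-vs-intersection pattern reappears later in the dataflow equations: may-style facts merge with ∪, must-style facts with ∩.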

  8. Many many optimizations
     ● Constant folding, constant propagation, tail-call elimination, redundancy elimination, dead code elimination, loop-invariant code motion, loop splitting, loop fusion, strength reduction, array scalarization, inlining, synchronization elision, cloning, data prefetching, parallelization, ... etc.
     ● How do they interact?
       – Optimist: we get the sum of all improvements.
       – Realist: many are in direct opposition.
     ● Let us study some of them!

  9. Constant propagation
     ● Idea:
       – If the value of a variable is known to be a constant at compile time, replace the use of the variable with the constant.
         Before:                     After:
         n = 10;                     n = 10;
         c = 2;                      c = 2;
         for (i=0; i<n; ++i)         for (i=0; i<10; ++i)
           s = s + i * c;              s = s + i * 2;
       – Usually a very helpful optimization
       – e.g., Can we now unroll the loop?
     ● Why is it good?
     ● Why could it be bad?
       – When can we eliminate n and c themselves?
     ● Now you know how well different optimizations might interact!
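A minimal sketch of the idea, assuming a toy IR where each statement is a copy (target, operand). The IR format and the helper name are mine; a real compiler would run this over a CFG with a dataflow analysis, while this sketch handles only straight-line code.

```python
# Constant propagation over straight-line code in a toy IR: each statement
# is (target, operand), where operand is an int literal or a variable name.
def propagate_constants(stmts):
    env = {}   # variables currently known to hold a constant
    out = []
    for target, operand in stmts:
        if operand in env:              # use of a known-constant variable:
            operand = env[operand]      # replace it with the constant
        if isinstance(operand, int):
            env[target] = operand       # target now holds a known constant
        else:
            env.pop(target, None)       # unknown value kills the old fact
        out.append((target, operand))
    return out

prog = [("n", 10), ("c", 2), ("x", "n"), ("y", "c")]
print(propagate_constants(prog))  # [('n', 10), ('c', 2), ('x', 10), ('y', 2)]
```

Note how a redefinition of a variable "kills" the constant fact: this kill behaviour is exactly what the dataflow formulation later in the lecture makes precise.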

  10. Constant folding
     ● Idea:
       – If the operands are known at compile time, evaluate the expression at compile time.
         r = 3.141 * 10;    becomes    r = 31.41;
       – What if the code was?
         PI = 3.141;
         r = PI * 10;
         ● Constant propagation makes it r = 3.141 * 10, and then constant folding applies.
       – And what now?
         PI = 3.141;
         r = PI * 10;
         d = 2 * r;
         ● Repeatedly applying propagation and folding like this is called partial evaluation.
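The propagate-then-fold interplay that the slide calls partial evaluation can be sketched on a toy three-address IR. The encoding (target, lhs, op, rhs) and the function name are my own, for illustration only.

```python
# Constant folding + propagation ("partial evaluation") on a toy IR.
# Statements are (target, lhs, op, rhs); operands are numbers or variables.
import operator
OPS = {"+": operator.add, "*": operator.mul}

def fold(stmts):
    env, out = {}, []
    for target, lhs, op, rhs in stmts:
        lhs = env.get(lhs, lhs)   # constant propagation on each operand
        rhs = env.get(rhs, rhs)
        if isinstance(lhs, (int, float)) and isinstance(rhs, (int, float)):
            env[target] = OPS[op](lhs, rhs)   # both constant: fold now
            out.append((target, env[target]))
        else:
            env.pop(target, None)
            out.append((target, lhs, op, rhs))
    return out

# PI = 3.141 + 0; r = PI * 10; d = 2 * r  -- everything folds away
prog = [("PI", 3.141, "+", 0), ("r", "PI", "*", 10), ("d", 2, "*", "r")]
print(fold(prog))
```

Each folded statement feeds the environment used by the next one, which is why the whole chain PI, r, d collapses to constants.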

  11. Common sub-expression elimination
     ● Idea:
       – If the program computes the same value multiple times, reuse the value.
         Before:           After:
         a = b + c;        t = b + c;
         c = b + c;        a = t;
         d = b + c;        c = t;
                           d = b + c;
       – Subexpressions can be reused until their operands are redefined.
         ● (d = b + c cannot reuse t, since c has been redefined.)
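A sketch of local CSE for a single basic block, under the same toy IR assumptions as before (my encoding, not the course's). Where the slide introduces a temporary t, this sketch simply copies from the variable that already holds the value; the effect is the same.

```python
# Local CSE for one basic block. Statements are (target, lhs, op, rhs);
# when an identical expression is still available, the statement becomes
# a copy (target, source) instead. Commutativity is deliberately ignored.
def cse(stmts):
    avail = {}   # (lhs, op, rhs) -> variable currently holding that value
    out = []
    for target, lhs, op, rhs in stmts:
        key = (lhs, op, rhs)
        if key in avail:
            out.append((target, avail[key]))     # reuse the earlier value
        else:
            out.append((target, lhs, op, rhs))
        # redefining `target` kills every expression that reads it, and
        # any cached result that was stored in it
        avail = {k: v for k, v in avail.items()
                 if target not in k and v != target}
        if target not in key:
            avail[key] = target                  # value is now available
    return out

# a = b + c; c = b + c; d = b + c  -- d cannot reuse: c was redefined
prog = [("a", "b", "+", "c"), ("c", "b", "+", "c"), ("d", "b", "+", "c")]
print(cse(prog))
```

The kill step mirrors the slide's caveat: an available expression survives only until one of its operands is redefined.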

  12. Copy propagation
     ● Idea:
       – After an assignment x = y, replace the uses of x with y.
         Before:               After:
         x = y;                x = y;
         if (x > 1)            if (y > 1)
           s = x + f(x);         s = y + f(y);
       – Can only apply up to another assignment to x, or ... another assignment to y!
       – What if there was an assignment y = z earlier?
         ● Apply transitively to all assignments.
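The "up to another assignment" and "apply transitively" rules can be made concrete with a small sketch. The two-form IR below, ("copy", x, y) for x = y and ("op", x, uses) for any other assignment to x, is an assumption of mine for illustration.

```python
# Copy propagation over one basic block.
def propagate_copies(stmts):
    copies = {}   # x -> y for copies x = y that are still valid here
    out = []
    for stmt in stmts:
        if stmt[0] == "copy":
            _, tgt, src = stmt
            src = copies.get(src, src)   # transitive: handles an earlier y = z
            out.append(("copy", tgt, src))
        else:
            _, tgt, uses = stmt
            out.append(("op", tgt, tuple(copies.get(u, u) for u in uses)))
        # an assignment to tgt invalidates copies into or out of tgt
        copies = {x: y for x, y in copies.items() if tgt not in (x, y)}
        if stmt[0] == "copy" and tgt != src:
            copies[tgt] = src
    return out

# x = y; s = x + f(x)  -- the uses of x become y
prog = [("copy", "x", "y"), ("op", "s", ("x", "x"))]
print(propagate_copies(prog))
```

The kill line is the "up to another assignment" rule: writing to either side of a recorded copy invalidates it.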

  13. Dead-code elimination
     ● Idea:
       – If the result of a computation is never used, remove the computation.
         Before:           After:
         x = y + 1;        y = 1;
         y = 1;            x = 2 * z;
         x = 2 * z;
       – Remove code that assigns to dead variables.
         ● A liveness analysis done beforehand would help!
       – This may, in turn, create more dead code.
         ● Dead-code elimination usually works transitively.
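Using liveness for DCE, as the slide suggests, can be sketched with a single backward walk over a basic block. The (target, uses) statement encoding and the live_out parameter are my own illustrative assumptions.

```python
# Dead-code elimination via a backward liveness walk over one block.
# Statements are (target, uses); live_out is the set of variables still
# needed after the block ends.
def eliminate_dead(stmts, live_out):
    live = set(live_out)
    kept = []
    for target, uses in reversed(stmts):
        if target in live:          # result is needed later: keep it
            kept.append((target, uses))
            live.discard(target)    # this statement defines `target`...
            live.update(uses)       # ...and needs its operands
        # else: assignment to a dead variable, drop it; transitively this
        # can make earlier statements dead too, as the slide notes
    kept.reverse()
    return kept

# x = y + 1; y = 1; x = 2 * z  with x and y live at the end:
prog = [("x", {"y"}), ("y", set()), ("x", {"z"})]
print(eliminate_dead(prog, {"x", "y"}))  # the first x = y + 1 is dead
```

Walking backwards is what makes the transitivity cheap: by the time we reach an early statement, we already know whether anything downstream kept its result alive.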

  14. Unreachable-code elimination
     ● Idea:
       – Eliminate code that can never be executed.
         #define DEBUG 0
         if (DEBUG)
           print("Current value = ", v);
     ● High-level: look for if (false) or while (false),
       – perhaps after constant folding!
     ● Low-level: more difficult,
       – code is just labels and gotos;
       – traverse the CFG, marking reachable blocks.
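The low-level "traverse the CFG, marking reachable blocks" step is a plain graph traversal. A minimal sketch, with block names and the CFG shape invented for illustration:

```python
# Unreachable-code elimination, low-level flavour: traverse the CFG from
# the entry, mark what is reachable, and drop the rest.
def reachable_blocks(succ, entry):
    seen, stack = set(), [entry]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(succ.get(n, []))
    return seen

# B3 has no path from entry (e.g. guarded by if (false) after folding)
cfg = {"entry": ["B1"], "B1": ["B2"], "B2": [], "B3": ["B2"]}
live = reachable_blocks(cfg, "entry")
dead = set(cfg) - live
print(sorted(live), sorted(dead))  # B3 is unreachable
```

Anything not in the returned set can be deleted without changing program behaviour, since control can never reach it.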

  15. Next class
     ● Next class:
       – How to perform the optimizations that we have seen, using a dataflow analysis?
     ● Starting with:
       – The back-end full form of CFG!
     ● Approximately only 10 more classes left.
       – Hope this course is being successful in making (y)our hectic days a bit more exciting :-)

  16. CS502: Compiler Design Code Optimization (Cont.) Manas Thakur Fall 2020

  17. Recall A2
         int a;
         if (*) {
           a = 10;
         } else {
           // something that doesn’t touch ‘a’
         }
         x = a;
     ● Is ‘a’ initialized in this program?
       – Reality at run time: depends.
       – What to tell at compile time?
         ● Is this a ‘must’ question or a ‘may’ question?
         ● Correct answer: No
     ● How do we obtain such answers?
       – Need to model the control flow.

  18. Control-Flow Graph (CFG)
     ● Nodes represent instructions; edges represent flow of control.
         a = 0
         L1: b = a + 1
         c = c + b
         a = b * 2
         if a < N goto L1
         return c
     [Diagram: the corresponding CFG. Nodes: a = 0 -> b = a + 1 -> c = c + b -> a = b * 2 -> a < N; the true edge of a < N goes back to b = a + 1, the false edge goes to return c.]

  19. Some CFG terminology
     [Diagram: the same CFG with numbered nodes: 1: a = 0, 2: b = a + 1, 3: c = c + b, 4: a = b * 2, 5: a < N, 6: return c.]
     ● pred[n] gives the predecessors of n
       – pred[1]? pred[4]? pred[2]?
     ● succ[n] gives the successors of n
       – succ[2]? succ[5]?
     ● def(n) gives the variables defined by n
       – def(3) = {c}
     ● use(n) gives the variables used by n
       – use(3) = {b, c}
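The six-node CFG and its pred/succ/def/use sets can be written down directly, which also answers the questions on this slide. A sketch (the dictionary encoding is mine):

```python
# The six-node CFG from the slide, as successor lists, with def and use.
succ = {1: [2], 2: [3], 3: [4], 4: [5], 5: [2, 6], 6: []}
defs = {1: {"a"}, 2: {"b"}, 3: {"c"}, 4: {"a"}, 5: set(), 6: set()}
use  = {1: set(), 2: {"a"}, 3: {"b", "c"}, 4: {"b"}, 5: {"a"}, 6: {"c"}}

# pred is just succ with every edge reversed
pred = {n: [] for n in succ}
for n, targets in succ.items():
    for t in targets:
        pred[t].append(n)

print(pred[1], pred[4], pred[2])   # [] [3] [1, 5]
print(succ[2], succ[5])            # [3] [2, 6]
print(sorted(defs[3]), sorted(use[3]))
```

Note that pred[2] has two entries because of the loop's back edge 5 -> 2.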

  20. Live ranges revisited
     ● A variable is live if its current value may be used in the future.
       – Insight: work from the future to the past, i.e., backward over the CFG.
     ● Live ranges (on the numbered CFG):
       – a: {1->2, 4->5->2}
       – b: {2->3, 3->4}
       – c: all edges except 1->2

  21. Liveness
     ● A variable v is live on an edge if there is a directed path from that edge to a use of v that does not go through any def of v.
     ● A variable is live-in at a node if it is live on any of the in-edges of that node.
     ● A variable is live-out at a node if it is live on any of the out-edges of that node.
     ● Verify:
       – a: {1->2, 4->5->2}
       – b: {2->4}

  22. Computation of liveness
     ● Say live-in of n is in[n], and live-out of n is out[n].
     ● We can compute in[n] and out[n] for any n as follows:
         in[n] = use[n] ∪ (out[n] – def[n])
         out[n] = ∪_{s ∈ succ[n]} in[s]
     ● The right-hand sides are called flow functions; the equations are called dataflow equations.

  23. Liveness as an iterative dataflow analysis (IDFA)
         for each n:
           in[n] = {}; out[n] = {}                // initialize
         repeat
           for each n:
             in’[n] = in[n]; out’[n] = out[n]     // save previous values
             in[n] = use[n] ∪ (out[n] – def[n])   // compute new values
             out[n] = ∪_{s ∈ succ[n]} in[s]
         until in’[n] == in[n] and out’[n] == out[n] ∀ n
     ● Repeat till fixed point.
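The iterative algorithm on this slide can be run directly on the six-node CFG from the earlier slides (1: a = 0, 2: b = a + 1, 3: c = c + b, 4: a = b * 2, 5: a < N, 6: return c). A sketch; the dictionary encoding is mine:

```python
# Iterative liveness analysis to a fixed point on the six-node CFG.
succ = {1: [2], 2: [3], 3: [4], 4: [5], 5: [2, 6], 6: []}
defs = {1: {"a"}, 2: {"b"}, 3: {"c"}, 4: {"a"}, 5: set(), 6: set()}
use  = {1: set(), 2: {"a"}, 3: {"b", "c"}, 4: {"b"}, 5: {"a"}, 6: {"c"}}

def liveness(succ, defs, use):
    live_in  = {n: set() for n in succ}
    live_out = {n: set() for n in succ}
    changed = True
    while changed:                    # repeat till fixed point
        changed = False
        for n in succ:
            # out[n] = union of in[s] over successors s of n
            new_out = set().union(*(live_in[s] for s in succ[n]))
            # in[n] = use[n] ∪ (out[n] – def[n])
            new_in = use[n] | (new_out - defs[n])
            if new_in != live_in[n] or new_out != live_out[n]:
                live_in[n], live_out[n] = new_in, new_out
                changed = True
    return live_in, live_out

live_in, live_out = liveness(succ, defs, use)
print(sorted(live_in[2]))   # ['a', 'c']: a and c are live entering node 2
```

Note that c is live-in even at node 1 (it is used in c = c + b before any def of c), matching the earlier slide's "c: all edges except 1->2".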

  24. Liveness analysis example
     [Worked example: apply the equations to the six-node CFG (1: a = 0, 2: b = a + 1, 3: c = c + b, 4: a = b * 2, 5: a < N, 6: return c) until a fixed point is reached.]
         in[n] = use[n] ∪ (out[n] – def[n])
         out[n] = ∪_{s ∈ succ[n]} in[s]

  25. In backward order
     [The same six-node CFG, with the nodes processed in backward order.]
     ● Fixed point reached in only 3 iterations!
     ● Thus, the order of processing statements is important for efficiency.
         in[n] = use[n] ∪ (out[n] – def[n])
         out[n] = ∪_{s ∈ succ[n]} in[s]
