Compilers and computer architecture: Just-in-time compilation

Martin Berger, 1 December 2019
Email: M.F.Berger@sussex.ac.uk, Office hours: Wed 12-13 in Chi-2R312

Recall the function of compilers

Welcome to the cutting edge


If JIT compilers are the answer ... what is the problem?

But ... in practice, variables usually do not change their types in inner loops. Why? Because innermost loops typically work on big and uniform data structures (usually big arrays). So the compiler should move the type checks outside the loops.

Recall that in dynamically typed languages,

    for ( int i = 0; i < 1000000; i++ ) {
      for ( int j = 0; j < 1000000; j++ ) {
        a[i, j] = a[i, j] + 1
      }
    }

is really

    for ( int i = 0; i < 1000000; i++ ) {
      for ( int j = 0; j < 1000000; j++ ) {
        let ta = typeof( a[i, j] )    // always the same
        let t1 = typeof( 1 )          // always the same
        if ( ta == Int && t1 == Int ) {
          let va  = value( a[i, j] )
          let v1  = value( 1 )        // simplifying
          let res = integer_addition( va, v1 )
          a[i, j]_result_part = res
          a[i, j]_type_part   = Int
        } else {
          ...
        }
      }
    }

So the program from the last slide can become

    let ta = typeof( a )
    let t1 = typeof( 1 )
    if ( ta == Array [...] of Int && t1 == Int ) {
      for ( int i = 0; i < 1000000; i++ ) {
        for ( int j = 0; j < 1000000; j++ ) {
          let va  = value( a[i, j] )
          let v1  = value( 1 )        // simplifying
          let res = integer_addition( va, v1 )
          a[i, j]_result_part = res
        }
      }
    } else {
      ...
    }

Alas, at compile-time the compiler does not have enough information to make this optimisation safely.
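To make the pay-off concrete, here is a minimal sketch in Java (not from the slides; the names and the Object-based encoding of a "dynamically typed" array are assumptions for illustration). The fast path checks the array's type once and then runs an unchecked typed loop; the slow path re-checks every element, as in the expanded pseudocode above.

    // Hypothetical illustration of hoisting a type check out of a hot loop.
    public class HoistDemo {
        // Fast path: one check on the whole array, hoisted out of the loop.
        static long sum(Object a) {
            if (a instanceof int[]) {
                int[] ints = (int[]) a;
                long total = 0;
                for (int x : ints) total += x;   // no per-element type checks
                return total;
            }
            return sumGeneric((Object[]) a);     // fall back to the slow path
        }

        // Slow path: a type check on every iteration, as in the pseudocode.
        static long sumGeneric(Object[] a) {
            long total = 0;
            for (Object x : a) {
                if (x instanceof Integer) total += (Integer) x;
                else throw new IllegalArgumentException("not an Int: " + x);
            }
            return total;
        }
    }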

Let's summarise the situation.

  ◮ Certain powerful optimisations cannot be done at compile-time, because the compiler has not got enough information to know they are safe.
  ◮ At run-time we have enough information to carry out these optimisations.

Hmmm, what could we do ...

How about we compile and optimise only at run-time?

But there is no run-time if we don't have a compilation process, right?

Enter interpreters!

Interpreters

Recall from the beginning of the course that interpreters are a second way to run programs.

  [Diagram: compiler route: source program → compiler → executable, which at run-time consumes the data and produces output. Interpreter route: source program and data → interpreter → output.]

  ◮ Compilers generate a program that has an effect on the world.
  ◮ Interpreters affect the world directly.
  ◮ The advantage of compilers is that the generated code is faster, because a lot of work has to be done only once (e.g. lexing, parsing, type-checking, optimisation), and the results of this work are shared by every execution. The interpreter has to redo this work every time.
  ◮ The advantage of interpreters is that they are much simpler than compilers.
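As a minimal sketch of the idea (a toy AST invented for illustration, not the course's interpreter), a tree-walking interpreter evaluates the program directly instead of emitting an executable:

    // Toy expression language; evaluating the tree *is* running the program.
    interface Expr { int eval(); }

    record Lit(int n) implements Expr {
        public int eval() { return n; }                            // a literal is its own value
    }

    record Add(Expr left, Expr right) implements Expr {
        public int eval() { return left.eval() + right.eval(); }   // evaluate children, then add
    }

    // Usage: running 2 + 3 produces 5 immediately; no code is generated.
    // int result = new Add(new Lit(2), new Lit(3)).eval();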

JIT compiler, key idea

Interpret the program, and compile (parts of) the program at run-time. This suggests the following questions.

  ◮ When shall we compile, and which parts of the program?
  ◮ How do interpreter and compiled program interact?
  ◮ But most of all: compilation is really slow, especially optimising compilation. Don't we make performance worse if we slow an already slow interpreter down with a lengthy compilation process?

In other words, we are facing the following conundrum:

  ◮ We want to optimise as much as possible, because optimised programs run faster.
  ◮ We want to optimise as little as possible, because running the optimisers is really slow.

Hmmmm ...

Pareto principle and compiler/interpreter ∆ to our rescue

  [Diagram: execution time compared. Interpreter: the cost of running is paid on every execution. Compiler: the cost of compiling is paid once; each subsequent run of the compiled code is much cheaper.]

Interpretation is much faster than (optimising) compilation. But a compiled program is much faster than interpretation. And we have to compile only once.

Combine this with the Pareto principle, and you have a potent weapon at hand.
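To illustrate with made-up numbers: suppose interpreting one loop iteration costs 200 ns, an iteration of the compiled loop costs 10 ns, and compiling the loop costs 5 ms. Compilation then breaks even after 5,000,000 ns / (200 ns − 10 ns) ≈ 26,000 iterations. For a loop that runs a billion times, pure interpretation takes about 200 s, while compiling once and running the compiled code takes about 10 s.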

Pareto principle, aka 80-20 rule

Vilfredo Pareto, late 19th / early 20th century Italian economist, noticed:

  ◮ 80% of the land in Italy was owned by 20% of the population.
  ◮ 20% of the pea pods in his garden contained 80% of the peas.

This principle applies in many other areas of life, including program execution:

The great majority of a program's execution time is spent running in a tiny fragment of the code. Such code is referred to as hot.

Putting the pieces together

Clearly, compiling code at run-time that is executed infrequently will slow down execution. The trade-offs are different for hot code.

An innermost loop may be executed billions of times; the more often, the more optimising compilation pays off. Pareto's principle tells us that (typically) a program contains some hot code. With the information available at run-time, we can aggressively optimise such hot code and get a massive speed-up. The rest is interpreted. The sluggishness of interpretation doesn't matter, because it accounts for only a fraction of program execution time.

There is just one problem ... how do we find hot code?

Remember, at compile-time the optimiser couldn't work it out (reliably). Let's use counters at run-time!

We instrument the interpreter with counters that are incremented every time a method is called, or every time we go round a loop. Whenever a counter reaches a threshold, we assume that the associated code is hot. We compile that hot code, and jump to the compiled code. When the compiled code terminates, we switch back to interpretation.

Making this play nice with garbage collection, exceptions, concurrency and debugging isn't easy ...

In a picture

  [Flowchart: interpret the source code, incrementing a counter as we go. If the code is not yet hot, keep interpreting. Once the counter marks it as hot, compile and optimise the hot code, then execute the compiled code to termination.]
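Here is a hedged sketch of that loop in Java (all types, names and the threshold are invented for illustration; a real VM is vastly more sophisticated):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical stand-ins for a VM's internal structures.
    interface Method { String name(); }
    interface CompiledCode { void run(); }

    class CountingInterpreter {
        static final int HOT_THRESHOLD = 10_000;           // made-up threshold
        final Map<Method, Integer> counters = new HashMap<>();
        final Map<Method, CompiledCode> codeCache = new HashMap<>();

        void call(Method m) {
            CompiledCode c = codeCache.get(m);
            if (c != null) { c.run(); return; }            // reuse already compiled code
            if (counters.merge(m, 1, Integer::sum) >= HOT_THRESHOLD) {
                c = compileAndOptimise(m);                 // slow, but paid only once
                codeCache.put(m, c);
                c.run();                                   // jump to the compiled code
            } else {
                interpret(m);                              // cheap per call
            }
        }

        // Stubs standing in for the real compiler and interpreter.
        CompiledCode compileAndOptimise(Method m) { return () -> {}; }
        void interpret(Method m) { }
    }

Note that the code cache in the sketch also addresses a point made below: we don't want to recompile code we have already compiled.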

Aside

Have you noticed that Java programs start up quite slowly? This is because at the beginning everything is interpreted, hence slow. Then JIT compilation starts, which is also slow. Eventually the hot code is detected and compiled with a great deal of optimisation, and execution gets really fast.
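You can watch this happen on HotSpot: running a program with java -XX:+PrintCompilation prints a line each time the VM compiles a method, and the stream of compilations visibly tapers off once the hot methods have been dealt with.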

The devil is in the details

This picture omits many subtleties.

  ◮ Chief among them: the handover of control from interpreter to compiled code and back must work seamlessly.
  ◮ We don't want to recompile code, so we typically keep a cache of already compiled code.
  ◮ How do we actually do the optimisations, taking the information available at run-time into account?
  ◮ Etc. etc.

JIT compilers summary

JIT compilers are the cutting edge of compiler technology. They were first conceived (in rudimentary form) in the 1960s, but came to life in the last 10 years or so.

JIT compilers are very complicated. The JVM, probably the best-known JIT compiler, probably took 1000+ person-years to build.

So what's next in compiler technology? Let me introduce you to ...

Tracing JIT compilers

Tracing JIT compilers are a form of JIT compilation where optimisation is especially aggressive. Hot code can contain code that is not used (much).
