  1. Boosting Convergence of Timing Closure using Feature Selection in a Learning-driven Approach Que Yanghua, Harnhua Ng, Nachiket Kapre yanghua.que@ntu.edu.sg, nachiket@ieee.org

  2. Claim • Feature Selection helps boost AUC scores for Timing Closure ML models by ~10% • ML models predict timing closure of a design by modifying CAD tool parameters — commercial tool InTime, by Plunify Inc. • For Altera Quartus 
 — ~80 parameters narrowed down to 8-22 influential parameters 2

  6. FPGA CAD Flow [Diagram: Verilog/VHDL Code → FPGA CAD Tool → Bitstream (area, delay, power)] 3

  7. FPGA CAD Flow [Diagram: CAD parameters + Verilog/VHDL Code → FPGA CAD Tool → Bitstream (area, delay, power)] 4

  8. FPGA CAD Flow [Diagram: CAD parameters + Verilog/VHDL Code → FPGA CAD Tool → Bitstream (area, delay, power); histogram of frequency of occurrence vs. Total Negative Slack (TNS) for the ecg benchmark, TNS ranging roughly -6500 to -3000] 5

  9. InTime High-Level View • Position: 
 — Verified RTL designs are expensive to edit 
 — For timing closure, use CAD parameters instead • InTime 
 — leave the RTL untouched, play with CAD tool parameters 
 — Problem: exhaustive search is intractable 
 — Solution: use machine learning! 6
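
Aside: a back-of-the-envelope check in R of why exhaustive search is intractable. The counts are illustrative — Quartus parameters are not all binary, and many take more than two settings:

```r
# Why exhaustive search over CAD parameters is intractable: even if all
# ~80 Quartus parameters were merely binary (many have more settings),
# the search space is astronomical.
print(2^80)          # ~1.21e+24 distinct parameter combinations
print(2^80 / 1e6)    # ~1.21e+18 days even at a million CAD runs per day
```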

  13. InTime High-Level View [FPGA’15 Designer’s Day] Preliminary results on customer designs (limited ability to discuss specifics) 
 [FCCM’15 Full] Extended results quantifying ML effects on open-source benchmarks 
 [FPGA’16 Short] Case for “design-specific” learning instead of building a generic model 
 [FCCM’16 Short] Classifier accuracy exploration across ML strategies, and hyper-parameter tuning 7

  14. Outline • Brief intro of InTime flow and ML techniques • Justifying the approach 
 — Opportunity for using ML (Slack distribution) 
 — The need for running ML (Entropy/Correlation) • Review of Feature Selection • Experimental results 
 — Impact of features/run samples 
 — ROC curves across designs 
 — Comparing vs. FCCM’16 results 8

  15. Outline • Brief intro of InTime flow and ML techniques • Justifying the approach 
 — Opportunity for using ML (Slack distribution) 
 — The need for running ML (Entropy/Correlation) • Review of Feature Selection • Experimental results 
 — Impact of features/run samples 
 — ROC curves across designs 
 — Comparing vs. FCCM’16 results 9

  16. InTime High-Level View • Position: 
 — Verified RTL designs are expensive to edit 
 — For timing closure, use CAD parameters instead • InTime 
 — leave the RTL untouched, play with CAD tool parameters 
 — Problem: exhaustive search is intractable 
 — Solution: use machine learning! 10

  17. How InTime works • Simply tabulate results 
 — record input CAD parameters + timing slack • Build a model for predicting [GOOD/BAD] 11
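
As a concrete (hypothetical) illustration of this tabulate-then-classify loop, here is a minimal R sketch: each CAD run contributes one row of parameter settings plus its achieved slack, and the GOOD/BAD label comes from thresholding slack at zero. The file name runs.csv and the slack/label columns are assumptions for illustration, not InTime's actual code:

```r
# Sketch of the tabulate-then-classify idea (not the actual InTime code;
# file and column names are assumptions).
library(randomForest)

# One row per CAD run: parameter settings plus the achieved worst slack (ns).
runs <- read.csv("runs.csv", stringsAsFactors = TRUE)

# Label each run GOOD/BAD by thresholding slack at zero.
runs$label <- factor(ifelse(runs$slack >= 0, "GOOD", "BAD"))
runs$slack <- NULL   # classify from CAD parameters alone

# Fit a classifier, then score a candidate parameter combination
# (row 1 is reused here purely for illustration).
fit <- randomForest(label ~ ., data = runs)
candidate <- runs[1, setdiff(names(runs), "label"), drop = FALSE]
print(predict(fit, candidate, type = "prob"))  # P(GOOD) before any CAD run
```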

  18. How InTime works 12

  19. Outline • Brief intro of InTime flow and ML techniques • Justifying the approach 
 — Opportunity for using ML (Slack distribution) 
 — The need for running ML (Entropy/Correlation) • Review of Feature Selection • Experimental results 
 — Impact of features/run samples 
 — ROC curves across designs 
 — Comparing vs. FCCM’16 results 13

  20. Q&A • Does this really work? • What’s the opportunity in timing slack spread? • Do we really need machine learning? • How unique are the final converged solutions? • What is the coverage scope of our tool? 14

  21. Does this really work? 15

  22. Results — No Learning 16

  23. Results — with Learning 17

  24. 18

  25. What’s the opportunity in timing slack spread? 19

  26. Parameter Exploration 20

  27. Do we really need machine learning? 21

  28. Results (aes) 22

  29. Results (aes) — best classification 23

  30. How unique are the final converged solutions? 24

  31. Dissimilarity 25

  32. What is the coverage scope of our tool? 26

  33. Entropy in solutions 27
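
For reference, the entropy in question is the usual Shannon measure over the values each CAD parameter takes across converged solutions. A minimal R sketch, assuming a hypothetical converged.csv table with one row per converged solution:

```r
# Sketch: Shannon entropy of each CAD parameter across converged solutions.
shannon <- function(x) {
  p <- table(x) / length(x)   # empirical distribution of the values taken
  -sum(p * log2(p))           # H(X) = -sum_i p_i log2 p_i
}
solutions <- read.csv("converged.csv", stringsAsFactors = TRUE)  # assumed file
print(sort(sapply(solutions, shannon), decreasing = TRUE))
```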

  34. So, what’s the bottom line? 28

  35. 29

  36. Outline • Brief intro of InTime flow and ML techniques • Justifying the approach 
 — Opportunity for using ML (Slack distribution) 
 — The need for running ML (Entropy/Correlation) • Review of Feature Selection • Experimental results 
 — Impact of features/run samples 
 — ROC curves across designs 
 — Comparing vs. FCCM’16 results 30

  37. Feature Selection • Hypothesis: Not all CAD parameters affect the timing outcome • Can we find the most relevant parameters? • Feature selection: a well-known technique in ML circles 
 — avoids noise during classification 
 — avoids over-fitting 31

  43. Techniques • OneR — uses frequency of class labels • Information.Gain — uses an entropy measure • Relief — instance-based (nearest-neighbour) weighting of parameters • Ensemble — a combination of the above…
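
A minimal R sketch of how these rankings could be computed with the FSelector package, whose function names match the slide (oneR, information.gain, relief); the runs data frame and label column are the same assumptions as in the earlier sketch, and an ensemble would combine the resulting rankings:

```r
# Sketch: ranking ~80 CAD parameters with the FSelector package.
# 'runs' and 'label' are assumed names, as in the earlier sketch.
library(FSelector)

runs <- read.csv("runs.csv", stringsAsFactors = TRUE)

w.oner <- oneR(label ~ ., runs)               # frequency of class labels
w.ig   <- information.gain(label ~ ., runs)   # entropy-based measure
w.rel  <- relief(label ~ ., runs,             # instance-based weighting
                 neighbours.count = 5, sample.size = 20)

# Keep the top-k parameters; the talk lands on 8-22 influential ones.
feats <- cutoff.k(w.ig, k = 15)
print(as.simple.formula(feats, "label"))
```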

  44. Outline • Brief intro of InTime flow and ML techniques • Justifying the approach 
 — Opportunity for using ML (Slack distribution) 
 — The need for running ML (Entropy/Correlation) • Review of Feature Selection • Experimental results 
 — Impact of features/run samples 
 — ROC curves across designs 
 — Comparing vs. FCCM’16 results 33

  45. Q&A • How effective is feature selection? • How long does the learning process take? • What is the impact of choosing feature count? 34

  46. How effective is feature selection? 35

  47. 36

  48. Classifier method doesn’t matter 37

  49. Baseline FCCM 2016 result 38

  50. 39

  51. 2-3x reduction in parallel FPGA CAD runs 40

  52. Outlier — fails to meet timing and quits 41

  53. How long does it take to learn? 42

  54. 43

  55. Need at least 20 runs 44

  56. Need a 3 rounds × 30 runs configuration 45

  57. Better AUC the more we run 46

  58. How do we choose the correct subset of features? 47

  59. 48

  60. Goldilocks zone 49

  61. Too many features — large training set 50

  62. Too few features — more data required for other features 51
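
Putting the two failure modes together, a hedged R sketch of how one might sweep the feature count k and watch test AUC peak in the middle. The holdout split, the k values, and the runs.csv/label names are all illustrative assumptions:

```r
# Sketch: sweep the number of selected features k and track holdout AUC.
library(FSelector); library(randomForest); library(pROC)

runs  <- read.csv("runs.csv", stringsAsFactors = TRUE)
idx   <- sample(nrow(runs), 0.7 * nrow(runs))
train <- runs[idx, ]
test  <- runs[-idx, ]

# Rank parameters on training data only, then evaluate each subset size.
weights <- information.gain(label ~ ., train)
aucs <- sapply(c(5, 8, 15, 22, 40, 80), function(k) {
  feats <- cutoff.k(weights, k)
  fit   <- randomForest(as.simple.formula(feats, "label"), data = train)
  probs <- predict(fit, test, type = "prob")[, "GOOD"]
  as.numeric(auc(roc(test$label, probs)))
})
print(aucs)   # expect a peak at moderate k: the "Goldilocks zone"
```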

  63. Conclusions • Feature Selection helps boost the AUC of InTime machine learning by ~10% • Key idea — prune the set of Quartus CAD tool parameters explored down to <22 • Evidence continues to point towards design-specificity 52

  64. Open-source flow • We are open-sourcing our ML routines 
 — http://bitbucket.org/spinosae/plunify-ml.git 
 — README.md contains instructions for installing and running on your machine • Requires R (dependencies installed automatically) 53

  65. Impact of feature count 54

  66. 55

  67. Goldilocks zone 56

  68. 57

  69. Information.Gain consistently best 58

  70. 59

  71. Goldilocks zone 60

  72. 61
