

  1. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
     David L. Dill, Stanford University

  2. Based on: “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”
     Guy Katz, Clark Barrett, David Dill, Kyle Julian and Mykel Kochenderfer
     Available on arXiv: 1702.01135
     Accepted at the CAV conference in July.

  3. Artificial Neural Networks
     • An emerging solution to a wide variety of problems
     • A “black box” solution
       ◦ An advantage, but also a drawback
     • Goal: prove that some properties of a NN are guaranteed

  4. Case Study: ACAS Xu
     • New standard being developed for collision avoidance advisories for unmanned aircraft
     • Produces advisories:
       1. Strong left (SL)
       2. Weak left (L)
       3. Weak right (R)
       4. Strong right (SR)
       5. Clear of conflict (COC)
     • Implementation: an array of 45 neural networks
       ◦ Under consideration by the FAA

  5. Certifying ACAS Xu Networks
     • Neural networks generalize to previously-unseen inputs
     • Show that certain properties hold for all inputs
     • Examples:
       ◦ If the intruder is distant, the advisory is always COC
       ◦ If the intruder is nearby on the left, the advisory is always “strong right”
     • Crucial for increasing the level of confidence

  6. Reasoning About Neural Nets
     • Can we manually prove properties?
     • Networks are too big to reason about manually
       ◦ ACAS Xu networks: 8 layers, 310 nodes
       ◦ (times 45)

  7. Verifying Neural Nets
     • A possible answer: automatic verification
     • … but existing verification tools don’t scale up!
     • A difficult (NP-complete) problem

  8. Activation Functions
     • The difficulty: Rectified Linear Units (ReLUs)
     • [Diagram: a small example network; its two weighted sums are
       1 ⋅ 1 + (−2) ⋅ 3 + 0 ⋅ (−2) = −5 and 2 ⋅ 1 + 0 ⋅ 3 + (−1) ⋅ (−2) = 4]
     • ReLU(x) = max(0, x)
       ◦ x ≥ 0: active case, return x
       ◦ x < 0: inactive case, return 0
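
To make the computation on this slide concrete, the following minimal Python sketch reproduces the two weighted sums from the example and applies ReLU(x) = max(0, x) to each. The split of the numbers into inputs and weights is one plausible reading of the slide's figure, not something taken from the paper.

```python
# Minimal sketch of one ReLU step, using the example numbers from this slide.
def relu(x):
    # ReLU(x) = max(0, x): active case returns x, inactive case returns 0.
    return max(0.0, x)

inputs = [1.0, 3.0, -2.0]                 # assumed input values
weights = [[1.0, -2.0, 0.0],              # 1*1 + (-2)*3 + 0*(-2)  = -5
           [2.0, 0.0, -1.0]]              # 2*1 + 0*3  + (-1)*(-2) =  4

for row in weights:
    weighted_sum = sum(w * x for w, x in zip(row, inputs))
    print(weighted_sum, "->", relu(weighted_sum))   # -5.0 -> 0.0, then 4.0 -> 4.0
```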

  9. Activation Functions (cont’d)
     • ReLUs require case-splitting:
       ◦ Fix active/inactive states of ReLUs
       ◦ Solve the resulting sub-problem (no activation functions)
     • Property holds iff it holds for all sub-problems
     • n ReLUs imply 2^n combinations
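
The blow-up described here is easy to picture in code. The sketch below is the naive enumeration of all 2^n active/inactive assignments; `solve_linear_subproblem` is a hypothetical placeholder for whatever linear solver handles a ReLU-free sub-problem. It illustrates why this approach scales poorly, not how Reluplex works.

```python
# Naive case-splitting sketch: try every active/inactive assignment up front.
# `solve_linear_subproblem` is a hypothetical placeholder for an LP/theory solver.
from itertools import product

def naive_verify(relu_names, solve_linear_subproblem):
    """Property holds iff it holds under every one of the 2^n assignments."""
    for phases in product(("active", "inactive"), repeat=len(relu_names)):
        assignment = dict(zip(relu_names, phases))
        # With every ReLU's case fixed, the sub-problem contains no activation
        # functions and can be handled by plain linear reasoning.
        if not solve_linear_subproblem(assignment):
            return False            # this case violates the property
    return True                     # all 2^n cases checked

# 3 ReLUs => 8 sub-problems (here each one trivially "holds").
print(naive_verify(["r1", "r2", "r3"], lambda assignment: True))
```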

  10. Our Contributions
     • A technique for solving linear programs with ReLUs
       ◦ Can encode neural networks as input
     • Extends the simplex method
       ◦ Reluplex: ReLUs with simplex
     • Does not require case splitting in advance
       ◦ ReLU constraints satisfied incrementally
       ◦ Split only if we must
     • Scales to the ACAS Xu networks
       ◦ An order of magnitude larger than previously possible

  11. Agenda
     • Reluplex
     • Evaluation on the ACAS Xu Networks
     • Conclusion

  12. Agenda
     • Reluplex
     • Evaluation on the ACAS Xu Networks
     • Conclusion

  13. ReLUs As Variable Pairs
     [Diagram: a small example network in which each ReLU node is split into two variables]
     • Split each ReLU into:
       1. A weighted-sum variable x_w
       2. An activation-function variable x_a
     • Allow ReLU violations (x_a ≠ ReLU(x_w))
     • Fix them gradually during simplex
     • Split cases (x_w < 0, x_w ≥ 0) as a last resort
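
To make the pair idea concrete, here is a rough Python sketch using the x_w / x_a names from this slide: an assignment may temporarily violate x_a = ReLU(x_w), and a repair step moves one of the two variables to restore the constraint. In the real algorithm such updates interact with the simplex tableau and the variables' bounds; that bookkeeping is deliberately omitted here.

```python
# Sketch: each ReLU becomes a pair (x_w, x_a) whose constraint x_a == ReLU(x_w)
# may be temporarily violated and then repaired.  (Simplex bookkeeping omitted.)
def relu(x):
    return max(0.0, x)

def is_violated(x_w, x_a):
    return x_a != relu(x_w)

def repair(x_w, x_a, move_activation=True):
    """Fix a violated pair by moving one of its two variables."""
    if not is_violated(x_w, x_a):
        return x_w, x_a
    if move_activation:
        return x_w, relu(x_w)       # pull the activation variable into line
    return x_a, x_a                 # or pull the weighted sum up to x_a (x_a >= 0)

print(repair(-5.0, 4.0))                          # (-5.0, 0.0): x_a set to ReLU(-5)
print(repair(4.0, 0.0, move_activation=False))    # (0.0, 0.0): x_w moved down to x_a
```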

  14. Termination
     • Can we always find a solution using pivots and updates?
     • No: sometimes we get into a loop
     • Have to split on ReLU variables
     • Use a Satisfiability Modulo Theories (SMT) framework to manage the decision tree resulting from splitting
       ◦ A generic, efficient framework for combining Boolean reasoning (i.e. splitting) with reasoning in a theory (in this case, linear arithmetic with ReLUs)
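
The decision tree that splitting creates can be pictured with a deliberately simplified, runnable toy: decide whether the sum of ReLU(x_i) can reach a threshold c when each x_i ranges over an interval. A ReLU whose interval straddles zero has an undecided case; splitting it into its inactive (x_i ≤ 0) and active (x_i ≥ 0) halves makes the function linear on each branch, so the leaves of the tree need only a linear check. This is only an illustration of the split-and-recurse structure, not the Reluplex procedure itself, which splits only after repeated repairs fail.

```python
# Toy illustration of the decision tree produced by splitting on ReLU cases.
def max_relu_sum(bounds):
    # Once every interval lies entirely in one case, ReLU is linear on it and
    # the maximum of the sum is attained at the upper endpoints.
    return sum(max(0.0, hi) for lo, hi in bounds)

def reachable(bounds, c):
    """Can sum_i ReLU(x_i) >= c for some x with lo_i <= x_i <= hi_i?"""
    for i, (lo, hi) in enumerate(bounds):
        if lo < 0.0 < hi:                        # this ReLU's case is undecided
            inactive = bounds[:i] + [(lo, 0.0)] + bounds[i + 1:]
            active   = bounds[:i] + [(0.0, hi)] + bounds[i + 1:]
            return reachable(inactive, c) or reachable(active, c)
    return max_relu_sum(bounds) >= c             # all cases decided: linear check

print(reachable([(-1.0, 2.0), (-3.0, 1.0)], 2.5))   # True  (2 + 1 >= 2.5)
print(reachable([(-1.0, 2.0), (-3.0, 1.0)], 3.5))   # False (maximum is 3)
```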

  15. The Reluplex Method
     • A method for solving linear programs with ReLUs
     • Sound and terminating
     • Efficient
       ◦ For an efficient implementation, see the paper

  16. Agenda
     • Reluplex
     • Evaluation on the ACAS Xu Networks
     • Conclusion

  17. Properties of Interest
     1. No unnecessary turning advisories
     2. Alerting regions are consistent
     3. Strong alerts do not appear when vertical separation is large

  18. A Few Simple Properties
     • Satisfiable properties: y ≥ c for an output node y
     • Solver performance (times in seconds; “-” means no answer within the time limit):

                  φ1     φ2    φ3    φ4    φ5    φ6    φ7    φ8
       CVC4        -      -     -     -     -     -     -     -
       Z3          -      -     -     -     -     -     -     -
       Yices       1     37     -     -     -     -     -     -
       MathSat  2040   9780     -     -     -     -     -     -
       Gurobi      1      1     1     -     -     -     -     -
       Reluplex   11      3     9    10   155     7    10    14

  19. Example 1
     • “If the intruder is near and approaching from the left, the network advises strong right”
       ◦ Distance: 12000 ≤ ρ ≤ 62000
       ◦ Angle to intruder: 0.2 ≤ θ ≤ 0.4
       ◦ Intruder’s heading angle: −π ≤ ψ ≤ −π + 0.005
       ◦ Ownship speed: 100 ≤ v_own ≤ 400
       ◦ Intruder speed: 0 ≤ v_int ≤ 400
       ◦ Previous advisory: COC
       ◦ Time to loss of vertical separation: τ = 0
     • Proof time: 01:29:29 (using 4 machines)
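
As a rough illustration of what such a query looks like when written down, the sketch below encodes the input box from this slide plus the requirement that the network's advisory be strong right. The network function, the advisory index, and the "lowest score is selected" convention are illustrative assumptions, not the actual ACAS Xu interface. Note that sampling points inside the box can at best find a counterexample; proving the property for every point in the box is exactly what Reluplex is for.

```python
import math
import random

# Hypothetical encoding of the query on this slide (names and conventions are
# assumptions made for illustration, not the actual ACAS Xu interface).
STRONG_RIGHT = 3                      # assumed index of the "strong right" advisory
INPUT_BOUNDS = {                      # (lower, upper) bounds from the slide
    "rho":   (12000.0, 62000.0),      # distance to intruder
    "theta": (0.2, 0.4),              # angle to intruder
    "psi":   (-math.pi, -math.pi + 0.005),   # intruder heading angle
    "v_own": (100.0, 400.0),          # ownship speed
    "v_int": (0.0, 400.0),            # intruder speed
}

def property_holds_at(network, point):
    """Assumes the advisory with the lowest score is the one selected."""
    scores = network(point)
    return scores.index(min(scores)) == STRONG_RIGHT

def search_for_counterexample(network, tries=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(tries):
        point = {n: rng.uniform(lo, hi) for n, (lo, hi) in INPUT_BOUNDS.items()}
        if not property_holds_at(network, point):
            return point              # property falsified at this point
    return None                       # no counterexample found -- NOT a proof

# Dummy stand-in network that always prefers strong right, just to run the sketch.
dummy_network = lambda point: [1.0, 1.0, 1.0, 0.0, 1.0]
print(search_for_counterexample(dummy_network) is None)   # True
```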

  20. Example 2
     • “For a distant intruder, the network advises COC”
       ◦ Distance: 36000 ≤ ρ ≤ 60760
       ◦ Angle to intruder: 0.7 ≤ θ ≤ π
       ◦ Intruder’s heading angle: −π ≤ ψ ≤ −π + 0.01
       ◦ Ownship speed: 900 ≤ v_own ≤ 1200
       ◦ Intruder speed: 600 ≤ v_int ≤ 1200
       ◦ Previous advisory: strong right
       ◦ Time to loss of vertical separation: τ = 20
     • Proof time: 01:25:15 (using 4 machines)

  21. Example 3
     • “If vertical separation is large and the previous advisory is weak left, the network advises COC or weak left”
       ◦ Distance: 0 ≤ ρ ≤ 60760
       ◦ Angle to intruder: −π ≤ θ ≤ −0.75 ⋅ π
       ◦ Intruder’s heading angle: −0.1 ≤ ψ ≤ 0.1
       ◦ Ownship speed: 600 ≤ v_own ≤ 1200
       ◦ Intruder speed: 600 ≤ v_int ≤ 1200
       ◦ Previous advisory: weak left
       ◦ Time to loss of vertical separation: τ = 100
     • Time to find counterexample: 11:08:22 (using 1 machine)
       ◦ Previously observed also in simulation

  22. Agenda
     • Reluplex
     • Evaluation on the ACAS Xu Networks
     • Conclusion

  23. Conclusion
     • Reluplex: a technique for solving linear programs with ReLUs
     • Can encode neural networks and properties as Reluplex inputs
     • Scalable
     • Sound and terminating
       ◦ Modulo floating point

  24. Next Steps
     • More complete verification of ACAS Xu
     • Improving performance
       ◦ Adopt more SMT techniques (e.g. conflict analysis)
       ◦ Adopt more linear programming techniques (e.g. sum of infeasibilities)
     • Soundness guarantees
       ◦ Replay floating-point computation using precise arithmetic (see the sketch below)
       ◦ Proof certificates
     • More expressiveness
       ◦ Additional kinds of properties
       ◦ Additional kinds of activation functions
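
The "replay with precise arithmetic" bullet can be pictured with a few lines of Python using the standard fractions module: rerun a floating-point computation with exact rationals and check that the float result is close to the exact one. This is only a sketch of the concept, under the assumption that the original data were short decimals; it is not the mechanism in the Reluplex implementation.

```python
from fractions import Fraction

def dot(ws, xs):
    return sum(w * x for w, x in zip(ws, xs))

def replay_exact(weights, inputs, float_result, tol=1e-6):
    """Recompute a weighted sum with exact rationals and compare to the float value."""
    # limit_denominator() recovers the intended decimal (e.g. 0.1 -> 1/10);
    # this assumes the original data were short decimals.
    exact = dot([Fraction(w).limit_denominator() for w in weights],
                [Fraction(x).limit_denominator() for x in inputs])
    return abs(float(exact) - float_result) <= tol

ws, xs = [0.1, 0.2, 0.3], [1.0, 2.0, 3.0]
print(replay_exact(ws, xs, dot(ws, xs)))   # True: the float result survives the exact replay
```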

  25. Thank You! Questions?
     Available on arXiv: 1702.01135
