Program Analysis with Local Policy Iteration


  1. Program Analysis with Local Policy Iteration
     George Karpenkov, VERIMAG
     May 6, 2015

  2. Outline
     • Motivation
       ◦ Finding Inductive Invariants
     • Background
       ◦ Template Constraints Domain
       ◦ Policy Iteration Algorithm
       ◦ Path Focusing
     • LPI
       ◦ Introduction
       ◦ Motivation
       ◦ Example
       ◦ Algorithm
       ◦ Contribution
       ◦ Results

  3. Motivation
     • Program verification
     • Finding inductive invariants
     • LPI
       ◦ Scalable algorithm for policy iteration
       ◦ Sent to FMCAD’15

  4. Program Modeling
     • Programs are modeled as Control Flow Automata (CFA)
     • Example program: int i = 0; while (i < 10) { i++; }
     • The CFA has a loop-head node A with an incoming edge i′ = 0 and a self-loop edge i < 10 ∧ i′ = i + 1 (a small encoding sketch follows)
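A minimal Python sketch of one possible CFA encoding for this program; the node names and the loop-exit edge are illustrative assumptions, not from the slides:

    # Hypothetical encoding of the example CFA: nodes are control
    # locations, edges carry transition formulas over (i, i').
    cfa_edges = [
        ("entry", "A",    "i' = 0"),                # int i = 0;
        ("A",     "A",    "i < 10 ∧ i' = i + 1"),   # loop body: i++
        ("A",     "exit", "i >= 10"),               # loop exit (implied)
    ]
    for src, dst, formula in cfa_edges:
        print(f"{src} -> {dst}: {formula}")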

  5. Inductive Invariant
     • Task: verify program properties
     • Proof method: induction over the transition relation τ
     • Aim: find an inductive invariant I
       ◦ Includes the initial state
       ◦ Closed under the transition relation (checked in the sketch below)
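As a concrete check of these two conditions, here is a minimal sketch, assuming the z3py bindings, that verifies i ≤ 10 is inductive for the example loop of the previous slide; the edge encoding and names are ours:

    # I is inductive iff (a) the initial edge lands inside I and
    # (b) one transition step from I stays inside I.
    from z3 import Ints, And, Or, Not, Solver, unsat

    i, ip = Ints("i ip")                    # current and next value of i

    def inv(v):                             # candidate invariant: v <= 10
        return v <= 10

    init = ip == 0                          # edge: i' = 0
    loop = And(i < 10, ip == i + 1)         # edge: i < 10 ∧ i' = i + 1

    s = Solver()
    # A satisfiable counterexample means some edge leaves the invariant.
    s.add(Or(And(init, Not(inv(ip))),
             And(inv(i), loop, Not(inv(ip)))))
    print("inductive" if s.check() == unsat else "not inductive")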

  6. Abstract Interpretation Limitations
     • Usual tool: abstract interpretation
     • Relies on widenings/narrowings to enforce convergence
     • Can be very brittle

  7. Policy Iteration: Historical Perspective
     • Game-theoretic technique
     • Solves Markov decision processes
     • Used for poker AI

  8. Policy Iteration: Introduction
     • Finds the least inductive invariant in the given abstract domain
       ◦ Guarantee: least inductive invariant, not the least invariant in general!
     • Considers the program as a set of equations
     • Game-theoretic algorithm adapted to finding inductive invariants
     • Requires the abstract semantics to be monotone and concave


  9. Template Constraints Domain
     • The domain used in our work
     • Choose linear expressions (templates) to be tracked before the analysis, e.g. x, y, x + y
     • We want to find an inductive invariant x ≤ d₁ ∧ y ≤ d₂ ∧ x + y ≤ d₃ for all control states
     • A domain element is a vector of bounds: (3, 2, 4) corresponds to x ≤ 3 ∧ y ≤ 2 ∧ x + y ≤ 4 (see the sketch below)
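A minimal sketch of such domain elements in Python, with the pointwise join an analysis would use; the helper names are illustrative:

    # Abstract element = one upper bound per template; -inf for every
    # template encodes unreachable (bottom), +inf encodes "no bound".
    import math

    templates = ["x", "y", "x+y"]
    elem = [3, 2, 4]                    # x <= 3 ∧ y <= 2 ∧ x + y <= 4
    bottom = [-math.inf] * len(templates)

    def join(d1, d2):
        # pointwise max: the least element covering both arguments
        return [max(a, b) for a, b in zip(d1, d2)]

    print(join(elem, [1, 5, 2]))        # [3, 5, 4]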

  10. Template Constraints Domain: Abstract Semantics
      • Abstract semantics: the transition relation expressed in the abstract domain
      • Computed by convex optimization:
        ◦ Template x, transition x′ = x + 1, previous element x ≤ 5
        ◦ New element given by max x′ s.t. x′ = x + 1 ∧ x ≤ 5 (the LP sketch below computes 6)
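This optimization is a small linear program. A minimal sketch, assuming scipy is available; the variable layout and encoding are ours:

    # maximize x' subject to x' = x + 1 and x <= 5; variables [x, x'].
    # linprog minimizes, so maximize x' via the objective c = [0, -1].
    from scipy.optimize import linprog

    res = linprog(c=[0, -1],
                  A_ub=[[1, 0]], b_ub=[5],     # x <= 5
                  A_eq=[[-1, 1]], b_eq=[1],    # x' - x = 1
                  bounds=[(None, None), (None, None)])
    print(-res.fun)                            # 6.0, the new bound for x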

  11. Policy Iteration: Simple Example
      • CFA with loop head A: incoming edge i′ = 0, self-loop i < 1000000 ∧ i′ = i + 1
      • Template constraints domain with template set { i }
      • Aim: find the smallest d s.t. i ≤ d is an inductive invariant
      • Use semantic equations for d; the necessary and sufficient condition:
        d = max( sup i′ s.t. i′ = i + 1 ∧ i < 1000000 ∧ i ≤ d,  sup i′ s.t. i′ = 0,  ⊥ )
        ◦ The disjunctions (arguments of the max) come from the multiple edges
        ◦ ⊥ represents the unreachable state
        ◦ We take the supremum, as the answer can be ∞ (unbounded) or −∞ (unreachable)

  12. Policy Iteration: Explanation by Example
      • We have a min-max equation:
        d = max( sup i′ s.t. i′ = i + 1 ∧ i < 1000000 ∧ i ≤ d,  sup i′ s.t. i′ = 0,  ⊥ )
      • We consider separate cases for the disjunctions, replacing each disjunction with one of its arguments
        ◦ E.g. d = sup i′ s.t. i′ = 0
        ◦ Such a choice is referred to as a policy

  13. Policy Iteration: Explanation by Example - 2
      • Chosen policy: d = sup i′ s.t. i′ = 0
      • The resulting simplified system (with no disjunctions) is:
        ◦ Monotone and concave
        ◦ Has ≤ 2 fixpoints
        ◦ Can be solved using LP

  14. Policy Iteration: Example Algorithm Run
      • Equation: d = max( sup i′ s.t. i′ = i + 1 ∧ i < 1000000 ∧ i ≤ d,  sup i′ s.t. i′ = 0,  ⊥ )
      1. Start from the bottom policy: d = sup i′ s.t. ⊥ evaluates to d = −∞
      2. Substitute the value; does not hold:
         −∞ = max( sup i′ s.t. i′ = i + 1 ∧ i < 1000000 ∧ i ≤ −∞,  sup i′ s.t. i′ = 0,  ⊥ )
      3. Increase the value to 0 using the policy d = sup i′ s.t. i′ = 0
      4. Substituting d = 0, again does not hold:
         0 = max( sup i′ s.t. i′ = i + 1 ∧ i < 1000000 ∧ i ≤ 0,  sup i′ s.t. i′ = 0,  ⊥ )
      5. Increase to 1000000 using the policy d = sup i′ s.t. i′ = i + 1 ∧ i < 1000000 ∧ i ≤ d
      6. Substitute d = 1000000: the equation holds!
      (the sketch below replays this run)
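The run above can be replayed with a small Python sketch specialized to this single template and single location. The closed-form fixpoints in solve_policy stand in for the LP solving a real implementation would perform; all names are ours:

    # Policy iteration on d = max(loop_edge(d), init_edge(d), -inf).
    import math

    M = 1000000

    def loop_edge(d):   # sup i' s.t. i' = i + 1 ∧ i < M ∧ i <= d
        return -math.inf if d == -math.inf else min(M - 1, d) + 1

    def init_edge(d):   # sup i' s.t. i' = 0
        return 0

    def solve_policy(f):
        # least fixpoint of the chosen policy; closed-form here,
        # obtained from an LP solver in a real implementation
        return M if f is loop_edge else 0

    d = -math.inf                                  # bottom policy: d = -inf
    while max(loop_edge(d), init_edge(d)) > d:     # equation violated?
        best = max((loop_edge, init_edge), key=lambda f: f(d))
        d = solve_policy(best)                     # value determination
    print(d)                                       # 1000000: i <= d is inductive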
