
Nonlinear Polynomials, Interpolants and Invariant Generation for System Analysis
Deepak Kapur
Department of Computer Science, University of New Mexico, Albuquerque, NM, USA
with Rodríguez-Carbonell, Zhihai Zhang, Hengjun Zhao, Stephan Falke,


Two Approaches for Generating Loop Invariants Automatically

1. Ideal-Theoretic Methods
◮ Properties of programs are specified by a conjunction of polynomial equations.
◮ Associated with every program location is an invariant (radical) ideal.
◮ Semantics of program constructs are modeled as ideal-theoretic (algebraic variety) operations, implemented using Gröbner basis computations (a small illustration follows below).
◮ Existence of a finite basis is ensured by Hilbert's Basis Theorem.
◮ Approximations and fixed point computation are used to generate such ideals.

Papers with Enric Rodríguez-Carbonell in ISSAC (2004), SAS (2004), ICTAC (2004), Science of Computer Programming (2007), Journal of Symbolic Computation (2007).
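As a toy illustration of the ideal-theoretic view (not the algorithm of the papers above): the polynomial equalities derived for the running example later in the talk generate an ideal, and a consequence such as x = (z + 1)^2 corresponds to membership of x − (z + 1)^2 in that ideal, which can be checked with a Gröbner basis. The snippet below is a minimal sketch using sympy; it is only meant to show the kind of computation involved.

import sympy as sp

x, y, z = sp.symbols('x y z')
# the polynomial invariants derived for the running example later in the talk
gens = [z**2 - y*z + z + x - y, y**2 - 2*z - 4*x + 3*y, y - 2*z - 1]
G = sp.groebner(gens, x, y, z, order='lex')
# ideal membership of the consequence x - (z + 1)**2
print(G.contains(x - (z + 1)**2))   # True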

2. Quantifier Elimination of Program Variables from Parameterized Formulas
Papers in ACA (2004), Journal of Systems Science and Complexity (2006).

Geometric and Local Heuristics for Quantifier Elimination for Automatically Generating Octagonal Invariants
Papers in TAMC (2012), McCune Memorial (2013).

Interplay of Computational Logic and Algebra

Generating Loop Invariants: Approach
◮ Guess/fix the shape of invariants of interest at various program locations, with some parameters which need to be determined.
◮ Here is an illustration of the generation of nonlinear invariants:
I(x, y, z): A x^2 + B y^2 + C z^2 + D xy + E xz + F yz + G x + H y + J z + K = 0.
◮ Generate verification conditions from the code using the hypothesized invariant.
◮ VC1: at the first possible entry of the loop (from initialization): A + B + D + G + H + K = 0.
◮ VC2: for every iteration of the loop body: (I(x, y, z) ∧ x ≤ N) ⇒ I(x + y + 2, y + 2, z + 1).
◮ Using quantifier elimination, find constraints on the parameters A, B, C, D, E, F, G, H, J, K which ensure that the verification conditions are valid for all possible values of the program variables.

Quantifier Elimination from Verification Conditions

Considering VC2:
◮ (A x^2 + B y^2 + C z^2 + D xy + E xz + F yz + G x + H y + J z + K = 0) ⇒ (A (x+y+2)^2 + B (y+2)^2 + C (z+1)^2 + D (x+y+2)(y+2) + E (x+y+2)(z+1) + F (y+2)(z+1) + G (x+y+2) + H (y+2) + J (z+1) + K = 0).
◮ Expanding the conclusion gives:
A x^2 + (A+B+D) y^2 + C z^2 + (2A+D) xy + E xz + (E+F) yz + (4A+2D+E+G) x + (4A+4B+4D+E+F+G+H) y + (2C+2E+2F+J) z + (4A+4B+C+4D+2E+2F+2G+2H+J+K) = 0.
◮ Simplifying using the hypothesis gives:
(A+D) y^2 + 2A xy + E yz + (4A+2D+E) x + (4A+4B+4D+E+F+G) y + (2C+2E+2F) z + (4A+4B+C+4D+2E+2F+2G+2H+J) = 0.
◮ Since this should be 0 for all values of x, y, z, we have A + D = 0, A = 0 and E = 0, which imply D = 0; using these gives 2C + 2F = 0, i.e., C = −F; using all of these: G = −4B − F, H = −G − K − B and J = −2B − F + 2K. (A mechanized version of this coefficient matching is sketched below.)
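The coefficient matching above can be mechanized with a computer algebra system. The sketch below uses sympy and assumes the loop body x := x + y + 2; y := y + 2; z := z + 1 from VC2 together with an initial state x = y = 1, z = 0 (one initialization consistent with VC1); it only illustrates the step, and is not the tool used in the papers.

import sympy as sp

x, y, z = sp.symbols('x y z')
A, B, C, D, E, F, G, H, J, K = params = sp.symbols('A B C D E F G H J K')

# parametric invariant I(x, y, z) = 0
inv = A*x**2 + B*y**2 + C*z**2 + D*x*y + E*x*z + F*y*z + G*x + H*y + J*z + K

# effect of one loop iteration (simultaneous update, as in VC2)
inv_next = inv.subs({x: x + y + 2, y: y + 2, z: z + 1}, simultaneous=True)

# inv = 0 must imply inv_next = 0 for all x, y, z; subtracting the hypothesis
# leaves a polynomial that must vanish identically, so every coefficient
# with respect to x, y, z must be zero.
coeff_constraints = sp.Poly(sp.expand(inv_next - inv), x, y, z).coeffs()

# VC1: the invariant holds on loop entry (here x = 1, y = 1, z = 0)
vc1 = inv.subs({x: 1, y: 1, z: 0})

# solving the linear system over the parameters gives a description equivalent
# to the constraints derived by hand (A = D = E = 0, C = -F, G = -4B - F, ...);
# sympy may present it with a different choice of free parameters.
print(sp.solve(coeff_constraints + [vc1], params, dict=True))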

Generating the Strongest Invariant
◮ Constraints on the parameters are: C = −F, J = −2B − F + 2K, G = −4B − F, H = 3B + F − K.
◮ Every value of the parameters satisfying the above constraints leads to an invariant (including the trivial invariant true when all parameter values are 0).
◮ 7 parameters and 4 equations, so 3 independent parameters, say B, F, K. Setting each independent parameter to 1 in turn, with the other independent parameters 0, derive the values of the dependent parameters.
◮ K = 1, H = −1, J = 2 gives −y + 2z + 1 = 0.
◮ F = 1, C = −1, J = −1, G = −1, H = 1 gives −z^2 + yz − x + y − z = 0.
◮ B = 1, J = −2, G = −4, H = 3 gives y^2 − 4x + 3y − 2z = 0.
◮ The most general invariant describing all invariants of the above form is the conjunction of
z^2 − yz + z + x − y = 0, y^2 − 2z − 4x + 3y = 0, y = 2z + 1,
from which x = (z + 1)^2 follows (a runtime sanity check is sketched below).
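As a quick sanity check, independent of the derivation, the three generated equalities can be tested along concrete executions of the assumed loop body x := x + y + 2; y := y + 2; z := z + 1 starting from x = y = 1, z = 0. This is only a runtime check of the result above, in plain Python.

x, y, z = 1, 1, 0                      # initial state consistent with VC1
for _ in range(20):                    # check the first 20 loop-head states
    assert z**2 - y*z + z + x - y == 0
    assert y**2 - 2*z - 4*x + 3*y == 0
    assert y == 2*z + 1 and x == (z + 1)**2
    x, y, z = x + y + 2, y + 2, z + 1  # simultaneous update of the loop body
print("invariants hold on the first 20 iterations")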

Method for Automatically Generating Invariants by Quantifier Elimination
◮ Hypothesize assertions, which are parameterized formulas, at various points in a program.
◮ Typically the entry of every loop and the entry and exit of every procedure suffice.
◮ Nested loops and procedure/function calls can be handled.
◮ Generate verification conditions for every path in the program (a path from an assertion to another assertion, possibly itself).
◮ Depending upon the logical language chosen to write invariants, approximations of assignments and test conditions may be necessary.
◮ Find a formula expressed in terms of the parameters by eliminating all program variables (using quantifier elimination).

Quality of Invariants: Soundness and Completeness
◮ Every assignment of parameter values which makes the formula true gives an inductive invariant.
◮ If no parameter values can be found, then invariants of the hypothesized form may not exist. Invariants can be guaranteed not to exist if no approximations are made while generating the verification conditions.
◮ If all assignments making the formula true can be finitely described, the invariants generated may be the strongest of the hypothesized form. The invariants generated are guaranteed to be the strongest if no approximations are made while generating the verification conditions.

How to Scale this Approach
◮ Quantifier elimination methods typically do not scale up due to high complexity, even in this restricted ∃∀ case.
◮ Even for Presburger arithmetic, the complexity is doubly exponential in the number of quantifier alternations and triply exponential in the number of quantified variables.
◮ Output is huge and difficult to decipher.
◮ In practice, they often do not work (i.e., run out of memory or hang).
◮ Linear constraint solving over the rationals and reals (the polyhedral domain), while of polynomial complexity, has been found in practice to be inefficient and slow, especially when used repeatedly as in the abstract interpretation approach [Miné].

Making the QE-based Method Practical
◮ Identify (atomic) formulas and program abstractions resulting in verification conditions with good shape and structure.
◮ Develop QE heuristics which exploit the local structure of formulas (e.g., two variables at a time) and the geometry of the state space defined by the formulas.
◮ Among the many possibilities in a result after QE, identify those most likely to be useful.
◮ Octagonal formulas: l ≤ ±x ± y ≤ h, a highly restricted subset of linear constraints (at most two variables, with coefficients from {−1, 0, 1}).
◮ This fragment is the most expressive fragment of linear arithmetic over the integers with a polynomial time decision procedure.
◮ Max/Min formulas: max(±x − l, ±y − h) ≥ 0, expressing a disjunction; e.g., max(x − l, y − h) ≥ 0 is ((x − l ≥ y − h ∧ x − l ≥ 0) ∨ (y − h ≥ x − l ∧ y − h ≥ 0)) (a quick check of this equivalence follows below).
◮ Combinations of octagonal and max formulas.
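That equivalence can be checked by brute force over a small integer grid; the bounds l = 2 and h = −1 below are illustrative only, and this is just a sanity check of the logical rewriting, in plain Python.

l, h = 2, -1                          # illustrative bounds
for x in range(-5, 6):
    for y in range(-5, 6):
        max_form = max(x - l, y - h) >= 0
        disjunction = ((x - l >= y - h and x - l >= 0) or
                       (y - h >= x - l and y - h >= 0))
        assert max_form == disjunction
print("max formula and disjunction agree on the sample grid")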

Octagonal Formulas
◮ Octagonal formulas over two variables have a fixed shape; their parameterization can be given using 8 parameters (a small sketch of this representation follows below).
◮ Given n variables, the most general formula (after simplification) has the following form, for every pair of variables x_i, x_j:
⋀_{i,j} (Octa_{i,j}: a_{i,j} ≤ x_i − x_j ≤ b_{i,j} ∧ c_{i,j} ≤ x_i + x_j ≤ d_{i,j} ∧ e_i ≤ x_i ≤ f_i ∧ g_j ≤ x_j ≤ h_j),
where a_{i,j}, b_{i,j}, c_{i,j}, d_{i,j}, e_i, f_i, g_j, h_j are parameters.
◮ The class of programs that can be analyzed is very restricted. Still, using octagonal constraints (and other heuristics), ASTRÉE is able to successfully analyze hundreds of thousands of lines of code of numerical software for array bound checks, memory faults, and related bugs.
◮ The algorithms used in ASTRÉE are of O(n^3) complexity (sometimes O(n^4)), where n is the number of variables (Miné, 2003).
◮ Goal: the performance of the QE heuristic should be at least as good.
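For concreteness, here is a minimal Python sketch of the 8-parameter octagon over one pair of variables; the class and method names are illustrative and not taken from any existing analyzer. It is instantiated with the octagonal invariant computed for the simple example later in the talk.

from dataclasses import dataclass
import math

@dataclass
class Octagon:
    # a <= xi - xj <= b,  c <= xi + xj <= d,  e <= xi <= f,  g <= xj <= h
    a: float = -math.inf
    b: float = math.inf
    c: float = -math.inf
    d: float = math.inf
    e: float = -math.inf
    f: float = math.inf
    g: float = -math.inf
    h: float = math.inf

    def contains(self, xi, xj):
        return (self.a <= xi - xj <= self.b and self.c <= xi + xj <= self.d
                and self.e <= xi <= self.f and self.g <= xj <= self.h)

# the octagonal invariant computed for the simple example later in the talk
oct_xy = Octagon(a=-10, b=9, c=-11, d=10, e=-6, f=6, g=-5, h=6)
print(oct_xy.contains(4, 6))   # True: the initial state x = 4, y = 6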

A Simple Example

x := 4; y := 6;
while (x + y >= 0) do
  if (y >= 6) then { x := -x; y := y - 1 }
  else { x := x - 1; y := -y }
endwhile

VC0: I(4, 6).
VC1: (I(x, y) ∧ (x + y) ≥ 0 ∧ y ≥ 6) ⇒ I(−x, y − 1).
VC2: (I(x, y) ∧ (x + y) ≥ 0 ∧ y < 6) ⇒ I(x − 1, −y).

Approach: Local QE Heuristics
◮ A program path is a sequence of assignment statements interspersed with tests. Its behavior may have to be approximated so that, in the generated verification condition, both the hypothesis and the conclusion are conjunctions of atomic octagonal formulas.
◮ A verification condition is thus expressed using atomic formulas that are all octagonal constraints,
⋀_{i,j} ((Octa_{i,j} ∧ α(x_i, x_j)) ⇒ Octa′_{i,j}),
along with additional parameter-free constraints α(x_i, x_j) of the same form, in which the lower and upper bounds are constants.
◮ Such a big conjunctive constraint can be analyzed locally: each distinct pair of variables is considered individually, through the subformula on that pair.

Geometric QE Heuristic
◮ Analyze how a general octagon gets transformed due to assignments. For each assignment case, a table is built showing the effect on the parameter values.
◮ Identify conditions under which the transformed octagon includes the portion of the original octagon satisfying the tests along a program path. This is again guided locally, for every side of the octagon.
◮ In the case of many possibilities, the one likely to generate the most useful invariant is identified.
◮ The quantifier elimination heuristic generates constraints on the lower and upper bounds by table look-ups, in O(n^2) steps, where n is the number of program variables.

Table 3: Sign of exactly one variable is changed.
Assignment: x := −x + A; y := y + B, with ∆1 = A − B, ∆2 = A + B.
The parametric octagon is l1 ≤ x − y ≤ u1, l2 ≤ x + y ≤ u2, l3 ≤ x ≤ u3, l4 ≤ y ≤ u4; a, b, c, d, e, f, g, h are the constant bounds coming from the tests along the path.

[Figure omitted; its annotations: ∆2 − u2 ≤ x − y, x − y ≤ ∆2 − l2, l2 ≤ x + y, x + y ≤ u2, x − y ≤ a.]

  side constraint   condition if test present   condition if test absent   side condition
  x − y ≤ a         a ≤ ∆2 − l2                 u1 ≤ ∆2 − l2               –
  x − y ≥ b         ∆2 − u2 ≤ b                 ∆2 − u2 ≤ l1               –
  x + y ≤ c         c ≤ ∆1 − l1                 u2 ≤ ∆1 − l1               –
  x + y ≥ d         ∆1 − u1 ≤ d                 ∆1 − u1 ≤ l2               –
  x ≤ e             e ≤ A − l3                  u3 ≤ A − l3                –
  x ≥ f             A − u3 ≤ f                  A − u3 ≤ l3                –
  y ≤ g             u4 ≥ g + B                  u4 = +∞                    B > 0
  y ≥ h             l4 ≤ h + B                  l4 = −∞                    B < 0
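As a sanity check, the first row of the table above can be verified by brute force over a small integer grid: when the test x − y ≤ a is present and a ≤ ∆2 − l2 holds, the transformed state satisfies the re-established side l2 ≤ x + y. The constants below are arbitrary and used only for illustration.

import itertools

A, B = 3, -2                              # illustrative constants in the assignment
D2 = A + B                                # Delta2 = A + B
rng = range(-4, 5)
for x, y, a, l2 in itertools.product(rng, repeat=4):
    if x - y <= a and a <= D2 - l2:       # test present + table condition
        xp, yp = -x + A, y + B            # the assignment x := -x + A; y := y + B
        assert xp + yp >= l2              # the re-established octagon side l2 <= x + y
print("first table entry verified on the grid")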

Generating Constraints on Parameters
◮ VC0: l1 ≤ −2 ≤ u1 ∧ l2 ≤ 10 ≤ u2 ∧ l3 ≤ 4 ≤ u3 ∧ l4 ≤ 6 ≤ u4.
◮ VC1:
x − y: −u2 − 1 ≤ l1 ∧ u1 ≤ −l2 − 1.
x + y: −u1 + 1 ≤ 0 ∧ u2 ≤ −l1 + 1.
x: l3 + u3 = 0.
y: l4 ≤ 5.
◮ VC2:
x − y: −u2 − 1 ≤ −u1 ∧ 10 ≤ −l2 − 1.
x + y: l1 + 1 ≤ 0 ∧ u2 ≤ u1 + 1.
x: l3 ≤ −6.
y: −u4 ≤ l4 ∧ 5 ≤ −l4.
◮ Make the li's as large as possible and the ui's as small as possible: l1 = −10, u1 = 9, l2 = −11, u2 = 10, l3 = −6, u3 = 6, l4 = −5, u4 = 6.
◮ The corresponding invariant is: −10 ≤ x − y ≤ 9 ∧ −11 ≤ x + y ≤ 10 ∧ −6 ≤ x ≤ 6 ∧ −5 ≤ y ≤ 6 (a concrete-execution check of this invariant is sketched below).
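A quick concrete check, in plain Python, that this octagonal invariant holds at every loop-head state of the example program above; this only tests the one execution of the program and is not a proof.

def inv(x, y):
    return (-10 <= x - y <= 9 and -11 <= x + y <= 10
            and -6 <= x <= 6 and -5 <= y <= 6)

x, y = 4, 6
while x + y >= 0:
    assert inv(x, y)                 # the invariant holds at the loop head
    if y >= 6:
        x, y = -x, y - 1
    else:
        x, y = x - 1, -y
print("invariant held at every loop-head visit; final state:", x, y)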

Generating Invariants using Table Look-ups
◮ Parameter constraints corresponding to a specific program path are read from the corresponding entries in the tables.
◮ Accumulate all such constraints on the parameter values. They are also octagonal.
◮ Every parameter value that satisfies the parameter constraints leads to an invariant.
◮ The maximal values of the lower bounds and the minimal values of the upper bounds satisfying the parameter constraints give the strongest invariant. These extremal values can be computed using the Floyd-Warshall algorithm (a small sketch follows below).
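Below is a minimal sketch of that tightening step for the special case where the accumulated parameter constraints are plain difference constraints p − q ≤ c; the full octagonal case uses the standard encoding with doubled ± variables. The function name and the small example, built from two of the constraints of the simple example, are illustrative only.

import math

def tighten(names, constraints):
    """constraints: list of (p, q, c) meaning p - q <= c; returns the closure."""
    idx = {n: i for i, n in enumerate(names)}
    n = len(names)
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for p, q, c in constraints:
        d[idx[p]][idx[q]] = min(d[idx[p]][idx[q]], c)
    for k in range(n):                      # Floyd-Warshall shortest paths
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d                                # d[i][j]: tightest bound on names[i] - names[j]

# two constraints from the example: u2 <= u1 + 1 (VC2) and 10 <= u2 (VC0),
# written as u2 - u1 <= 1 and zero - u2 <= -10 using a dummy "zero" node
names = ["zero", "u1", "u2"]
d = tighten(names, [("u2", "u1", 1), ("zero", "u2", -10)])
print(d[0][1])                              # -9, i.e. u1 >= 9, as in the invariant above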

Complexity and Parallelization
◮ Overall complexity: O(k · n^2).
◮ For every pair of program variables, parametric constraint generation is constant time: 8 constraints, so 8 table entries.
◮ The parametric constraints are decomposed based on the parameters appearing in them: there are O(n^2) such constraints on disjoint blocks of parameters of size ≤ 4.
◮ Program paths can be analyzed in parallel, and the parametric constraints can be processed in parallel.

Max Formulas
Pictorial representation of all possible cases of max(±x + l, ±y + h). Observe that every defined region is nonconvex (a small nonconvexity check follows below).

[Figure panels:]
max(x − l8, −y + u8) ≥ 0 (top left corner)       max(−x + u5, −y + u6) ≥ 0 (top right corner)
max(x − l5, y − l6) ≥ 0 (bottom left corner)     max(−x + u7, y − l7) ≥ 0 (bottom right corner)
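A tiny demonstration that such a region is not convex, using one of the cases with illustrative bounds (l = 2 and u = 0 below are arbitrary): two points satisfy the max formula but their midpoint does not.

l, u = 2, 0                                    # illustrative bounds

def in_region(x, y):
    return max(x - l, -y + u) >= 0             # one of the max-formula regions

p, q = (3.0, 5.0), (0.0, -1.0)
mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
print(in_region(*p), in_region(*q), in_region(*mid))   # True True False: not convex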
