Towards Machine Learning for Quantification
Mikoláš Janota
AITP, 28 March 2018
IST/INESC-ID, University of Lisbon, Portugal

Outline. Intro: QBF, Expansion, Games, Careful Expansion


Quantification and Two-player Games

• In this talk we consider the prenex form: quantifier prefix, then matrix.
  Example: ∀u1 u2 ∃e1 e2 . (¬u1 ∨ e1) ∧ (u2 ∨ ¬e2)
• A QBF represents a two-player game between ∀ and ∃.
• ∀ wins a game if the matrix becomes false.
• ∃ wins a game if the matrix becomes true.
• A QBF is false iff there exists a winning strategy for ∀.
• A QBF is true iff there exists a winning strategy for ∃.

Example: ∀u ∃e . (u ↔ e). The ∃-player wins by playing e ← u.
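
To make the game reading concrete, here is a minimal Python sketch of a brute-force evaluator that plays out the quantifier prefix; the names (qbf_true, prefix) are illustrative and this is not any of the solvers discussed later:

    def qbf_true(prefix, matrix, assignment=None):
        """prefix: list of ('forall' | 'exists', var); matrix: dict -> bool."""
        assignment = dict(assignment or {})
        if not prefix:
            return matrix(assignment)
        quant, var = prefix[0]
        branches = (qbf_true(prefix[1:], matrix, {**assignment, var: b})
                    for b in (False, True))
        # ∀: the matrix must hold on both branches; ∃: on at least one.
        return all(branches) if quant == 'forall' else any(branches)

    # The slide's example ∀u ∃e . (u ↔ e): ∃ wins by copying u.
    print(qbf_true([('forall', 'u'), ('exists', 'e')], lambda a: a['u'] == a['e']))  # True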

Solving QBF

Solving by CEGAR Expansion

∃E ∀U . φ ≡ ∃E . ⋀_{µ ∈ 2^U} φ[µ]

The expansion ⋀_{µ ∈ 2^U} φ[µ] can be solved by SAT. Impractical: it has 2^|U| conjuncts!

Observe: ∃E . ⋀_{µ ∈ 2^U} φ[µ] ⇒ ∃E . ⋀_{µ ∈ ω} φ[µ] for some ω ⊆ 2^U. What is a good ω?
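
The blow-up is easy to see in code. Below is a small Python sketch of full expansion followed by brute-force solving of the resulting purely existential formula; expand_and_solve and the lambda-encoded matrix are illustrative assumptions, not part of any referenced solver:

    from itertools import product

    def expand_and_solve(E, U, phi):
        """phi(tau, mu) -> bool. Returns a winning assignment for E, or None."""
        # One conjunct per assignment µ to U: 2^|U| of them.
        all_mu = [dict(zip(U, bits)) for bits in product([False, True], repeat=len(U))]
        for bits in product([False, True], repeat=len(E)):
            tau = dict(zip(E, bits))
            if all(phi(tau, mu) for mu in all_mu):  # every expanded conjunct holds
                return tau
        return None

    # ∃e ∀u . (u → e): the only winning move is e = 1.
    print(expand_and_solve(['e'], ['u'], lambda t, m: (not m['u']) or t['e']))  # {'e': True}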

Solving by CEGAR Expansion, Contd.

∃E ∀U . φ ≡ ∃E . ⋀_{µ ∈ 2^U} φ[µ]

Expand gradually instead [Janota and Marques-Silva, 2011]:
• Pick τ0, an arbitrary assignment to E.
• SAT(¬φ[τ0]) = µ0, an assignment to U.
• SAT(φ[µ0]) = τ1, an assignment to E.
• SAT(¬φ[τ1]) = µ1, an assignment to U.
• SAT(φ[µ0] ∧ φ[µ1]) = τ2, an assignment to E.
• After n iterations: ∃E . ⋀_{i ∈ 1..n} φ[µi].

Abstraction-Based Algorithm for a Winning Move

Algorithm for ∃∀; generalize to an arbitrary number of alternations using recursion [Janota et al., 2012].

Function Solve(∃X ∀Y . φ)
    α ← true                          // start with an empty abstraction
    while true do
        τ ← SAT(α)                    // find a candidate
        if τ = ⊥ then return ⊥
        µ ← Solve(¬φ[X ← τ])          // find a countermove
        if µ = ⊥ then return τ
        α ← α ∧ φ[Y ← µ]              // refine abstraction
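
A runnable Python sketch of the two-level case of this algorithm, with the SAT oracle replaced by naive enumeration so the snippet stays self-contained (the names sat and solve are illustrative):

    from itertools import product

    def sat(variables, formula):
        """Stand-in 'SAT oracle': return a satisfying assignment or None (⊥)."""
        for bits in product([False, True], repeat=len(variables)):
            a = dict(zip(variables, bits))
            if formula(a):
                return a
        return None

    def solve(X, Y, phi):
        """Winning move for ∃X ∀Y. φ, or None if the QBF is false."""
        alpha = []                                  # abstraction: constraints on X
        while True:
            tau = sat(X, lambda a: all(c(a) for c in alpha))  # candidate
            if tau is None:
                return None                         # abstraction unsatisfiable
            mu = sat(Y, lambda a: not phi(tau, a))  # countermove refuting τ
            if mu is None:
                return tau                          # no refutation: τ wins
            alpha.append(lambda a, mu=mu: phi(a, mu))  # refine with φ[Y ← µ]

    # ∃x ∀y . (x ∨ ¬y): the winning move is x = 1.
    print(solve(['x'], ['y'], lambda t, m: t['x'] or not m['y']))  # {'x': True}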

Results, QBF-Gallery ’14, Application Track. [Performance plot not preserved in the transcript.]

Careful Expansion: Good Example

• ∃x… ∀y… . (φ ∧ y): setting the countermove y ← 0 yields false. Stop.
• ∃x… ∀y… . (x ∨ φ): setting the candidate x ← 1 yields true (impossible to falsify). Stop.

Careful Expansion: Bad Example

∃x ∀y . (x ⇔ y)
1. x ← 1, candidate
2. SAT(¬(1 ⇔ y)) … y ← 0, countermove
3. SAT(x ⇔ 0) … x ← 0, candidate
4. SAT(¬(0 ⇔ y)) … y ← 1, countermove
5. SAT((x ⇔ 0) ∧ (x ⇔ 1)) … UNSAT. Stop.

Careful Expansion: Ugly Example

∃x1 x2 ∀y1 y2 . (x1 ⇔ y1) ∨ (x2 ⇔ y2)
1. x1, x2 ← 0, 0
2. SAT(¬((0 ⇔ y1) ∨ (0 ⇔ y2))) … y1 ← 1, y2 ← 1
3. SAT((x1 ⇔ 1) ∨ (x2 ⇔ 1)) … x1, x2 ← 0, 1
4. SAT(¬((0 ⇔ y1) ∨ (1 ⇔ y2))) … y1 ← 1, y2 ← 0
5. SAT(((x1 ⇔ 1) ∨ (x2 ⇔ 1)) ∧ ((x1 ⇔ 1) ∨ (x2 ⇔ 0))) …
6. …

Learning in QBF

Issue

• CEGAR requires 2^n SAT calls for the formula
  ∃x1 … xn ∀y1 … yn . ⋁_{i ∈ 1..n} (xi ⇔ yi)
• BUT: we know that the formula is immediately false if we set yi ← ¬xi:
  ∃x1 … xn ∀y1 … yn . ⋁_{i ∈ 1..n} (xi ⇔ ¬xi)  ≡  ∃x1 … xn . 0
• Idea: instead of plugging in constants, plug in functions.
• Where do we get the functions?

Use Machine Learning [Janota, 2018]

1. Enumerate some number of candidate–countermove pairs.
2. Run a machine learning algorithm to learn a Boolean function for each variable in the inner quantifier.
3. Strengthen the abstraction with the functions.
4. Repeat.
5. Additional heuristic: if a learned function still works, keep it. “Don’t fix what ain’t broke.”

Machine Learning Example

  x1  x2  …  xn  |  y1  y2  …  yn
   0   0  …   0  |   1   1  …   1
   1   0  …   0  |   0   1  …   1
   0   0  …   1  |   1   1  …   0
   0   1  …   1  |   1   0  …   0

• After 2 steps: y1 ← ¬x1, yi ← 1 for i ∈ 2..n.
• SAT((x1 ⇔ ¬x1) ∨ ⋁_{i ∈ 2..n} (xi ⇔ 1))
• After 4 steps: y1 ← ¬x1, y2 ← ¬x2, …
• Eventually we learn the right functions.
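
A sketch of the learning step on samples like the table above, taking n = 3: fit one small classifier per universal variable from the candidate–countermove pairs. The slides use ID3 decision trees; scikit-learn's DecisionTreeClassifier serves here only as an off-the-shelf stand-in:

    from sklearn.tree import DecisionTreeClassifier

    # Rows of the table for n = 3: candidates (x1..x3) and countermoves (y1..y3).
    X = [[0, 0, 0], [1, 0, 0], [0, 0, 1], [0, 1, 1]]
    Y = [[1, 1, 1], [0, 1, 1], [1, 1, 0], [1, 0, 0]]

    for i in range(3):
        tree = DecisionTreeClassifier().fit(X, [row[i] for row in Y])
        # On these samples the learned function is y_i = ¬x_i.
        print(f"y{i + 1} on the samples:", tree.predict(X))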

Current Implementation

• Use CEGAR as before.
• Recursion to generalize to multiple levels as before.
• Refinement as before.
• Every K refinements, learn new functions from the last K samples; refine with them.
• Learning uses decision trees built by the ID3 algorithm (sketched below).
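
Since ID3 is the named learner, here is a minimal Python sketch of its core step: choosing the split variable with maximal information gain over the collected samples. The helper names are illustrative, and a full ID3 would recurse on the chosen split:

    from math import log2

    def entropy(labels):
        """Binary entropy of a list of 0/1 labels."""
        p = sum(labels) / len(labels) if labels else 0.0
        return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

    def best_split(samples, labels):
        """samples: 0/1 feature vectors; labels: 0/1. Index of the best feature."""
        def gain(j):
            split = {0: [], 1: []}
            for row, lab in zip(samples, labels):
                split[row[j]].append(lab)
            remainder = sum(len(s) / len(labels) * entropy(s) for s in split.values())
            return entropy(labels) - remainder
        return max(range(len(samples[0])), key=gain)

    # The label is ¬x1, so splitting on feature 0 has maximal gain.
    print(best_split([[0, 0], [1, 0], [0, 1], [1, 1]], [1, 0, 1, 0]))  # 0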

Current Implementation: Experiments. [Cactus plot: CPU time (s) vs. instances solved, comparing qfun-64, qfun-128, rareqs, qfun-64-f, quabs, and gq.]

Bernays–Schönfinkel (“Effectively Propositional Logic”): Finite Models

Bernays–Schönfinkel (EPR)

∀X . φ
• φ has no further quantifiers and no functions (just predicates and constants).
• φ uses predicates p1, …, pm and constants c1, …, cn.
• Finite model property: the formula has a model iff it has a model of size ≤ n.
• Therefore we can look for a model with the universe {∗1, …, ∗n′}, n′ ≤ n.

CEGAR for Finite Models

∃p1 … pm ∃c1 … cn ∀X . φ    (pi predicates, ci constants, X variables)
1. α ← true
2. Find an interpretation for α: I ← SAT(α)
3. Test the interpretation: µ ← SAT(∃X . ¬φ[I])
4. If there is no counterexample, the formula is true. STOP.
5. Strengthen the abstraction: α ← α ∧ φ[µ/X]
6. GOTO 2
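
A self-contained Python sketch of this loop on a toy instance with a single unary predicate over a small universe; both SAT calls are brute-forced, and the function name and the example φ are assumptions for illustration only:

    from itertools import product

    def find_model(universe, phi):
        """Search for p ⊆ universe with ∀x ∈ universe. phi(p, x), CEGAR-style."""
        omega = []                                  # counterexamples collected so far
        while True:
            # Step 2: an interpretation satisfying the current abstraction.
            I = None
            for bits in product([False, True], repeat=len(universe)):
                p = frozenset(e for e, b in zip(universe, bits) if b)
                if all(phi(p, x) for x in omega):
                    I = p
                    break
            if I is None:
                return None                         # abstraction unsat: no model
            # Step 3: test the interpretation against all ground instances.
            cex = next((x for x in universe if not phi(I, x)), None)
            if cex is None:
                return I                            # Step 4: no counterexample
            omega.append(cex)                       # Step 5: strengthen abstraction

    # Toy query: find p with ∀x . p(x) ⇔ (x is even), over the universe {0..3}.
    print(find_model(range(4), lambda p, x: (x in p) == (x % 2 == 0)))  # frozenset({0, 2})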

Learning in Finite Models’ CEGAR

1. Consider some finite grounding: ∃p1 … pm ∃c1 … cn . ⋀_{µ ∈ ω} φ[µ]    (pi predicates, ci constants)
2. Calculate an interpretation, e.g. by Ackermannization.
3. The interpretation only matters on the existing ground terms.
4. Learn an entire interpretation from the observed values of the existing terms.

Learning in Finite Models’ CEGAR, Example

1. ∀X . p(X1, …, Xn) ⇔ (X1 = t)
2. Ground by {Xi ↦ ∗0} and {X1 ↦ ∗1, X2 ↦ ∗0, …, Xn ↦ ∗0}:
3. (p(∗0, …, ∗0) ⇔ ∗0 = t) ∧ (p(∗1, ∗0, …, ∗0) ⇔ ∗1 = t)
4. Partial interpretation: t ↦ ∗1, p(∗0, …, ∗0) ↦ False, p(∗1, ∗0, …, ∗0) ↦ True
5. Learn: t ↦ ∗1, p(X1, …, Xn) ↦ (X1 = ∗1)

Preliminary Results. [Cactus plot: CPU time (s) vs. instances solved, comparing iprover, vam-fm, cegar+learn, cegar, expand, and cvc4.]

Preliminary Results. [Cactus plot: CPU time (s) vs. instances solved, comparing cegar+learn, cegar, and expand.]

Preliminary Results (Hard: more than 1 s). [Cactus plot: CPU time (s) vs. instances solved, comparing cegar+learn, cegar, and expand.]

Learn vs. CEGAR, Iterations. [Scatter plot: cegar+learn iterations vs. cegar iterations, log–log axes.]

Learn vs. CEGAR, Iterations, Only True. [Scatter plot: cegar+learn iterations vs. cegar iterations, log–log axes.]

Summary and Future

• Observe a formula while solving; learn from that.
• Learning objects in the considered theory (rather than strategies, etc.).
• Learning from Booleans: for … ∃B^n ∀B^m …, learn functions B^n → B.
• Learning interpretations in finite models from partial interpretations: for ∃(D1 × … × Dk ↦ B) ∀F1 × … × Fl …, learn D1 × … × Dk → B.
• How can we learn strategies based on functions?
• Infinite domains?
• Learning in the presence of theories?
