A posteriori soundness for nondeterministic abstract interpretations
Matthew Might (University of Utah) • Panagiotis Manolios (Northeastern University)
Questions you don’t want at your defense
• “But, why did you prove it that way?”
• “But, why is that necessary?”
• “So, why did the Cousots do it that way?”
Nondeterministic Abstract Interpretation
• Where did it come from? Frustration with the standard recipe.
• How do you prove it sound? A posteriori proof technique.
• Why would you want to use it? Better speed, better precision.
Outline
• Review standard recipe.
• Find annoyances.
• Get rid of them.
The Standard Recipe
• Define concrete state-space: L
• Define concrete semantics: f : L → L
• Define abstract state-space: L̂
• Define abstraction map: α : L → L̂
• Define abstract semantics: f̂ : L̂ → L̂
• Prove f̂ simulates f under α.
The A Posteriori Recipe
• Define concrete state-space: L
• Define concrete semantics: f : L → L
• Define abstract state-space: L̂
• Define abstract semantics: f̂ : L̂ → 2^L̂
• Execute the abstract semantics to obtain ℓ̂′ = f̂(ℓ̂).
• Define abstraction map: α : L → L̂
• Prove ℓ̂′ simulates f under α.
Illustrating the Standard Recipe
Malloc: The Language
lab : v := malloc()
Concrete Semantics
State = Instruction × Store
f(ς) = ς′
For a fresh allocation:
f(([[v := malloc()]] : i⃗, σ)) = (i⃗, σ[v ↦ a′])
where a′ = alloc(ς) = max(range(σ)) + 1
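As a minimal executable sketch of this concrete step (the names `alloc`, `step`, and the tuple encoding of states are mine, not from the talk):

```python
# Hypothetical model of the concrete malloc semantics: a state is a pair
# (remaining instructions, store), and a fresh address is one past the
# largest address already in the store's range.

def alloc(store):
    """alloc(ς) = max(range(σ)) + 1, with 0 for the empty store."""
    return max(store.values(), default=0) + 1

def step(state):
    """f(([[v := malloc()]] : i, σ)) = (i, σ[v ↦ a′])"""
    (instr, *rest), store = state
    lab, v = instr                      # instruction "lab : v := malloc()"
    a = alloc(state[1])
    return (rest, {**store, v: a})

state = ([("A", "x"), ("B", "y")], {})
state = step(step(state))
print(state[1])                         # {'x': 1, 'y': 2}
```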
Abstract Semantics
Ŝtate = Instruction × Ŝtore
f̂(([[v := malloc()]] : i⃗, σ̂)) = (i⃗, σ̂[v ↦ â])
where â = âlloc(ς̂)   (drawn from some finite set)
What to allocate?
• Abstract addresses = Scarce resource
• Avoid over-allocation: Good for speed
• Avoid under-allocation: Good for precision
Example: Over-allocation
(diagram: one concrete value, 3, is spread across two abstract addresses â₁ and â₂, when a single merged address â₁,₂ would do)
Example: Under-allocation
(diagram: two distinct concrete values, 3 and 4, are merged into a single abstract address â′, when separate addresses â₁ and â₂ would keep them apart)
Allocation heuristics
Observation: Objects from like contexts act alike.
Example: âlloc(([[lab : ...]] : i⃗, _)) = lab
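The label heuristic can be sketched in a few lines (again a hypothetical model with my own names; the abstract allocator ignores the store and keys purely on the allocating label, so the address pool is finite):

```python
# Hypothetical model of the label-based abstract allocator:
# âlloc(([[lab : ...]] : i, _)) = lab, one abstract address per label.

def alloc_hat(state_hat):
    (instr, *rest), store_hat = state_hat
    lab, v = instr
    return lab                          # the label is the abstract address

def step_hat(state_hat):
    """Abstract malloc step: bind v to the label-derived address."""
    (instr, *rest), store_hat = state_hat
    lab, v = instr
    return (rest, {**store_hat, v: alloc_hat(state_hat)})

s = ([("A", "x"), ("B", "y")], {})
s = step_hat(step_hat(s))
print(s[1])                             # {'x': 'A', 'y': 'B'}
```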
Annoyance: Soundness
If α(ς) ⊑ ς̂ then α(f(ς)) ⊑ f̂(ς̂).
For malloc, this requires in particular: if α(ς) ⊑ ς̂ then α_Addr(alloc(ς)) ⊑ âlloc(ς̂).
The Issue
alloc(_, σ) = max(range(σ)) + 1
âlloc(([[lab : ...]] : i⃗, _)) = lab
What abstraction map will work here?
Example
A : x := malloc()
B : y := malloc()
σ = [x ↦ 1, y ↦ 2]
α_Addr = [1 ↦ A, 2 ↦ B]

Now run the two allocations in the opposite order:
B : y := malloc()
A : x := malloc()
σ = [x ↦ 2, y ↦ 1]
α_Addr = [1 ↦ B, 2 ↦ A]

No single, fixed abstraction map covers both executions.
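The order-dependence can be demonstrated directly (a hypothetical driver, `run`, of my own, reusing the fresh-address allocator from the slides):

```python
# Hypothetical demo of the issue: run the fresh-address allocator on the
# two instruction orders and compare the address-to-label maps each run
# would force the abstraction map α_Addr to be.

def run(instrs):
    store, origin = {}, {}
    for lab, v in instrs:
        a = max(store.values(), default=0) + 1   # alloc = max(range(σ)) + 1
        store[v] = a
        origin[a] = lab                          # the α_Addr this run induces
    return store, origin

s1, alpha1 = run([("A", "x"), ("B", "y")])
s2, alpha2 = run([("B", "y"), ("A", "x")])
print(alpha1)   # {1: 'A', 2: 'B'}
print(alpha2)   # {1: 'B', 2: 'A'}
# The two runs demand contradictory maps, so no fixed α_Addr works.
```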
Standard Solution
Change the concrete semantics!
Addr = ℕ × Lab
alloc([[lab : ...]], σ) = (max(range(σ)₁) + 1, lab)
α_Addr(_, lab) = lab
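A minimal sketch of this instrumented semantics (my own encoding: addresses are (number, label) pairs, and the abstraction map becomes the second projection):

```python
# Hypothetical model of the standard fix: concrete addresses carry the
# allocating label, Addr = N × Lab, so α_Addr is just a projection.

def alloc(lab, store):
    counts = [n for (n, _) in store.values()]
    return (max(counts, default=0) + 1, lab)     # (max(range(σ)₁) + 1, lab)

def alpha_addr(addr):
    _, lab = addr
    return lab                                   # α_Addr(_, lab) = lab

store = {}
store["x"] = alloc("A", store)
store["y"] = alloc("B", store)
print(store)                                          # {'x': (1, 'A'), 'y': (2, 'B')}
print({v: alpha_addr(a) for v, a in store.items()})   # {'x': 'A', 'y': 'B'}
```

Now the abstraction map no longer depends on the order of execution, but only because the concrete semantics was rewritten to smuggle the label in.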
Another problem
Heuristics sometimes make stupid decisions.
Why not adapt on the fly?
Example: Greedy Strategy
Heuristic says, “Allocate â₁, and bind 4.” But â₁ already holds 3, so binding 4 there would merge the two values.
Adaptive allocator says, “Try r(â₁) first,” splitting off a fresh variant of â₁ instead.
Later, heuristic says, “Allocate â₂, and bind 3.” But 3 already lives at â₁, so â₂ would be wasted.
Adaptive allocator says, “Just use â₁.”
Dynamic Optimization
Given m abstract addresses, how should they be allocated to maximize precision?
So, why not?
You can’t within the confines of the standard recipe. (Counter-example in the paper.)
Making it so
• Factor allocation out of the semantics.
• Make allocation nondeterministic.
• Prove nondeterministic allocation sound.
Locative = Address
(But also times, bindings, contours, etc.)
Factoring out allocation
f : State → State
(diagram: ς steps to ς′)
F : State → Loc → State
(diagram: ς steps to ς′ along an edge labeled by the chosen locative ℓ)
f̂ : Ŝtate → 2^Ŝtate
(diagram: ς̂ branches nondeterministically to ς̂′, ς̂″, ς̂‴)
F̂ : Ŝtate → 2^(L̂oc → Ŝtate)
(diagram: ς̂ branches to ς̂′, ς̂″, ς̂‴ along edges labeled by the chosen locatives ℓ̂′, ℓ̂″, ℓ̂‴)
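The factored, nondeterministic abstract step can be sketched as follows (a simplified, hypothetical model: I represent the choices F̂ offers at a state as a map from candidate locative to successor, with a fixed two-address pool `LOCS` of my own invention):

```python
# Hypothetical model of the factored nondeterministic abstract malloc:
# instead of committing to one abstract address, the semantics offers one
# successor per candidate locative, and the allocator chooses among them.

LOCS = ["a1", "a2"]                     # the finite pool of abstract addresses

def F_hat(state_hat):
    """Return the choice set: candidate locative ↦ successor state."""
    (instr, *rest), store_hat = state_hat
    lab, v = instr
    return {loc: (rest, {**store_hat, v: loc}) for loc in LOCS}

choices = F_hat(([("A", "x")], {}))
for loc, succ in sorted(choices.items()):
    print(loc, "->", succ[1])
# a1 -> {'x': 'a1'}
# a2 -> {'x': 'a2'}
```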
Nondeterministic Abstract Interpretation
• Sealed abstract transition graphs.
• Factored abstraction maps.
• A posteriori soundness condition.
Transition Graphs
• Nodes = States
• Edges = Transitions, each labeled by the chosen locative
Sealed Graphs
A graph is sealed under a factored semantics iff every state has an edge to cover every transition.
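The sealedness condition can be checked mechanically; here is a hedged sketch (my own encoding: each state maps to a set of (locative, successor) edges, and `F_hat` returns the set of transition functions ĥ : L̂oc → Ŝtate the state must cover):

```python
# Hypothetical sealedness check: a graph is sealed iff, for every state,
# every transition function h offered by F_hat is witnessed by some edge
# (loc, h(loc)) out of that state.

def is_sealed(graph, F_hat):
    for state, edges in graph.items():
        for h in F_hat(state):
            if not any(h(loc) == succ for (loc, succ) in edges):
                return False            # transition h is not covered
    return True

# Toy instance: one state "s" whose F_hat offers two transition functions.
h1 = lambda loc: ("s1", loc)
h2 = lambda loc: ("s2", loc)
F_hat = lambda state: [h1, h2] if state == "s" else []

sealed   = {"s": [("l1", ("s1", "l1")), ("l2", ("s2", "l2"))]}
unsealed = {"s": [("l1", ("s1", "l1"))]}
print(is_sealed(sealed, F_hat))    # True
print(is_sealed(unsealed, F_hat))  # False
```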
Example: Unsealed Graph
(diagram: a large abstract transition graph; at least one state lacks edges covering all of its transitions)
(diagram: the state ς̂ has edges labeled ℓ̂₁ and ℓ̂₂ leading to ĥ₁(ℓ̂₁) and ĥ₂(ℓ̂₂))
F̂(ς̂) = {ĥ₁, ĥ₂, ĥ₃}
No edge covers ĥ₃, so ς̂ is not covered and the graph is unsealed.