Reaching Definitions Analysis

kill and gen functions:

  kill_RD([x := a]ℓ) = {(x, ?)} ∪ {(x, ℓ′) | Bℓ′ is an assignment to x in S⋆}
  kill_RD([skip]ℓ)   = ∅
  kill_RD([b]ℓ)      = ∅

  gen_RD([x := a]ℓ) = {(x, ℓ)}
  gen_RD([skip]ℓ)   = ∅
  gen_RD([b]ℓ)      = ∅

Data flow equations, RD=:

  RD_entry(ℓ) = {(x, ?) | x ∈ FV(S⋆)}                      if ℓ = init(S⋆)
              = ⋃ {RD_exit(ℓ′) | (ℓ′, ℓ) ∈ flow(S⋆)}       otherwise

  RD_exit(ℓ)  = (RD_entry(ℓ) \ kill_RD(Bℓ)) ∪ gen_RD(Bℓ)   where Bℓ ∈ blocks(S⋆)

PPA Section 2.1 · © F. Nielson & H. Riis Nielson & C. Hankin (May 2005)
Example: [x:=5]1; [y:=1]2; while [x>1]3 do ([y:=x*y]4; [x:=x-1]5)

kill and gen functions:

  ℓ   kill_RD(ℓ)               gen_RD(ℓ)
  1   {(x,?), (x,1), (x,5)}    {(x,1)}
  2   {(y,?), (y,2), (y,4)}    {(y,2)}
  3   ∅                        ∅
  4   {(y,?), (y,2), (y,4)}    {(y,4)}
  5   {(x,?), (x,1), (x,5)}    {(x,5)}
Example (cont.): [x:=5]1; [y:=1]2; while [x>1]3 do ([y:=x*y]4; [x:=x-1]5)

Equations:

  RD_entry(1) = {(x,?), (y,?)}
  RD_entry(2) = RD_exit(1)
  RD_entry(3) = RD_exit(2) ∪ RD_exit(5)
  RD_entry(4) = RD_exit(3)
  RD_entry(5) = RD_exit(4)

  RD_exit(1) = (RD_entry(1) \ {(x,?), (x,1), (x,5)}) ∪ {(x,1)}
  RD_exit(2) = (RD_entry(2) \ {(y,?), (y,2), (y,4)}) ∪ {(y,2)}
  RD_exit(3) = RD_entry(3)
  RD_exit(4) = (RD_entry(4) \ {(y,?), (y,2), (y,4)}) ∪ {(y,4)}
  RD_exit(5) = (RD_entry(5) \ {(x,?), (x,1), (x,5)}) ∪ {(x,5)}
Example (cont.): [x:=5]1; [y:=1]2; while [x>1]3 do ([y:=x*y]4; [x:=x-1]5)

Smallest solution:

  ℓ   RD_entry(ℓ)                    RD_exit(ℓ)
  1   {(x,?), (y,?)}                 {(y,?), (x,1)}
  2   {(y,?), (x,1)}                 {(x,1), (y,2)}
  3   {(x,1), (y,2), (y,4), (x,5)}   {(x,1), (y,2), (y,4), (x,5)}
  4   {(x,1), (y,2), (y,4), (x,5)}   {(x,1), (y,4), (x,5)}
  5   {(x,1), (y,4), (x,5)}          {(y,4), (x,5)}
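The smallest solution above can be reproduced by naive fixed-point iteration over the equations. The following Python sketch (not from the book; labels and variable names are ad hoc) encodes the kill/gen table for the example program and iterates until the equations stabilise:

```python
# Round-robin iteration for Reaching Definitions on the example program
# [x:=5]1; [y:=1]2; while [x>1]3 do ([y:=x*y]4; [x:=x-1]5).
# '?' stands for "uninitialised".

flow = {(1, 2), (2, 3), (3, 4), (4, 5), (5, 3)}
labels = [1, 2, 3, 4, 5]
init, fv = 1, {'x', 'y'}

kill = {
    1: {('x', '?'), ('x', 1), ('x', 5)},
    2: {('y', '?'), ('y', 2), ('y', 4)},
    3: set(),
    4: {('y', '?'), ('y', 2), ('y', 4)},
    5: {('x', '?'), ('x', 1), ('x', 5)},
}
gen = {1: {('x', 1)}, 2: {('y', 2)}, 3: set(), 4: {('y', 4)}, 5: {('x', 5)}}

entry = {l: set() for l in labels}
exit_ = {l: set() for l in labels}

changed = True
while changed:                      # iterate until the equations stabilise
    changed = False
    for l in labels:
        new_entry = ({(x, '?') for x in fv} if l == init
                     else set().union(*[exit_[lp] for (lp, ln) in flow if ln == l]))
        new_exit = (new_entry - kill[l]) | gen[l]
        if (new_entry, new_exit) != (entry[l], exit_[l]):
            entry[l], exit_[l] = new_entry, new_exit
            changed = True
```

Starting from the empty sets and iterating upwards yields the smallest solution, e.g. RD_entry(3) = {(x,1), (y,2), (y,4), (x,5)} as in the table.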
Why smallest solution?

  [z:=x+y]ℓ; while [true]ℓ′ do [skip]ℓ′′

Equations:

  RD_entry(ℓ)   = {(x,?), (y,?), (z,?)}
  RD_entry(ℓ′)  = RD_exit(ℓ) ∪ RD_exit(ℓ′′)
  RD_entry(ℓ′′) = RD_exit(ℓ′)

  RD_exit(ℓ)   = (RD_entry(ℓ) \ {(z,?)}) ∪ {(z,ℓ)}
  RD_exit(ℓ′)  = RD_entry(ℓ′)
  RD_exit(ℓ′′) = RD_entry(ℓ′′)

After some simplification:

  RD_entry(ℓ′) = {(x,?), (y,?), (z,ℓ)} ∪ RD_entry(ℓ′)

Many solutions to this equation: any superset of {(x,?), (y,?), (z,ℓ)}.
Very Busy Expressions Analysis

An expression is very busy at the exit from a label if, no matter what path
is taken from the label, the expression is always used before any of the
variables occurring in it are redefined.

The aim of the Very Busy Expressions Analysis is to determine:

  for each program point, which expressions must be very busy at the exit
  from the point.

Example (point of interest: the exit from label 1):

  if [a>b]1 then ([x:=b-a]2; [y:=a-b]3) else ([y:=b-a]4; [x:=a-b]5)

The analysis enables a transformation into:

  [t1:=b-a]A; [t2:=a-b]B;
  if [a>b]1 then ([x:=t1]2; [y:=t2]3) else ([y:=t1]4; [x:=t2]5)
Very Busy Expressions Analysis – the basic idea

For an assignment x := a whose two successors carry information N1 and N2:

  X = N1 ∩ N2

  N = (X \ {all expressions with an x}) ∪ {all subexpressions of a}
        \___________ kill ___________/    \_________ gen _________/

(The information flows backwards: X combines the two branches, and N is the
information at the entry to the assignment.)
Very Busy Expressions Analysis

kill and gen functions:

  kill_VB([x := a]ℓ) = {a′ ∈ AExp⋆ | x ∈ FV(a′)}
  kill_VB([skip]ℓ)   = ∅
  kill_VB([b]ℓ)      = ∅

  gen_VB([x := a]ℓ) = AExp(a)
  gen_VB([skip]ℓ)   = ∅
  gen_VB([b]ℓ)      = AExp(b)

Data flow equations, VB=:

  VB_exit(ℓ)  = ∅                                           if ℓ ∈ final(S⋆)
              = ⋂ {VB_entry(ℓ′) | (ℓ′, ℓ) ∈ flowR(S⋆)}      otherwise

  VB_entry(ℓ) = (VB_exit(ℓ) \ kill_VB(Bℓ)) ∪ gen_VB(Bℓ)     where Bℓ ∈ blocks(S⋆)
Example: if [a>b]1 then ([x:=b-a]2; [y:=a-b]3) else ([y:=b-a]4; [x:=a-b]5)

kill and gen functions:

  ℓ   kill_VB(ℓ)   gen_VB(ℓ)
  1   ∅            ∅
  2   ∅            {b-a}
  3   ∅            {a-b}
  4   ∅            {b-a}
  5   ∅            {a-b}
Example (cont.): if [a>b]1 then ([x:=b-a]2; [y:=a-b]3) else ([y:=b-a]4; [x:=a-b]5)

Equations:

  VB_entry(1) = VB_exit(1)
  VB_entry(2) = VB_exit(2) ∪ {b-a}
  VB_entry(3) = {a-b}
  VB_entry(4) = VB_exit(4) ∪ {b-a}
  VB_entry(5) = {a-b}

  VB_exit(1) = VB_entry(2) ∩ VB_entry(4)
  VB_exit(2) = VB_entry(3)
  VB_exit(3) = ∅
  VB_exit(4) = VB_entry(5)
  VB_exit(5) = ∅
Example (cont.): if [a>b]1 then ([x:=b-a]2; [y:=a-b]3) else ([y:=b-a]4; [x:=a-b]5)

Largest solution:

  ℓ   VB_entry(ℓ)   VB_exit(ℓ)
  1   {a-b, b-a}    {a-b, b-a}
  2   {a-b, b-a}    {a-b}
  3   {a-b}         ∅
  4   {a-b, b-a}    {a-b}
  5   {a-b}         ∅
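Because VB is a must-analysis, iteration starts from the full set of expressions and works downwards. The following Python sketch (not from the book) encodes the example and reproduces the largest-solution table:

```python
# Backward iteration for Very Busy Expressions on the example program
# if [a>b]1 then ([x:=b-a]2; [y:=a-b]3) else ([y:=b-a]4; [x:=a-b]5).

AEXP = {"a-b", "b-a"}                       # all non-trivial subexpressions
flow = {(1, 2), (2, 3), (1, 4), (4, 5)}
final = {3, 5}

kill = {l: set() for l in range(1, 6)}      # neither a nor b is ever assigned
gen = {1: set(), 2: {"b-a"}, 3: {"a-b"}, 4: {"b-a"}, 5: {"a-b"}}

# start from the least element of the lattice, i.e. the FULL set AExp⋆
entry = {l: set(AEXP) for l in range(1, 6)}
exit_ = {l: set(AEXP) for l in range(1, 6)}

changed = True
while changed:
    changed = False
    for l in range(1, 6):
        new_exit = (set() if l in final
                    else set.intersection(*[entry[lp] for (lf, lp) in flow if lf == l]))
        new_entry = (new_exit - kill[l]) | gen[l]
        if (new_exit, new_entry) != (exit_[l], entry[l]):
            exit_[l], entry[l] = new_exit, new_entry
            changed = True
```

The iteration removes expressions only when forced to, so it converges to the largest sets solving the equations, e.g. VB_exit(1) = {a-b, b-a}.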
Why largest solution?

  (while [x>1]ℓ do [skip]ℓ′); [x:=x+1]ℓ′′

Equations:

  VB_entry(ℓ)   = VB_exit(ℓ)
  VB_entry(ℓ′)  = VB_exit(ℓ′)
  VB_entry(ℓ′′) = {x+1}

  VB_exit(ℓ)   = VB_entry(ℓ′) ∩ VB_entry(ℓ′′)
  VB_exit(ℓ′)  = VB_entry(ℓ)
  VB_exit(ℓ′′) = ∅

After some simplifications:

  VB_exit(ℓ) = VB_exit(ℓ) ∩ {x+1}

Two solutions to this equation: {x+1} and ∅.
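The simplified equation can be checked mechanically. A tiny Python sketch (names are illustrative, not from the book) tests which subsets of the expression universe satisfy X = X ∩ {x+1}:

```python
# Candidate values for X in the simplified equation X = X ∩ {"x+1"}:
# every subset of the (assumed) expression universe {"x+1", "x>1"}.
candidates = [set(), {"x+1"}, {"x>1"}, {"x+1", "x>1"}]

# keep exactly those X that solve the equation
solutions = [X for X in candidates if X == X & {"x+1"}]
```

Only ∅ and {x+1} survive; the analysis picks the largest, {x+1}, since only it records that x+1 is very busy.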
Live Variables Analysis

A variable is live at the exit from a label if there is a path from the label
to a use of the variable that does not re-define the variable.

The aim of the Live Variables Analysis is to determine:

  for each program point, which variables may be live at the exit from the
  point.

Example (point of interest: the exit from label 1):

  [x:=2]1; [y:=4]2; [x:=1]3; (if [y>x]4 then [z:=y]5 else [z:=y*y]6); [x:=z]7

The analysis enables a transformation into:

  [y:=4]2; [x:=1]3; (if [y>x]4 then [z:=y]5 else [z:=y*y]6); [x:=z]7
Live Variables Analysis – the basic idea

For an assignment x := a whose two successors carry information N1 and N2:

  X = N1 ∪ N2

  N = (X \ {x}) ∪ {all variables of a}
        \ kill /   \______ gen ______/

(The information flows backwards: X combines the two branches, and N is the
information at the entry to the assignment.)
Live Variables Analysis

kill and gen functions:

  kill_LV([x := a]ℓ) = {x}
  kill_LV([skip]ℓ)   = ∅
  kill_LV([b]ℓ)      = ∅

  gen_LV([x := a]ℓ) = FV(a)
  gen_LV([skip]ℓ)   = ∅
  gen_LV([b]ℓ)      = FV(b)

Data flow equations, LV=:

  LV_exit(ℓ)  = ∅                                           if ℓ ∈ final(S⋆)
              = ⋃ {LV_entry(ℓ′) | (ℓ′, ℓ) ∈ flowR(S⋆)}      otherwise

  LV_entry(ℓ) = (LV_exit(ℓ) \ kill_LV(Bℓ)) ∪ gen_LV(Bℓ)     where Bℓ ∈ blocks(S⋆)
Example: [x:=2]1; [y:=4]2; [x:=1]3; (if [y>x]4 then [z:=y]5 else [z:=y*y]6); [x:=z]7

kill and gen functions:

  ℓ   kill_LV(ℓ)   gen_LV(ℓ)
  1   {x}          ∅
  2   {y}          ∅
  3   {x}          ∅
  4   ∅            {x, y}
  5   {z}          {y}
  6   {z}          {y}
  7   {x}          {z}
Example (cont.): [x:=2]1; [y:=4]2; [x:=1]3; (if [y>x]4 then [z:=y]5 else [z:=y*y]6); [x:=z]7

Equations:

  LV_entry(1) = LV_exit(1) \ {x}          LV_exit(1) = LV_entry(2)
  LV_entry(2) = LV_exit(2) \ {y}          LV_exit(2) = LV_entry(3)
  LV_entry(3) = LV_exit(3) \ {x}          LV_exit(3) = LV_entry(4)
  LV_entry(4) = LV_exit(4) ∪ {x, y}       LV_exit(4) = LV_entry(5) ∪ LV_entry(6)
  LV_entry(5) = (LV_exit(5) \ {z}) ∪ {y}  LV_exit(5) = LV_entry(7)
  LV_entry(6) = (LV_exit(6) \ {z}) ∪ {y}  LV_exit(6) = LV_entry(7)
  LV_entry(7) = {z}                       LV_exit(7) = ∅
Example (cont.): [x:=2]1; [y:=4]2; [x:=1]3; (if [y>x]4 then [z:=y]5 else [z:=y*y]6); [x:=z]7

Smallest solution:

  ℓ   LV_entry(ℓ)   LV_exit(ℓ)
  1   ∅             ∅
  2   ∅             {y}
  3   {y}           {x, y}
  4   {x, y}        {y}
  5   {y}           {z}
  6   {y}           {z}
  7   {z}           ∅
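As for Reaching Definitions, the smallest solution can be reproduced by iterating the (backward) equations from the empty sets. A Python sketch (not from the book) for the example program:

```python
# Backward round-robin iteration for Live Variables on the example program
# [x:=2]1; [y:=4]2; [x:=1]3; (if [y>x]4 then [z:=y]5 else [z:=y*y]6); [x:=z]7.

flow = {(1, 2), (2, 3), (3, 4), (4, 5), (4, 6), (5, 7), (6, 7)}
labels = range(1, 8)
final = {7}

kill = {1: {'x'}, 2: {'y'}, 3: {'x'}, 4: set(), 5: {'z'}, 6: {'z'}, 7: {'x'}}
gen = {1: set(), 2: set(), 3: set(), 4: {'x', 'y'}, 5: {'y'}, 6: {'y'}, 7: {'z'}}

entry = {l: set() for l in labels}
exit_ = {l: set() for l in labels}

changed = True
while changed:
    changed = False
    for l in labels:
        # exit information comes from the entries of the successors
        new_exit = (set() if l in final
                    else set().union(*[entry[ln] for (lf, ln) in flow if lf == l]))
        new_entry = (new_exit - kill[l]) | gen[l]
        if (new_exit, new_entry) != (exit_[l], entry[l]):
            exit_[l], entry[l] = new_exit, new_entry
            changed = True
```

The result matches the table: LV_entry(1) = ∅, so x is dead at the entry and [x:=2]1 can be eliminated.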
Why smallest solution?

  (while [x>1]ℓ do [skip]ℓ′); [x:=x+1]ℓ′′

Equations:

  LV_entry(ℓ)   = LV_exit(ℓ) ∪ {x}
  LV_entry(ℓ′)  = LV_exit(ℓ′)
  LV_entry(ℓ′′) = {x}

  LV_exit(ℓ)   = LV_entry(ℓ′) ∪ LV_entry(ℓ′′)
  LV_exit(ℓ′)  = LV_entry(ℓ)
  LV_exit(ℓ′′) = ∅

After some calculations:

  LV_exit(ℓ) = LV_exit(ℓ) ∪ {x}

Many solutions to this equation: any superset of {x}.
Derived Data Flow Information

• Use-Definition chains, or ud chains: each use of a variable is linked to
  all assignments that reach it.

    [x:=0]1; [x:=3]2; (if [z=x]3 then [z:=0]4 else [z:=x]5); [y:=x]6; [x:=y+z]7

• Definition-Use chains, or du chains: each assignment to a variable is
  linked to all uses of it.

    [x:=0]1; [x:=3]2; (if [z=x]3 then [z:=0]4 else [z:=x]5); [y:=x]6; [x:=y+z]7
ud chains

ud : Var⋆ × Lab⋆ → P(Lab⋆) is given by

  ud(x, ℓ′) = {ℓ | def(x, ℓ) ∧ ∃ℓ′′ : (ℓ, ℓ′′) ∈ flow(S⋆) ∧ clear(x, ℓ′′, ℓ′)}
            ∪ {? | clear(x, init(S⋆), ℓ′)}

where

• def(x, ℓ) means that the block ℓ assigns a value to x;

• clear(x, ℓ, ℓ′) means that none of the blocks on a path from ℓ to ℓ′
  contains an assignment to x, but that the block ℓ′ uses x (in a test or on
  the right-hand side of an assignment).
ud chains – an alternative definition

UD : Var⋆ × Lab⋆ → P(Lab⋆) is defined by:

  UD(x, ℓ) = {ℓ′ | (x, ℓ′) ∈ RD_entry(ℓ)}   if x ∈ gen_LV(Bℓ)
           = ∅                              otherwise

One can show that: ud(x, ℓ) = UD(x, ℓ)
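The alternative definition is directly executable once Reaching Definitions has been computed. The sketch below (not from the book; `defs` and `uses` are ad hoc encodings of the assignments and of gen_LV for the example program) first solves RD by iteration and then reads off UD:

```python
# UD(x, ℓ) = {ℓ' | (x, ℓ') ∈ RD_entry(ℓ)} if x ∈ gen_LV(Bℓ), else ∅, for
# [x:=0]1; [x:=3]2; (if [z=x]3 then [z:=0]4 else [z:=x]5); [y:=x]6; [x:=y+z]7.

flow = {(1, 2), (2, 3), (3, 4), (3, 5), (4, 6), (5, 6), (6, 7)}
labels = range(1, 8)
fv = {'x', 'y', 'z'}

defs = {1: 'x', 2: 'x', 4: 'z', 5: 'z', 6: 'y', 7: 'x'}    # variable assigned at ℓ
uses = {3: {'x', 'z'}, 5: {'x'}, 6: {'x'}, 7: {'y', 'z'}}  # gen_LV(Bℓ)

# kill/gen for Reaching Definitions, derived from defs
all_defs = {v: {l for l, w in defs.items() if w == v} | {'?'} for v in fv}
kill = {l: {(defs[l], d) for d in all_defs[defs[l]]} if l in defs else set()
        for l in labels}
gen = {l: {(defs[l], l)} if l in defs else set() for l in labels}

entry = {l: set() for l in labels}
exit_ = {l: set() for l in labels}
changed = True
while changed:
    changed = False
    for l in labels:
        new_entry = ({(v, '?') for v in fv} if l == 1
                     else set().union(*[exit_[lp] for (lp, ln) in flow if ln == l]))
        new_exit = (new_entry - kill[l]) | gen[l]
        if (new_entry, new_exit) != (entry[l], exit_[l]):
            entry[l], exit_[l] = new_entry, new_exit
            changed = True

def UD(x, l):
    """The alternative ud-chain definition, read off RD_entry."""
    return ({lp for (v, lp) in entry[l] if v == x}
            if x in uses.get(l, set()) else set())
```

For instance UD(x, 3) = {2} and UD(z, 3) = {?}, matching the table on the following example slide.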
du chains

du : Var⋆ × Lab⋆ → P(Lab⋆) is given by

  du(x, ℓ) = {ℓ′ | def(x, ℓ) ∧ ∃ℓ′′ : (ℓ, ℓ′′) ∈ flow(S⋆) ∧ clear(x, ℓ′′, ℓ′)}   if ℓ ≠ ?
           = {ℓ′ | clear(x, init(S⋆), ℓ′)}                                        if ℓ = ?

One can show that: du(x, ℓ) = {ℓ′ | ℓ ∈ ud(x, ℓ′)}
Example: [x:=0]1; [x:=3]2; (if [z=x]3 then [z:=0]4 else [z:=x]5); [y:=x]6; [x:=y+z]7

        ud(x,ℓ)   ud(y,ℓ)   ud(z,ℓ)        du(x,ℓ)     du(y,ℓ)   du(z,ℓ)
  ℓ=1   ∅         ∅         ∅              ∅           ∅         ∅
  ℓ=2   ∅         ∅         ∅              {3, 5, 6}   ∅         ∅
  ℓ=3   {2}       ∅         {?}            ∅           ∅         ∅
  ℓ=4   ∅         ∅         ∅              ∅           ∅         {7}
  ℓ=5   {2}       ∅         ∅              ∅           ∅         {7}
  ℓ=6   {2}       ∅         ∅              ∅           {7}       ∅
  ℓ=7   ∅         {6}       {4, 5}         ∅           ∅         ∅
  ℓ=?                                      ∅           ∅         {3}
Theoretical Properties

• Structural Operational Semantics
• Correctness of Live Variables Analysis

PPA Section 2.2
The Semantics

A state is a mapping from variables to integers:

  σ ∈ State = Var → Z

The semantics of arithmetic and boolean expressions:

  A : AExp → (State → Z)    (no errors allowed)
  B : BExp → (State → T)    (no errors allowed)

The transitions of the semantics are of the form

  ⟨S, σ⟩ → σ′    and    ⟨S, σ⟩ → ⟨S′, σ′⟩
Transitions

  ⟨[x := a]ℓ, σ⟩ → σ[x ↦ A⟦a⟧σ]

  ⟨[skip]ℓ, σ⟩ → σ

  ⟨S1, σ⟩ → ⟨S1′, σ′⟩
  ──────────────────────────
  ⟨S1; S2, σ⟩ → ⟨S1′; S2, σ′⟩

  ⟨S1, σ⟩ → σ′
  ──────────────────────
  ⟨S1; S2, σ⟩ → ⟨S2, σ′⟩

  ⟨if [b]ℓ then S1 else S2, σ⟩ → ⟨S1, σ⟩    if B⟦b⟧σ = true
  ⟨if [b]ℓ then S1 else S2, σ⟩ → ⟨S2, σ⟩    if B⟦b⟧σ = false

  ⟨while [b]ℓ do S, σ⟩ → ⟨(S; while [b]ℓ do S), σ⟩    if B⟦b⟧σ = true
  ⟨while [b]ℓ do S, σ⟩ → σ                            if B⟦b⟧σ = false
Example (where σuvw denotes the state mapping x to u, y to v and z to w):

  ⟨[y:=x]1; [z:=1]2; while [y>1]3 do ([z:=z*y]4; [y:=y-1]5); [y:=0]6, σ300⟩
→ ⟨[z:=1]2; while [y>1]3 do ([z:=z*y]4; [y:=y-1]5); [y:=0]6, σ330⟩
→ ⟨while [y>1]3 do ([z:=z*y]4; [y:=y-1]5); [y:=0]6, σ331⟩
→ ⟨[z:=z*y]4; [y:=y-1]5; while [y>1]3 do ([z:=z*y]4; [y:=y-1]5); [y:=0]6, σ331⟩
→ ⟨[y:=y-1]5; while [y>1]3 do ([z:=z*y]4; [y:=y-1]5); [y:=0]6, σ333⟩
→ ⟨while [y>1]3 do ([z:=z*y]4; [y:=y-1]5); [y:=0]6, σ323⟩
→ ⟨[z:=z*y]4; [y:=y-1]5; while [y>1]3 do ([z:=z*y]4; [y:=y-1]5); [y:=0]6, σ323⟩
→ ⟨[y:=y-1]5; while [y>1]3 do ([z:=z*y]4; [y:=y-1]5); [y:=0]6, σ326⟩
→ ⟨while [y>1]3 do ([z:=z*y]4; [y:=y-1]5); [y:=0]6, σ316⟩
→ ⟨[y:=0]6, σ316⟩
→ σ306
Equations and Constraints

Equation system LV=(S⋆):

  LV_exit(ℓ)  = ∅                                           if ℓ ∈ final(S⋆)
              = ⋃ {LV_entry(ℓ′) | (ℓ′, ℓ) ∈ flowR(S⋆)}      otherwise

  LV_entry(ℓ) = (LV_exit(ℓ) \ kill_LV(Bℓ)) ∪ gen_LV(Bℓ)     where Bℓ ∈ blocks(S⋆)

Constraint system LV⊆(S⋆):

  LV_exit(ℓ)  ⊇ ∅                                           if ℓ ∈ final(S⋆)
  LV_exit(ℓ)  ⊇ ⋃ {LV_entry(ℓ′) | (ℓ′, ℓ) ∈ flowR(S⋆)}      otherwise

  LV_entry(ℓ) ⊇ (LV_exit(ℓ) \ kill_LV(Bℓ)) ∪ gen_LV(Bℓ)     where Bℓ ∈ blocks(S⋆)
Lemma. Each solution to the equation system LV=(S⋆) is also a solution to
the constraint system LV⊆(S⋆).

Proof: Trivial.

Lemma. The least solution to the equation system LV=(S⋆) is also the least
solution to the constraint system LV⊆(S⋆).

Proof: Use Tarski's Theorem.

Naive proof: Proceed by contradiction. Suppose some left-hand side is
strictly greater than the right-hand side. Replace the left-hand side by the
right-hand side in the solution, and argue that the result is still a
solution. This establishes the desired contradiction.
Lemma. A solution live to the constraint system is preserved during
computation: if live solves LV⊆ for S in

  ⟨S, σ1⟩ → ⟨S′, σ1′⟩ → ⟨S′′, σ1′′⟩ → ··· → σ1′′′

then live also solves LV⊆ for each of the intermediate statements S′, S′′, ...

Proof: requires a lot of machinery; see the book.
Correctness Relation

σ1 ∼V σ2 means that for all practical purposes the two states σ1 and σ2 are
equal: only the values of the live variables in V matter, and on those the
two states agree.

Example: Consider the statement [x:=y+z]ℓ.

  Let V1 = {y, z}. Then σ1 ∼V1 σ2 means σ1(y) = σ2(y) ∧ σ1(z) = σ2(z).

  Let V2 = {x}. Then σ1 ∼V2 σ2 means σ1(x) = σ2(x).
Correctness Theorem

The relation "∼" is invariant under computation: the live variables for the
initial configuration remain live throughout the computation. If σ1 ∼V σ2 and

  ⟨S, σ1⟩ → ⟨S′, σ1′⟩ → ⟨S′′, σ1′′⟩ → ··· → σ1′′′

then there is a matching computation

  ⟨S, σ2⟩ → ⟨S′, σ2′⟩ → ⟨S′′, σ2′′⟩ → ··· → σ2′′′

with σ1′ ∼V′ σ2′, σ1′′ ∼V′′ σ2′′ and σ1′′′ ∼V′′′ σ2′′′, where

  V    = live_entry(init(S))
  V′   = live_entry(init(S′))
  V′′  = live_entry(init(S′′))
  V′′′ = live_exit(ℓ) for some ℓ ∈ final(S)
Monotone Frameworks

• Monotone and Distributive Frameworks
• Instances of Frameworks
• Constant Propagation Analysis

PPA Section 2.3
The Overall Pattern

Each of the four classical analyses takes the form

  Analysis◦(ℓ) = ι                                      if ℓ ∈ E
               = ⨆ {Analysis•(ℓ′) | (ℓ′, ℓ) ∈ F}        otherwise

  Analysis•(ℓ) = fℓ(Analysis◦(ℓ))

where

• ⨆ is ⋃ or ⋂ (and ⊔ is ∪ or ∩),
• F is either flow(S⋆) or flowR(S⋆),
• E is {init(S⋆)} or final(S⋆),
• ι specifies the initial or final analysis information, and
• fℓ is the transfer function associated with Bℓ ∈ blocks(S⋆).
The Principle: forward versus backward

• The forward analyses take F to be flow(S⋆); then Analysis◦ concerns entry
  conditions and Analysis• concerns exit conditions. The equation system
  presupposes that S⋆ has isolated entries.

• The backward analyses take F to be flowR(S⋆); then Analysis◦ concerns exit
  conditions and Analysis• concerns entry conditions. The equation system
  presupposes that S⋆ has isolated exits.
The Principle: union versus intersection

• When ⨆ is ⋂ we require the greatest sets that solve the equations, and we
  are able to detect properties satisfied by all execution paths reaching
  (or leaving) the entry (or exit) of a label; the analysis is called a
  must-analysis.

• When ⨆ is ⋃ we require the smallest sets that solve the equations, and we
  are able to detect properties satisfied by at least one execution path to
  (or from) the entry (or exit) of a label; the analysis is called a
  may-analysis.
Property Spaces

The property space, L, is used to represent the data flow information, and
the combination operator, ⨆ : P(L) → L, is used to combine information from
different paths.

• L is a complete lattice, that is, a partially ordered set (L, ⊑) such that
  each subset Y has a least upper bound ⨆Y.

• L satisfies the Ascending Chain Condition; that is, each ascending chain
  eventually stabilises (meaning that if (ln)n is such that
  l1 ⊑ l2 ⊑ l3 ⊑ ···, then there exists n such that ln = ln+1 = ···).
Example: Reaching Definitions

• L = P(Var⋆ × Lab⋆) is partially ordered by subset inclusion, so ⊑ is ⊆.

• The least upper bound operation ⨆ is ⋃ and the least element ⊥ is ∅.

• L satisfies the Ascending Chain Condition because Var⋆ × Lab⋆ is finite
  (unlike Var × Lab).
Example: Available Expressions

• L = P(AExp⋆) is partially ordered by superset inclusion, so ⊑ is ⊇.

• The least upper bound operation ⨆ is ⋂ and the least element ⊥ is AExp⋆.

• L satisfies the Ascending Chain Condition because AExp⋆ is finite (unlike
  AExp).
Transfer Functions

The set of transfer functions, F, is a set of monotone functions over L,
meaning that

  l ⊑ l′ implies fℓ(l) ⊑ fℓ(l′)

and furthermore they fulfil the following conditions:

• F contains all the transfer functions fℓ : L → L in question (for ℓ ∈ Lab⋆);
• F contains the identity function;
• F is closed under composition of functions.
Frameworks

A Monotone Framework consists of:

• a complete lattice, L, that satisfies the Ascending Chain Condition; we
  write ⨆ for the least upper bound operator;

• a set F of monotone functions from L to L that contains the identity
  function and that is closed under function composition.

A Distributive Framework is a Monotone Framework where additionally all
functions f in F are required to be distributive:

  f(l1 ⊔ l2) = f(l1) ⊔ f(l2)
Instances

An instance of a Framework consists of:

• the complete lattice, L, of the framework;
• the space of functions, F, of the framework;
• a finite flow, F (typically flow(S⋆) or flowR(S⋆));
• a finite set of extremal labels, E (typically {init(S⋆)} or final(S⋆));
• an extremal value, ι ∈ L, for the extremal labels;
• a mapping, f·, from the labels Lab⋆ to transfer functions in F.
Equations of the instance:

  Analysis◦(ℓ) = ⨆ {Analysis•(ℓ′) | (ℓ′, ℓ) ∈ F} ⊔ ι_E^ℓ

    where ι_E^ℓ = ι if ℓ ∈ E, and ι_E^ℓ = ⊥ if ℓ ∉ E

  Analysis•(ℓ) = fℓ(Analysis◦(ℓ))

Constraints of the instance:

  Analysis◦(ℓ) ⊒ ⨆ {Analysis•(ℓ′) | (ℓ′, ℓ) ∈ F} ⊔ ι_E^ℓ

    where ι_E^ℓ = ι if ℓ ∈ E, and ι_E^ℓ = ⊥ if ℓ ∉ E

  Analysis•(ℓ) ⊒ fℓ(Analysis◦(ℓ))
The Examples Revisited

        Available        Reaching              Very Busy     Live
        Expressions      Definitions           Expressions   Variables

  L     P(AExp⋆)         P(Var⋆ × Lab⋆)        P(AExp⋆)      P(Var⋆)
  ⊑     ⊇                ⊆                     ⊇             ⊆
  ⨆     ⋂                ⋃                     ⋂             ⋃
  ⊥     AExp⋆            ∅                     AExp⋆         ∅
  ι     ∅                {(x,?) | x ∈ FV(S⋆)}  ∅             ∅
  E     {init(S⋆)}       {init(S⋆)}            final(S⋆)     final(S⋆)
  F     flow(S⋆)         flow(S⋆)              flowR(S⋆)     flowR(S⋆)

  F  = {f : L → L | ∃ lk, lg : f(l) = (l \ lk) ∪ lg}
  fℓ : fℓ(l) = (l \ kill(Bℓ)) ∪ gen(Bℓ)   where Bℓ ∈ blocks(S⋆)
Bit Vector Frameworks

A Bit Vector Framework has

• L = P(D) for D finite
• F = {f | ∃ lk, lg : f(l) = (l \ lk) ∪ lg}

Examples:

• Available Expressions
• Live Variables
• Reaching Definitions
• Very Busy Expressions
Lemma: Bit Vector Frameworks are always Distributive Frameworks.

Proof: For ⊔ = ∪:

  f(l1 ⊔ l2) = ((l1 ∪ l2) \ lk) ∪ lg
             = ((l1 \ lk) ∪ (l2 \ lk)) ∪ lg
             = ((l1 \ lk) ∪ lg) ∪ ((l2 \ lk) ∪ lg)
             = f(l1) ⊔ f(l2)

and for ⊔ = ∩:

  f(l1 ⊔ l2) = ((l1 ∩ l2) \ lk) ∪ lg
             = ((l1 \ lk) ∩ (l2 \ lk)) ∪ lg
             = ((l1 \ lk) ∪ lg) ∩ ((l2 \ lk) ∪ lg)
             = f(l1) ⊔ f(l2)

Furthermore:

• id(l) = (l \ ∅) ∪ ∅
• f2(f1(l)) = (((l \ lk1) ∪ lg1) \ lk2) ∪ lg2
            = (l \ (lk1 ∪ lk2)) ∪ ((lg1 \ lk2) ∪ lg2)
• monotonicity follows from distributivity
• P(D) satisfies the Ascending Chain Condition because D is finite
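Both identities in the proof can be checked exhaustively for a small universe D. A Python sketch (not from the book) enumerating all subsets of a three-element D:

```python
from itertools import combinations, product

# all subsets of an assumed three-element universe D
D = ("a", "b", "c")
subsets = [frozenset(c) for r in range(len(D) + 1) for c in combinations(D, r)]

def f(l, lk, lg):
    """A bit-vector transfer function f(l) = (l \\ lk) ∪ lg."""
    return (l - lk) | lg

# f distributes over ∪ for every kill/gen pair and all arguments l1, l2
distributive = all(
    f(l1 | l2, lk, lg) == f(l1, lk, lg) | f(l2, lk, lg)
    for lk, lg, l1, l2 in product(subsets, repeat=4))

# the composition f2 ∘ f1 is again of the bit-vector form, with
# kill = lk1 ∪ lk2 and gen = (lg1 \ lk2) ∪ lg2
composition_ok = all(
    f(f(l, lk1, lg1), lk2, lg2) == f(l, lk1 | lk2, (lg1 - lk2) | lg2)
    for lk1, lg1, lk2, lg2, l in product(subsets, repeat=5))
```

Both checks succeed over all 8^4 and 8^5 combinations, confirming the algebraic steps of the proof on this finite universe.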
The Constant Propagation Framework

An example of a Monotone Framework that is not a Distributive Framework.

The aim of the Constant Propagation Analysis is to determine:

  for each program point, whether or not a variable has a constant value
  whenever execution reaches that point.

Example:

  [x:=6]1; [y:=3]2; while [x>y]3 do ([x:=x-1]4; [z:=y*y]6)

The analysis enables a transformation into:

  [x:=6]1; [y:=3]2; while [x>3]3 do ([x:=x-1]4; [z:=9]6)
Elements of L

  State_CP = ((Var⋆ → Z⊤)⊥, ⊑)

Idea:

• ⊥ is the least element: no information is available.

• σ̂ ∈ Var⋆ → Z⊤ specifies for each variable whether it is constant:
  – σ̂(x) ∈ Z: x is constant and the value is σ̂(x)
  – σ̂(x) = ⊤: x might not be constant
Partial Ordering on L

The partial ordering ⊑ on (Var⋆ → Z⊤)⊥ is defined by

  ∀ σ̂ ∈ (Var⋆ → Z⊤)⊥ :  ⊥ ⊑ σ̂

  ∀ σ̂1, σ̂2 ∈ Var⋆ → Z⊤ :  σ̂1 ⊑ σ̂2  iff  ∀x : σ̂1(x) ⊑ σ̂2(x)

where Z⊤ = Z ∪ {⊤} is partially ordered as follows:

  ∀ z ∈ Z⊤ : z ⊑ ⊤
  ∀ z1, z2 ∈ Z : (z1 ⊑ z2) ⇔ (z1 = z2)
Transfer Functions in F_CP

  F_CP = {f | f is a monotone function on State_CP}

Lemma. Constant Propagation as defined by State_CP and F_CP is a Monotone
Framework.
Instances

Constant Propagation is a forward analysis, so for the program S⋆:

• the flow, F, is flow(S⋆),
• the extremal labels, E, are {init(S⋆)},
• the extremal value, ι_CP, is λx.⊤, and
• the mapping, f·_CP, of labels to transfer functions is as shown next.
Constant Propagation Analysis

  A_CP : AExp → (State_CP → Z⊤⊥)

  A_CP⟦x⟧σ̂          = ⊥ if σ̂ = ⊥, and σ̂(x) otherwise
  A_CP⟦n⟧σ̂          = ⊥ if σ̂ = ⊥, and n otherwise
  A_CP⟦a1 op_a a2⟧σ̂ = A_CP⟦a1⟧σ̂ ôp_a A_CP⟦a2⟧σ̂

Transfer functions fℓ_CP:

  [x := a]ℓ : fℓ_CP(σ̂) = ⊥ if σ̂ = ⊥, and σ̂[x ↦ A_CP⟦a⟧σ̂] otherwise
  [skip]ℓ   : fℓ_CP(σ̂) = σ̂
  [b]ℓ      : fℓ_CP(σ̂) = σ̂
Lemma. Constant Propagation is not a Distributive Framework.

Proof: Consider the transfer function fℓ_CP for [y:=x*x]ℓ.

Let σ̂1 and σ̂2 be such that σ̂1(x) = 1 and σ̂2(x) = −1.

Then σ̂1 ⊔ σ̂2 maps x to ⊤, so fℓ_CP(σ̂1 ⊔ σ̂2) maps y to ⊤.

But both fℓ_CP(σ̂1) and fℓ_CP(σ̂2) map y to 1, so fℓ_CP(σ̂1) ⊔ fℓ_CP(σ̂2)
maps y to 1.
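The counterexample can be run directly. In the Python sketch below (not from the book; `TOP` is an ad hoc stand-in for ⊤, and states are plain dictionaries), the transfer function for [y:=x*x]ℓ loses precision exactly as in the proof:

```python
TOP = object()          # ⊤: "might not be constant"

def join(v1, v2):
    """Least upper bound on Z⊤: equal values stay, different values go to ⊤."""
    return v1 if v1 == v2 else TOP

def f(sigma):
    """Transfer function for [y := x*x]: abstract evaluation of x*x."""
    x = sigma["x"]
    return {**sigma, "y": TOP if x is TOP else x * x}

s1 = {"x": 1, "y": TOP}
s2 = {"x": -1, "y": TOP}
joined = {v: join(s1[v], s2[v]) for v in s1}   # σ̂1 ⊔ σ̂2 maps x to ⊤

lhs = f(joined)["y"]                 # f(σ̂1 ⊔ σ̂2): y is ⊤
rhs = join(f(s1)["y"], f(s2)["y"])   # f(σ̂1) ⊔ f(σ̂2): y is the constant 1
```

Here `lhs` is ⊤ while `rhs` is 1: joining before the transfer function forgets that x*x is 1 in both states, so f(σ̂1 ⊔ σ̂2) is strictly above f(σ̂1) ⊔ f(σ̂2).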
Equation Solving

• The MFP solution, "Maximum" (actually least) Fixed Point:
  – worklist algorithm for Monotone Frameworks

• The MOP solution, "Meet" (actually join) Over all Paths

PPA Section 2.4
The MFP Solution

Idea: iterate until stabilisation.

Worklist Algorithm

Input: an instance (L, F, F, E, ι, f·) of a Monotone Framework
Output: the MFP solution: MFP◦, MFP•

Data structures:

• Analysis: the current analysis result for block entries (or exits).

• The worklist W: a list of pairs (ℓ, ℓ′) indicating that the current
  analysis result has changed at the entry (or exit) to the block ℓ, and
  hence the entry (or exit) information must be recomputed for ℓ′.
Worklist Algorithm

Step 1: Initialisation (of W and Analysis)
  W := nil;
  for all (ℓ, ℓ′) in F do W := cons((ℓ, ℓ′), W);
  for all ℓ in F or E do
    if ℓ ∈ E then Analysis[ℓ] := ι else Analysis[ℓ] := ⊥L;

Step 2: Iteration (updating W and Analysis)
  while W ≠ nil do
    ℓ := fst(head(W)); ℓ′ := snd(head(W)); W := tail(W);
    if fℓ(Analysis[ℓ]) ⋢ Analysis[ℓ′] then
      Analysis[ℓ′] := Analysis[ℓ′] ⊔ fℓ(Analysis[ℓ]);
      for all ℓ′′ with (ℓ′, ℓ′′) in F do W := cons((ℓ′, ℓ′′), W);

Step 3: Presenting the result (MFP◦ and MFP•)
  for all ℓ in F or E do
    MFP◦(ℓ) := Analysis[ℓ];
    MFP•(ℓ) := fℓ(Analysis[ℓ])
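The three steps translate almost line by line into code. The Python sketch below (not from the book; parameter names are ad hoc) implements the worklist algorithm for an arbitrary instance and then runs it on the Reaching Definitions example from earlier in the chapter:

```python
def mfp(bottom, join, leq, flow, extremal, iota, f, labels):
    """Worklist algorithm for an instance (L, F, flow, E, ι, f·) of a
    Monotone Framework; returns the pair (MFP◦, MFP•)."""
    # Step 1: initialisation of the worklist and of Analysis
    analysis = {l: (iota if l in extremal else bottom) for l in labels}
    worklist = list(flow)
    # Step 2: iteration, updating the worklist and Analysis
    while worklist:
        l, lp = worklist.pop()
        new = f(l, analysis[l])
        if not leq(new, analysis[lp]):
            analysis[lp] = join(analysis[lp], new)
            worklist.extend(e for e in flow if e[0] == lp)
    # Step 3: presenting the result
    return dict(analysis), {l: f(l, analysis[l]) for l in labels}

# Instance: Reaching Definitions for
# [x:=5]1; [y:=1]2; while [x>1]3 do ([y:=x*y]4; [x:=x-1]5)
kill = {1: {('x', '?'), ('x', 1), ('x', 5)}, 2: {('y', '?'), ('y', 2), ('y', 4)},
        3: set(), 4: {('y', '?'), ('y', 2), ('y', 4)}, 5: {('x', '?'), ('x', 1), ('x', 5)}}
gen = {1: {('x', 1)}, 2: {('y', 2)}, 3: set(), 4: {('y', 4)}, 5: {('x', 5)}}

mfp_entry, mfp_exit = mfp(
    bottom=frozenset(), join=lambda a, b: a | b, leq=lambda a, b: a <= b,
    flow=[(1, 2), (2, 3), (3, 4), (4, 5), (5, 3)], extremal={1},
    iota=frozenset({('x', '?'), ('y', '?')}),
    f=lambda l, v: frozenset((v - kill[l]) | gen[l]), labels=[1, 2, 3, 4, 5])
```

The returned `mfp_entry`/`mfp_exit` coincide with the smallest-solution table for the example; swapping in ∩ for `join`, ⊇ for `leq` and the full set for `bottom` would give a must-analysis instead.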
Correctness

The worklist algorithm always terminates, and it computes the least (or MFP)
solution to the instance given as input.

Complexity

Suppose that E and F contain at most b ≥ 1 distinct labels, that F contains
at most e ≥ b pairs, and that L has finite height at most h ≥ 1. Count as
basic operations the applications of fℓ, the applications of ⊔, and the
updates of Analysis. Then there will be at most O(e · h) basic operations.

Example: for Reaching Definitions (assuming unique labels) this gives O(b²),
where b is the size of the program: O(h) = O(b) and O(e) = O(b).
The MOP Solution

Idea: propagate analysis information along paths.

Paths

The paths up to but not including ℓ:

  path◦(ℓ) = {[ℓ1, ..., ℓn-1] | n ≥ 1 ∧ ∀i < n : (ℓi, ℓi+1) ∈ F ∧ ℓn = ℓ ∧ ℓ1 ∈ E}

The paths up to and including ℓ:

  path•(ℓ) = {[ℓ1, ..., ℓn] | n ≥ 1 ∧ ∀i < n : (ℓi, ℓi+1) ∈ F ∧ ℓn = ℓ ∧ ℓ1 ∈ E}

Transfer function for a path ℓ⃗ = [ℓ1, ..., ℓn]:

  f_ℓ⃗ = fℓn ∘ ··· ∘ fℓ1 ∘ id
The MOP Solution

The solution up to but not including ℓ:

  MOP◦(ℓ) = ⨆ {f_ℓ⃗(ι) | ℓ⃗ ∈ path◦(ℓ)}

The solution up to and including ℓ:

  MOP•(ℓ) = ⨆ {f_ℓ⃗(ι) | ℓ⃗ ∈ path•(ℓ)}

Precision of the MOP versus MFP solutions

The MFP solution safely approximates the MOP solution: MFP ⊒ MOP
("because" f(x ⊔ y) ⊒ f(x) ⊔ f(y) when f is monotone).

For Distributive Frameworks the MFP and MOP solutions are equal: MFP = MOP
("because" f(x ⊔ y) = f(x) ⊔ f(y) when f is distributive).
Lemma. Consider the MFP and MOP solutions to an instance (L, F, F, E, ι, f·)
of a Monotone Framework; then:

  MFP◦ ⊒ MOP◦   and   MFP• ⊒ MOP•

If the framework is distributive and path◦(ℓ) ≠ ∅ for all ℓ in E and F, then:

  MFP◦ = MOP◦   and   MFP• = MOP•
Decidability of MOP and MFP

The MFP solution is always computable (meaning that it is decidable) because
of the Ascending Chain Condition.

The MOP solution is often uncomputable (meaning that it is undecidable): the
existence of a general algorithm for the MOP solution would imply the
decidability of the Modified Post Correspondence Problem, which is known to
be undecidable.
Lemma. The MOP solution for Constant Propagation is undecidable.

Proof: Let u1, ..., un and v1, ..., vn be strings over the alphabet
{1, ..., 9}; let |u| denote the length of u and let ⟦u⟧ be the natural
number it denotes. The Modified Post Correspondence Problem is to determine
whether or not u_{i1} ··· u_{im} = v_{i1} ··· v_{im} for some sequence
i1, ..., im with i1 = 1. Consider the program:

  x := ⟦u1⟧; y := ⟦v1⟧;
  while [···] do
    (if [···] then x := x * 10^{|u1|} + ⟦u1⟧; y := y * 10^{|v1|} + ⟦v1⟧
     else
     ...
     if [···] then x := x * 10^{|un|} + ⟦un⟧; y := y * 10^{|vn|} + ⟦vn⟧
     else skip);
  [z := abs((x-y) * (x-y))]ℓ

Then MOP•(ℓ) will map z to 1 if and only if the Modified Post Correspondence
Problem has no solution. This is undecidable.
Interprocedural Analysis

• The problem
• MVP: "Meet" over Valid Paths
• Making context explicit
• Context based on call-strings
• Context based on assumption sets

(A restricted treatment; see the book for a more general treatment.)

PPA Section 2.5
The Problem: match entries with exits

(Flow graph for the Fibonacci program: the main call [call fib(x,0,y)]9,10
and the recursive calls [call fib(z-1,u,v)]4,5 and [call fib(z-2,v,v)]6,7
all enter the body of proc fib(val z, u; res v) at is1; the test [z<3]2
branches to [v:=u+1]3 or to the recursive calls; every branch reaches end8,
which must return to the matching return label 10, 5 or 7.)
Preliminaries

Syntax for procedures:

  Programs:     P⋆ ::= begin D⋆ S⋆ end
  Declarations: D  ::= D; D | proc p(val x; res y) isℓn S endℓx
  Statements:   S  ::= ··· | [call p(a, z)]ℓc,ℓr

Example:

  begin proc fib(val z, u; res v) is1
          if [z<3]2 then [v:=u+1]3
          else ([call fib(z-1,u,v)]4,5; [call fib(z-2,v,v)]6,7)
        end8;
        [call fib(x,0,y)]9,10
  end
Flow graphs for procedure calls

If proc p(val x; res y) isℓn S endℓx is in D⋆:

  init([call p(a, z)]ℓc,ℓr)   = ℓc
  final([call p(a, z)]ℓc,ℓr)  = {ℓr}
  blocks([call p(a, z)]ℓc,ℓr) = {[call p(a, z)]ℓc,ℓr}
  labels([call p(a, z)]ℓc,ℓr) = {ℓc, ℓr}
  flow([call p(a, z)]ℓc,ℓr)   = {(ℓc; ℓn), (ℓx; ℓr)}

where

• (ℓc; ℓn) is the flow corresponding to calling a procedure at ℓc and
  entering the procedure body at ℓn, and

• (ℓx; ℓr) is the flow corresponding to exiting a procedure body at ℓx and
  returning to the call at ℓr.
Flow graphs for procedure declarations

For each procedure declaration proc p(val x; res y) is^ℓn S end^ℓx of D⋆:

init(p)   = ℓn
final(p)  = {ℓx}
blocks(p) = {is^ℓn, end^ℓx} ∪ blocks(S)
labels(p) = {ℓn, ℓx} ∪ labels(S)
flow(p)   = {(ℓn, init(S))} ∪ flow(S) ∪ {(ℓ, ℓx) | ℓ ∈ final(S)}

PPA Section 2.5 86 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
Flow graphs for programs

For the program P⋆ = begin D⋆ S⋆ end:

init⋆   = init(S⋆)
final⋆  = final(S⋆)
blocks⋆ = blocks(S⋆) ∪ ⋃{blocks(p) | proc p(val x; res y) is^ℓn S end^ℓx is in D⋆}
labels⋆ = labels(S⋆) ∪ ⋃{labels(p) | proc p(val x; res y) is^ℓn S end^ℓx is in D⋆}
flow⋆   = flow(S⋆) ∪ ⋃{flow(p) | proc p(val x; res y) is^ℓn S end^ℓx is in D⋆}

interflow⋆ = {(ℓc, ℓn, ℓx, ℓr) | proc p(val x; res y) is^ℓn S end^ℓx is in D⋆
                                 and [call p(a, z)]^ℓc_ℓr occurs in P⋆}

PPA Section 2.5 87 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
Example:

begin proc fib(val z, u; res v) is^1
        if [z<3]^2 then [v := u+1]^3
        else ([call fib(z-1, u, v)]^4_5; [call fib(z-2, v, v)]^6_7)
      end^8;
      [call fib(x, 0, y)]^9_10
end

We have

flow⋆ = {(1,2), (2,3), (3,8), (2,4), (4;1), (8;5), (5,6), (6;1), (8;7), (7,8), (9;1), (8;10)}

interflow⋆ = {(9,1,8,10), (4,1,8,5), (6,1,8,7)}

and init⋆ = 9 and final⋆ = {10}.

PPA Section 2.5 88 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
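The sets above can be assembled mechanically from the clauses of the two previous slides; a sketch (variable names are mine, labels follow the slide):

```python
# flow inside fib's body: entry edge, test, the two branches, sequencing,
# and the edges into the exit label end^8
proc_flow = {(1, 2), (2, 3), (2, 4), (5, 6), (3, 8), (7, 8)}

calls = [(4, 5), (6, 7), (9, 10)]   # (lc, lr) of the three calls to fib
ln, lx = 1, 8                       # fib's entry and exit labels

flow_star = set(proc_flow)
interflow_star = set()
for lc, lr in calls:
    flow_star |= {(lc, ln), (lx, lr)}     # add (lc; ln) and (lx; lr)
    interflow_star.add((lc, ln, lx, lr))
```

Running this reproduces exactly the twelve edges of flow⋆ and the three tuples of interflow⋆ listed above.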
A naive formulation

Treat the three kinds of flow in the same way:

flow        treat as
(ℓ1, ℓ2)    (ℓ1, ℓ2)
(ℓc; ℓn)    (ℓc, ℓn)
(ℓx; ℓr)    (ℓx, ℓr)

Equation system:

A•(ℓ) = fℓ(A◦(ℓ))
A◦(ℓ) = ⊔{A•(ℓ′) | (ℓ′, ℓ) ∈ F or (ℓ′; ℓ) ∈ F} ⊔ ι_E^ℓ

where (ℓ′; ℓ) ranges over both call and return edges, and ι_E^ℓ is ι if ℓ ∈ E and ⊥ otherwise.

But there is no matching between entries and exits.

PPA Section 2.5 89 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
MVP: “Meet” over Valid Paths — Complete Paths

We need to match procedure entries and exits: a complete path from ℓ1 to ℓ2 in P⋆ has proper nesting of procedure entries and exits, and a procedure returns to the point where it was called:

CP_{ℓ1,ℓ2} → ℓ1                          whenever ℓ1 = ℓ2
CP_{ℓ1,ℓ3} → ℓ1, CP_{ℓ2,ℓ3}              whenever (ℓ1, ℓ2) ∈ flow⋆
CP_{ℓc,ℓ}  → ℓc, CP_{ℓn,ℓx}, CP_{ℓr,ℓ}   whenever P⋆ contains [call p(a, z)]^ℓc_ℓr
                                          and proc p(val x; res y) is^ℓn S end^ℓx

More generally: whenever (ℓc, ℓn, ℓx, ℓr) is an element of interflow⋆ (or interflow⋆^R for backward analyses); see the book.

PPA Section 2.5 90 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
Valid Paths

A valid path starts at the entry node init⋆ of P⋆; all the procedure exits match the procedure entries, but some procedures might be entered but not yet exited:

VP⋆        → VP_{init⋆,ℓ}                whenever ℓ ∈ Lab⋆
VP_{ℓ1,ℓ2} → ℓ1                          whenever ℓ1 = ℓ2
VP_{ℓ1,ℓ3} → ℓ1, VP_{ℓ2,ℓ3}              whenever (ℓ1, ℓ2) ∈ flow⋆
VP_{ℓc,ℓ}  → ℓc, CP_{ℓn,ℓx}, VP_{ℓr,ℓ}   whenever P⋆ contains [call p(a, z)]^ℓc_ℓr
                                          and proc p(val x; res y) is^ℓn S end^ℓx
VP_{ℓc,ℓ}  → ℓc, VP_{ℓn,ℓ}               whenever P⋆ contains [call p(a, z)]^ℓc_ℓr
                                          and proc p(val x; res y) is^ℓn S end^ℓx

PPA Section 2.5 91 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
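The call/return discipline of valid paths is essentially a stack check, which can be sketched as follows (a simplification of the grammar above, with my own helper names): call edges push the return edge they expect, return edges must match the most recent pending call, and pending calls may remain at the end.

```python
def respects_valid_path(edges, interflow):
    """Check the call/return matching of a path given as a list of edges."""
    push = {(lc, ln): (lx, lr) for (lc, ln, lx, lr) in interflow}
    rets = {(lx, lr) for (_, _, lx, lr) in interflow}
    stack = []
    for e in edges:
        if e in push:                 # a call edge (lc; ln)
            stack.append(push[e])     # remember the matching (lx; lr)
        elif e in rets:               # a return edge (lx; lr)
            if not stack or stack.pop() != e:
                return False          # returned to the wrong call site
    return True                       # pending calls on the stack are allowed
```

For the fib program, the path 9, 1, 2, 3, 8, 10 is valid, while a path that enters via (9;1) but leaves via (8;5) is not.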
The MVP solution

MVP◦(ℓ) = ⊔{f_ℓ⃗(ι) | ℓ⃗ ∈ vpath◦(ℓ)}
MVP•(ℓ) = ⊔{f_ℓ⃗(ι) | ℓ⃗ ∈ vpath•(ℓ)}

where

vpath◦(ℓ) = {[ℓ1, ···, ℓ_{n−1}] | n ≥ 1 ∧ ℓn = ℓ ∧ [ℓ1, ···, ℓn] is a valid path}
vpath•(ℓ) = {[ℓ1, ···, ℓn] | n ≥ 1 ∧ ℓn = ℓ ∧ [ℓ1, ···, ℓn] is a valid path}

The MVP solution may be undecidable for lattices satisfying the Ascending Chain Condition, just as was the case for the MOP solution.

PPA Section 2.5 92 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
Making Context Explicit

Starting point: an instance (L, F, F, E, ι, f·) of a Monotone Framework where

• the analysis is forwards, i.e. F = flow⋆ and E = {init⋆};
• the complete lattice is a powerset, i.e. L = P(D);
• the transfer functions in F are completely additive; and
• each fℓ is given by fℓ(Y) = ⋃{φℓ(d) | d ∈ Y} where φℓ : D → P(D).

(A restricted treatment; see the book for a more general treatment.)

PPA Section 2.5 93 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
An embellished monotone framework

• L′ = P(Δ × D);
• the transfer functions in F′ are completely additive; and
• each f′ℓ is given by f′ℓ(Z) = ⋃{{δ} × φℓ(d) | (δ, d) ∈ Z}.

Ignoring procedures, the data flow equations will take the form:

A•(ℓ) = f′ℓ(A◦(ℓ))   for all labels that do not label a procedure call
A◦(ℓ) = ⊔{A•(ℓ′) | (ℓ′, ℓ) ∈ F or (ℓ′; ℓ) ∈ F} ⊔ ι′_E^ℓ   for all labels (including those that label procedure calls)

PPA Section 2.5 94 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
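The lifting f′ℓ(Z) = ⋃{{δ} × φℓ(d) | (δ, d) ∈ Z} can be sketched directly: apply the context-free φℓ pointwise and leave each element's context δ unchanged. Function names are mine.

```python
def lift(phi):
    """Lift phi_l : D -> P(D) to f'_l : P(Delta x D) -> P(Delta x D)."""
    def f_prime(Z):
        return {(delta, d2) for (delta, d) in Z for d2 in phi(d)}
    return f_prime
```

Because f′ℓ is built elementwise it is completely additive by construction: f′ℓ(Z1 ∪ Z2) = f′ℓ(Z1) ∪ f′ℓ(Z2).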
Example: Detection of Signs Analysis as a Monotone Framework:

(L_sign, F_sign, F, E, ι_sign, f·_sign) where Sign = {-, 0, +} and L_sign = P(Var⋆ → Sign)

The transfer function f_ℓ^sign associated with the assignment [x := a]^ℓ is

f_ℓ^sign(Y) = ⋃{φ_ℓ^sign(σ^sign) | σ^sign ∈ Y}

where Y ⊆ Var⋆ → Sign and

φ_ℓ^sign(σ^sign) = {σ^sign[x ↦ s] | s ∈ A_sign[[a]](σ^sign)}

PPA Section 2.5 95 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
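A sketch of this transfer function, representing an abstract state σ^sign as a frozenset of (variable, sign) pairs so states can live inside Python sets; eval_a plays the role of A_sign[[a]] and is supplied by the caller. All names are illustrative.

```python
def phi_sign(x, eval_a, sigma):
    """phi_l(sigma) = { sigma[x -> s] | s in A_sign[[a]](sigma) }."""
    env = dict(sigma)
    return {frozenset({**env, x: s}.items()) for s in eval_a(env)}

def f_sign(x, eval_a, Y):
    """f_l(Y) = union of phi_l(sigma) over sigma in Y."""
    out = set()
    for sigma in Y:
        out |= phi_sign(x, eval_a, sigma)
    return out
```

For a = x + 1, a toy abstract evaluator returns {+} when x is 0 or +, and the whole of {-, 0, +} when x is - (the sum of a negative and a positive number can have any sign).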
Example (cont.): Detection of Signs Analysis as an embellished monotone framework

L′_sign = P(Δ × (Var⋆ → Sign))

The transfer function associated with [x := a]^ℓ will now be:

f′_ℓ^sign(Z) = ⋃{{δ} × φ_ℓ^sign(σ^sign) | (δ, σ^sign) ∈ Z}

PPA Section 2.5 96 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
Transfer functions for procedure declarations

Procedure declarations proc p(val x; res y) is^ℓn S end^ℓx have two transfer functions, one for entry and one for exit:

f_ℓn, f_ℓx : P(Δ × D) → P(Δ × D)

For simplicity we take both to be the identity function (thus incorporating procedure entry as part of procedure call, and procedure exit as part of procedure return).

PPA Section 2.5 97 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
Transfer functions for procedure calls

Procedure calls [call p(a, z)]^ℓc_ℓr have two transfer functions.

For the procedure call:

f¹_ℓc : P(Δ × D) → P(Δ × D)

and it is used in the equation:

A•(ℓc) = f¹_ℓc(A◦(ℓc))   for all procedure calls [call p(a, z)]^ℓc_ℓr

For the procedure return:

f²_{ℓc,ℓr} : P(Δ × D) × P(Δ × D) → P(Δ × D)

and it is used in the equation:

A•(ℓr) = f²_{ℓc,ℓr}(A◦(ℓc), A◦(ℓr))   for all procedure calls [call p(a, z)]^ℓc_ℓr

(Note that A◦(ℓr) will equal A•(ℓx) for the relevant procedure exit.)

PPA Section 2.5 98 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
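The two-function shape of a call can be sketched as follows. This is only the plumbing: bind and combine stand for whatever a concrete analysis does at call and return (parameter passing, restoring caller data), and all names are mine, not the book's.

```python
def make_call_transfers(bind, combine):
    """Build (f1, f2) for a call site from two elementwise operations.

    bind(delta, d)    : transform one (context, data) pair entering the call
    combine(zd, zr)   : merge a caller pair zd from Z with a returned pair
                        zr from Z' -- f2 ranges over all combinations
    """
    def f1(Z):
        return {bind(delta, d) for (delta, d) in Z}

    def f2(Z, Z_prime):
        return {combine(zd, zr) for zd in Z for zr in Z_prime}

    return f1, f2
```

A toy instantiation: contexts are tuples, calling extends the context, and returning keeps the caller's context but takes the callee's data.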
Procedure calls and returns

[Figure: at a call [call p(a, z)]^ℓc_ℓr, the state Z at ℓc is mapped by f¹_ℓc(Z) into the body of proc p(val x; res y) at is^ℓn; the state Z′ reaching end^ℓx flows back to ℓr, where it is combined with the original Z as f²_{ℓc,ℓr}(Z, Z′).]

PPA Section 2.5 99 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)
Variation 1: ignore calling context upon return

[Figure: as before, but the combination at the return point uses only the state Z′ flowing out of end^ℓx, not the state Z at the call site.]

f¹_ℓc(Z) = ⋃{{δ′} × φ¹_ℓc(d) | (δ, d) ∈ Z ∧ δ′ = ··· δ ··· d ··· Z ···}
f²_{ℓc,ℓr}(Z, Z′) = f²_ℓr(Z′)

PPA Section 2.5 100 © F.Nielson & H.Riis Nielson & C.Hankin (May 2005)