Ackermann-Hardness for Lossy Counter Machines (and Reset Petri Nets)
Philippe Schnoebelen
LSV, CNRS & ENS Cachan (1-year visitor at Oxford)
QM EECS–TCS Seminar, London, June 20th 2012
Part I: Lossy counter machines
COUNTER MACHINES

Finite-state control + a finite number of "counters" (say m) + simple instructions and tests.

[Figure: a 3-counter machine with locations ℓ_0, ℓ_1, ℓ_2, ℓ_3, edges labelled "c_1++", "c_2>0? c_2--", "c_3=0?", and current counter values c_1 = 1, c_2 = 4, c_3 = 0.]

Operational semantics:
– Configurations: Conf ≝ Loc × ℕ^C = {s, t, ...}, e.g., s_0 = (ℓ_0, 1, 4, 0)
– Steps: (ℓ_0, 1, 4, 0) → (ℓ_1, 2, 4, 0) → (ℓ_2, 2, 3, 0) → (ℓ_3, 2, 3, 0) → ···

A well-known model, Turing-powerful as soon as there are 2 counters.
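A minimal sketch (Python, not from the original slides) of this machine and its reliable steps; the instruction names (inc, dec, iszero) and the dictionary encoding of the program are illustrative assumptions:

# A configuration is (location, counters), with counters a tuple of naturals.
PROGRAM = {
    # location: (instruction, counter index, target location)
    "l0": ("inc",    0, "l1"),   # c_1++
    "l1": ("dec",    1, "l2"),   # c_2>0? c_2--
    "l2": ("iszero", 2, "l3"),   # c_3=0?
}

def reliable_step(config):
    """Return the reliable successor of config, or None if no step is enabled."""
    loc, counters = config
    if loc not in PROGRAM:
        return None
    op, i, target = PROGRAM[loc]
    c = list(counters)
    if op == "inc":
        c[i] += 1
    elif op == "dec":
        if c[i] == 0:
            return None          # the test c_i>0 fails: no step
        c[i] -= 1
    elif op == "iszero":
        if c[i] != 0:
            return None          # the test c_i=0 fails: no step
    return (target, tuple(c))

# Reproduces the run on the slide: (l0,1,4,0) -> (l1,2,4,0) -> (l2,2,3,0) -> (l3,2,3,0)
s = ("l0", (1, 4, 0))
while s is not None:
    print(s)
    s = reliable_step(s)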
LCM = LOSSY COUNTER MACHINES

LCM = counter machines with unreliability: "counters decrease nondeterministically" [R. Mayr, TCS 2003].

A (weaker) computational model that is useful, e.g., for logics like XPath or LTL+data. See the decidability survey in [S., RP 2010].

Semantics.
– Reliable steps: s →_rel t, as above.
– Lossy steps: s → t  ⇔def  s ≥ s′ →_rel t′ ≥ t for some s′ and t′,
  where s = (ℓ, a_1, ..., a_m) ≤ (ℓ′, b_1, ..., b_m) = s′  ⇔def  ℓ = ℓ′ ∧ a_1 ≤ b_1 ∧ ... ∧ a_m ≤ b_m.

Prop. [Monotony] s →⁺ t implies s′ →⁺ t′ for all s′ ≥ s and t′ ≤ t.

NB. (Conf, ≤) is a well-quasi-ordering (a WQO), hence LCMs are well-structured.
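A sketch of the lossy semantics computed directly from this definition, assuming some reliable successor function is given (e.g. the reliable_step of the previous sketch); the helper names are assumptions, not the slides':

from itertools import product

def below(config):
    """All configurations <= config: same location, pointwise-smaller counter values."""
    loc, counters = config
    return [(loc, tuple(v)) for v in product(*[range(c + 1) for c in counters])]

def lossy_successors(config, reliable_step):
    """s -> t  iff  s >= s' ->_rel t' >= t for some s', t' (the definition on this slide)."""
    succs = set()
    for s_prime in below(config):          # lose counter values before the reliable step ...
        t_prime = reliable_step(s_prime)
        if t_prime is not None:
            succs.update(below(t_prime))   # ... and possibly lose more afterwards
    return succs

Note that the set of lossy successors of a configuration is finite, since counters can only shrink below their current values.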
Part II: Well-quasi-orderings and the length of bad sequences
WQO: WELL-QUASI-ORDERINGS

(X, ≤) is a well-quasi-ordering (a WQO) if every infinite sequence x_0, x_1, x_2, ... over X contains an increasing pair x_i ≤ x_j (for some i < j).

Examples.
1. (ℕ^k, ≤_prod) is a WQO (Dickson's Lemma), where, e.g., ⟨3,2,1⟩ ≤ ⟨5,2,2⟩ but ⟨1,2,3⟩ ≰ ⟨5,2,2⟩.
2. (Σ*, ⊑) is a WQO (Higman's Lemma), where, e.g., abc ⊑ bacbc but cba ⋢ bacbc.

Many other examples: (Conf, ≤) for LCMs, finite trees with tree embedding (Kruskal's Theorem), graphs ordered by the minor relation (Robertson-Seymour Theorem), ...

Systems whose steps are monotonic w.r.t. a WQO on configurations, called "well-structured systems", enjoy generic decidability results [Finkel & S., TCS 2001].

My current research program: algorithmic aspects of WQO theory & the complexity of WQO-based algorithms.
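A small illustrative sketch (not from the slides) of the two example orderings, the product ordering on ℕ^k and the scattered-subword embedding:

def leq_prod(x, y):
    """Product (pointwise) ordering on N^k."""
    return len(x) == len(y) and all(a <= b for a, b in zip(x, y))

def embeds(u, v):
    """u embeds into v as a scattered subword (Higman's ordering)."""
    rest = iter(v)
    return all(letter in rest for letter in u)   # match the letters of u in v, left to right

# Dickson: <3,2,1> <= <5,2,2>  but  <1,2,3> is not <= <5,2,2>
assert leq_prod((3, 2, 1), (5, 2, 2)) and not leq_prod((1, 2, 3), (5, 2, 2))
# Higman: abc embeds in bacbc  but  cba does not
assert embeds("abc", "bacbc") and not embeds("cba", "bacbc")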
LENGTH OF BAD SEQUENCES

Def. A sequence x_0, x_1, ... over X is bad  ⇔def  there is no increasing pair x_i ≤ x_j with i < j.

Now: over a WQO, a bad sequence is necessarily finite.
Complexity upper bounds ≃ "how long can a bad sequence be?"

In general, bad sequences over a given WQO can be arbitrarily long. However, controlled bad sequences cannot:

Def. x_0, x_1, ... is (g, n)-controlled  ⇔def  |x_i| ≤ g^i(n) for all i.

Length Function Theorems are results of the form "any (g, n)-controlled bad sequence x_0, x_1, ..., x_l over X has length l ≤ L_{X,g}(n)", for some bounding function L_{X,g}.
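A sketch that checks the two definitions on a concrete sequence over ℕ²; taking |x| to be the maximum entry of x and g = Succ are illustrative choices, not fixed by the slides:

def leq_prod(x, y):
    return all(a <= b for a, b in zip(x, y))

def is_bad(seq):
    """No increasing pair x_i <= x_j with i < j."""
    return not any(leq_prod(seq[i], seq[j])
                   for i in range(len(seq)) for j in range(i + 1, len(seq)))

def is_controlled(seq, g, n):
    """|x_i| <= g^i(n) for every i, with |x| = max entry of x."""
    bound = n
    for x in seq:
        if max(x) > bound:
            return False
        bound = g(bound)
    return True

succ = lambda k: k + 1
example = [(1, 1), (0, 2), (3, 0), (2, 0), (1, 0), (0, 1), (0, 0)]
print(is_bad(example), is_controlled(example, succ, 1))   # True True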
THE FAST-GROWING HIERARCHY
A.k.a. the (Extended) Grzegorczyk Hierarchy.

For α = 0, 1, 2, ... define F_α : ℕ → ℕ by:
  F_0(n) ≝ n + 1                                                       (D1)
  F_{α+1}(n) ≝ F_α^{n+1}(n) = F_α(F_α(... F_α(n) ...))   (n+1 times)    (D2)
  F_ω(n) ≝ F_n(n) ≃ Ackermann(n)                                       (D3)

This yields: F_1(n) = 2n + 1, F_2(n) = (n+1)·2^{n+1} − 1, F_3(n) > 2^2^···^2 (a tower of n 2's), and F_4 is impossible to grasp intuitively (at least for me).

Length Function Theorem for ℕ^k. [LICS 2011, ICALP 2011]
For primitive-recursive g, the length of (g, n)-controlled bad sequences over (ℕ^k, ≤) is in F_{k+O(1)} (and in F_k for small g).
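A sketch of the finite levels of the hierarchy and the diagonalisation F_ω, directly following (D1)-(D3); only tiny arguments are feasible, since already F_3(2) is astronomically large:

def F(alpha, n):
    """F_alpha(n) for a finite level alpha."""
    if alpha == 0:
        return n + 1                       # (D1)
    for _ in range(n + 1):                 # (D2): iterate F_{alpha-1} exactly n+1 times
        n = F(alpha - 1, n)
    return n

def F_omega(n):
    return F(n, n)                         # (D3): diagonalisation, roughly Ackermann(n)

# Sanity checks against the closed forms on this slide:
assert all(F(1, n) == 2*n + 1 for n in range(6))
assert all(F(2, n) == (n + 1) * 2**(n + 1) - 1 for n in range(6))
print(F_omega(2))   # F_2(2) = 23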
Part III: Upper bounds for LCMs
DECIDING TERMINATION FOR LCMs

(Non-)Termination. There is an infinite run s_init = s_0 → s_1 → s_2 → ··· iff there is a loop s_init = s_0 → ··· → s_k → ··· → s_n = s_k.
Hence termination is co-r.e. for LCMs.

Furthermore. There is a loop from s_init iff there is a loop that is a bad sequence (up to s_{n−1}).
Proof. Assume a length-n loop has an increasing pair s_i ≤ s_j with i < j < n. Then s_{j−1} → s_j can be replaced by the lossy step s_{j−1} → s′_j ≝ s_i, yielding the shorter loop s_i → ··· → s_{j−1} → s_i. Thus the shortest loop has no increasing pair.

Furthermore. Since s → t necessarily implies |t| ≤ |s| + 1, any run is Succ-controlled.
Hence n ≤ L_{A,Succ}(|s_init|) for A ≝ Loc × ℕ^{|C|}, i.e., |Loc| disjoint copies of ℕ^m.

Cor. Termination of LCMs can be decided with complexity in F_ω, and in F_m when we fix |C| = m.
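A sketch of the resulting procedure: search for a loop among lossy runs that are bad sequences (the shortest witness loop is bad, and Succ-control plus the Length Function Theorem make the search space finite). A function mapping a configuration to its finite set of lossy successors is assumed, e.g. the lossy_successors sketched earlier, specialised to a fixed machine:

def leq_conf(s, t):
    """(l, a) <= (l', b): same location and pointwise-smaller counters."""
    return s[0] == t[0] and all(x <= y for x, y in zip(s[1], t[1]))

def has_infinite_run(s_init, lossy_successors):
    """True iff some lossy run from s_init can be prolonged into an infinite one."""
    def dfs(run):
        for t in lossy_successors(run[-1]):
            if t in run:                               # loop s_k -> ... -> s_n = s_k: non-termination
                return True
            if any(leq_conf(s, t) for s in run):       # extending would break badness: prune,
                continue                               # a shorter bad witness exists anyway
            if dfs(run + [t]):
                return True
        return False
    return dfs([s_init])

# The explored runs are exactly the bad ones, finitely many by the Length Function Theorem,
# so the search terminates; termination of the LCM is the negation of the answer.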
DECIDING REACHABILITY FOR LCMs

The same ideas work for reachability: "is there a run from s_init to s_goal?"

Proof. If a run s_init = s_0 → s_1 → ··· → s_n = s_goal has a decreasing pair s_i ≥ s_j for 0 < i < j, it can be shortened to s_0 → ··· → s_{i−1} → s_j → ··· → s_n.

Cor. If s_goal can be reached from s_init, this can be achieved via a run that is a (reversed) bad sequence.

But. How is the reversed run g-controlled, and for which g?
Prop. In the shortest run, |s_i| ≤ |s_{i+1}| + 1 for all 0 < i < n.

Cor. Reachability in LCMs can be decided with complexity in F_ω, or F_m (same as Termination).

NB. The generic technique extends to other problems/models.
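A sketch of the corresponding search: explore only runs without a decreasing pair, since a shortest witness run is a reversed bad sequence and such runs are finitely many. As before, the lossy successor function and the configuration encoding are assumptions carried over from the earlier sketches:

def geq_conf(s, t):
    return s[0] == t[0] and all(x >= y for x, y in zip(s[1], t[1]))

def reachable(s_init, s_goal, lossy_successors):
    if s_init == s_goal:
        return True
    def dfs(run):
        for t in lossy_successors(run[-1]):
            if t == s_goal:                            # check the goal before any pruning
                return True
            if any(geq_conf(s, t) for s in run):       # decreasing pair: a shorter witness exists
                continue
            if dfs(run + [t]):
                return True
        return False
    return dfs([s_init])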
Part IV: Lower Bounds via Simulation of Fast-Growing Functions
PROBLEM STATEMENT

We have (rather disgusting) upper bounds on the complexity of verification for lossy counter machines. Do we have matching lower bounds?

Answer. Unfortunately yes (see the rest of this talk).
NB. We mean lower bounds on the decision problems, not just on the simple algorithms we just saw.

Reduction strategy for proving lower bounds for lossy systems:
1. Compute fast-growing functions unreliably: the Hardy hierarchy.
2. Use this as an unreliable computational resource.
3. "Check" at the end that nothing was lost.
4. This requires computing, unreliably, the inverses of the fast-growing functions.
FAST-GROWING VS. HARDY HIERARCHY

  F_0(n) ≝ n + 1                                         H^0(n) ≝ n
  F_{α+1}(n) ≝ F_α^{n+1}(n) = F_α(F_α(... F_α(n) ...))    H^{α+1}(n) ≝ H^α(n + 1)
  F_λ(n) ≝ F_{λ_n}(n)                                     H^λ(n) ≝ H^{λ_n}(n)

Prop. H^{ω^α}(n) = F_α(n) for all α and n.

NB. H^α(n) can be evaluated by transforming a pair
  ⟨α, n⟩ = ⟨α_0, n_0⟩ →_H ⟨α_1, n_1⟩ →_H ⟨α_2, n_2⟩ →_H ··· →_H ⟨α_k, n_k⟩
with α_0 > α_1 > α_2 > ··· until eventually α_k = 0 and n_k = H^α(n).   % tail recursion!!

Below we compute fast-growing functions and their inverses by encoding ⟨α, n⟩ →_H ⟨α′, n′⟩ and ⟨α′, n′⟩ →_H⁻¹ ⟨α, n⟩.
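A sketch (not the slides' machine) of this tail-recursive evaluation for α < ω^ω, with α in Cantor normal form given as a coefficient list [a_m, ..., a_0], together with a small check of the Prop. above against the finite levels F_α:

def hardy(a, n):
    """H^alpha(n) for alpha = omega^m*a_m + ... + omega^0*a_0, encoded as [a_m, ..., a_0]."""
    a = list(a)
    while any(a):                               # rewrite <alpha_i, n_i> until alpha_k = 0
        if a[-1] > 0:                           # successor: H^{alpha+1}(n) = H^alpha(n+1)
            a[-1] -= 1
            n += 1
        else:                                   # limit: H^lambda(n) = H^{lambda_n}(n)
            k = max(i for i, c in enumerate(a) if c > 0)
            a[k] -= 1                           # (gamma + omega^{j+1})_n = gamma + omega^j*(n+1)
            a[k + 1] = n + 1
    return n

def F(alpha, n):                                # finite levels of the fast-growing hierarchy
    if alpha == 0:
        return n + 1
    for _ in range(n + 1):
        n = F(alpha - 1, n)
    return n

# Prop: H^{omega^alpha}(n) = F_alpha(n); here omega^k is encoded as [1] followed by k zeros.
assert all(hardy([1] + [0] * k, n) == F(k, n) for k in range(3) for n in range(5))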
M_H: AN LCM WEAKLY COMPUTING →_H, FOR α < ω^ω

Write α in CNF with coefficients: α = ω^m·a_m + ω^{m−1}·a_{m−1} + ··· + ω^0·a_0.
The encoding of α is [a_m, ..., a_0] ∈ ℕ^{m+1}.

  [a_m, ..., a_0 + 1], n  →_H  [a_m, ..., a_0], n + 1                                        % H^{α+1}(n) = H^α(n+1)
  [a_m, ..., a_{k+1} + 1, 0, 0, ..., 0], n  →_H  [a_m, ..., a_{k+1}, n + 1, 0, ..., 0], n    % H^λ(n) = H^{λ_n}(n)

Recall (γ + ω^{k+1})_n = γ + ω^k·(n + 1).
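A sketch of a single →_H step on this encoding. Each rule only moves values between the counters a_m, ..., a_0 and n (decrement one, increment or transfer into another) plus zero-tests on the low coefficients, which is why it can be implemented with counter-machine instructions; the Python below is only an illustrative model of the rules, not the actual machine M_H:

def step_H(a, n):
    """One ->_H step on the pair ([a_m, ..., a_0], n); requires alpha > 0."""
    a = list(a)
    if a[-1] > 0:                       # [..., a_0 + 1], n  ->_H  [..., a_0], n + 1
        a[-1] -= 1
        return a, n + 1
    k = max(i for i, c in enumerate(a) if c > 0)   # lowest nonzero coefficient: a limit ordinal
    a[k] -= 1                           # [..., a_{k+1}+1, 0, ..., 0], n ->_H [..., a_{k+1}, n+1, 0, ..., 0], n
    a[k + 1] = n + 1
    return a, n

# Iterated until the ordinal reaches 0, this evaluates H, e.g. H^omega(3) from ([1, 0], 3):
a, n = [1, 0], 3
while any(a):
    a, n = step_H(a, n)
print(n)   # 7 = F_1(3)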
M_{H⁻¹}: AN LCM WEAKLY COMPUTING →_H⁻¹, FOR α < ω^ω

  [a_m, ..., a_0], n + 1  →_H⁻¹  [a_m, ..., a_0 + 1], n                                        % H^{α+1}(n) = H^α(n+1)
  [a_m, ..., a_{k+1}, n + 1, 0, ..., 0], n  →_H⁻¹  [a_m, ..., a_{k+1} + 1, 0, 0, ..., 0], n    % H^λ(n) = H^{λ_n}(n)

Prop. [Robustness] a ≤ a′ and n ≤ n′ imply H^{[a]}(n) ≤ H^{[a′]}(n′).
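A sketch of the inverse rewriting, which undoes →_H steps and is used at the end of the reduction to check that nothing was lost. Which rule to apply (and, for the limit rule, at which position) is a nondeterministic choice; the function and parameter names are again assumptions:

def step_H_inv(a, n, k=None):
    """One ->_H^{-1} step.  With k = None, undo a successor step; with k = j, undo a limit
    step by folding [..., a_{j+1}, n+1, 0, ..., 0] back into [..., a_{j+1}+1, 0, ..., 0]."""
    a = list(a)
    if k is None:                       # [..., a_0], n + 1  ->_H^{-1}  [..., a_0 + 1], n
        assert n > 0
        a[-1] += 1
        return a, n - 1
    assert a[k + 1] == n + 1 and all(c == 0 for c in a[k + 2:])
    a[k + 1] = 0                        # [..., a_{k+1}, n+1, 0, ...], n ->_H^{-1} [..., a_{k+1}+1, 0, ...], n
    a[k] += 1
    return a, n

# Undoing a successor step, and the limit step ([1,0], 3) ->_H ([0,4], 3) of the previous sketch:
assert step_H_inv([0, 3], 4) == ([0, 4], 3)
assert step_H_inv([0, 4], 3, k=0) == ([1, 0], 3)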
COUNTER MACHINES ON A BUDGET

M^b is M extended with an extra "budget" counter B that pays for every increment and is refunded by every decrement. This ensures:
1. M^b ⊢ (ℓ, B, a) →*_rel (ℓ′, B′, a′) implies B + |a| = B′ + |a′|.
2. M^b ⊢ (ℓ, B, a) →*_rel (ℓ′, B′, a′) implies M ⊢ (ℓ, a) →*_rel (ℓ′, a′).
3. If M ⊢ (ℓ, a) →*_rel (ℓ′, a′) then ∃B, B′: M^b ⊢ (ℓ, B, a) →*_rel (ℓ′, B′, a′).
4. If M^b ⊢ (ℓ, B, a) →* (ℓ′, B′, a′) then M^b ⊢ (ℓ, B, a) →*_rel (ℓ′, B′, a′) iff B + |a| = B′ + |a′|.
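A sketch of the transformation, reusing the PROGRAM encoding of the first sketch; reserving counter 0 for the budget B and the generated location names are illustrative assumptions:

def on_a_budget(program):
    """Add a budget counter B (index 0); the original counters shift to indices 1, 2, ..."""
    budgeted = {}
    for loc, (op, i, target) in program.items():
        if op == "inc":
            # c_i++  becomes  B>0? B--  followed by  c_i++   (the increment is paid for)
            budgeted[loc] = ("dec", 0, loc + "_pay")
            budgeted[loc + "_pay"] = ("inc", i + 1, target)
        elif op == "dec":
            # c_i>0? c_i--  becomes  c_i>0? c_i--  followed by  B++   (the budget is refunded)
            budgeted[loc] = ("dec", i + 1, loc + "_refund")
            budgeted[loc + "_refund"] = ("inc", 0, target)
        else:
            # zero-tests are kept, only the counter index is shifted
            budgeted[loc] = (op, i + 1, target)
    return budgeted

Reliable runs of the result keep B + |a| constant (property 1), while lossy runs can only lose, which is exactly what property 4 exploits.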
M(m): WRAPPING IT UP

Prop. M(m) has a lossy run
  (ℓ_H, a_m:1, 0, ..., n:m, 0, ...) →* (ℓ_{H⁻¹}, 1, 0, ..., m, 0, ...)
iff M(m) has a reliable run
  (ℓ_H, a_m:1, 0, ..., n:m, 0, ...) →*_rel (ℓ_{H⁻¹}, a_m:1, 0, ..., n:m, 0, ...)
iff M has a reliable run from ℓ_ini to ℓ_fin that is bounded by H^{ω^m}(m), i.e., by Ackermann(m).

Cor. LCM verification is Ackermann-complete.
CONCLUSION

The length of bad sequences is key to bounding the complexity of WQO-based algorithms. Here verification people have a lot to learn from proof theory and combinatorics.

Proving matching lower bounds is not necessarily tricky (and is easy for LCMs or Reset Petri nets), but we still lack:
— a collection of hard problems: Post Embedding Problem, ...
— a tutorial/textbook on subrecursive hierarchies (like the fast-growing and Hardy hierarchies)
— a toolkit of coding tricks and lemmas for ordinals

The approach seems workable: recently we could characterize the complexity of Timed-Arc Petri Nets and Data Petri Nets at F_{ω^{ω^ω}}.