  1. Span programs and other algorithmic tools Ashwin Nayak University of Waterloo

  2. Query algorithms • Wish to compute a Boolean function f : {0,1}^n ⟶ {0,1} • Input : x ∈ {0,1}^n, given as a black box (oracle) acting on basis states as O_x |i, b⟩ = |i, b ⊕ x_i⟩ • Output : f(x) [circuit: starting from |0̄⟩, alternate unitaries U_0, U_1, …, U_t with oracle calls O_x, then measure to output 0/1] • With how few queries t can we compute f ?
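The oracle's action on basis states can be checked concretely. Below is a minimal sketch (not from the slides) that builds O_x as a permutation matrix, assuming my own basis ordering |i, b⟩ ↦ 2i + b; the name `oracle` and the small test input are illustrative.

```python
import numpy as np

def oracle(x):
    """Query oracle O_x acting on basis states as O_x |i, b> = |i, b XOR x_i>,
    written as a permutation matrix with |i, b> placed at index 2*i + b."""
    n = len(x)
    O = np.zeros((2 * n, 2 * n))
    for i in range(n):
        for b in (0, 1):
            O[2 * i + (b ^ x[i]), 2 * i + b] = 1.0
    return O

x = [0, 1, 1, 0]
O = oracle(x)
assert np.allclose(O @ O, np.eye(2 * len(x)))   # querying twice undoes the query
# Query index i = 1 with b = 0: the state becomes |1, b XOR x_1> = |1, 1>.
state = np.zeros(8)
state[2 * 1 + 0] = 1.0
print(int(np.argmax(O @ state)))   # -> 3, the index of |i=1, b=1>
```

Since b ↦ b ⊕ x_i is an involution, O_x is its own inverse, which the `allclose` check confirms.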

  3. Query algorithms • Most known quantum algorithms fit this framework • e.g., Unordered search, Integer factoring • Models the most expensive operation • e.g., comparisons in Sorting • Often captures time complexity • Can show the limitations of heuristics • e.g., quantum parallelism in solving NP-hard problems

  4. Quantum algorithm design • We have seen several recipes, and their applications • Amplitude amplification : Unordered search • Fourier sampling : Factoring, Discrete logarithms • Quantum walks : Element distinctness, Triangle finding • Each speeds up a classical technique: brute force search, finding symmetries, search by random walk

  5. Is there a universal recipe? [Cartoon: Little boy to his math teacher, “Rather than learning how to solve that, shouldn’t we be learning how to operate software that can solve that problem?”]

  6. Development of span programs • Adversary method (basic) [Ambainis, 2000] • Adversary method [Høyer, Lee, Špalek, 2007] • Algorithm for NAND tree [Farhi, Goldstone, Gutmann, 2007] • Span programs [Reichardt, Špalek, 2008] • Near optimality of span programs [Reichardt, 2009] • Optimality of span programs [Lee, Mittal, Reichardt, Špalek, Szegedy, 2011]

  7. Span programs • Boolean function f : {0,1}^n ⟶ {0,1} • Span program = sequence of vectors v_x ∈ R^n ⊗ R^d, one for each input x, for some d • Each v_x is of the form v_x = ∑_j e_j ⊗ v_{x,j} • Constraint : ∑_{j ∈ N} ⟨v_{x,j}|v_{y,j}⟩ = 1 for every pair x, y s.t. f(x) ≠ f(y), where N = {j : x_j ≠ y_j} • Complexity = max_x ||v_x||²

  8. Example : OR_n • f(x_1, x_2, …, x_n) = x_1 ∨ x_2 ∨ … ∨ x_n • v_x ∈ R^n ⊗ C, for every input x • v_{0^n} = α(1, 1, 1, …, 1) • v_y = (1/α)(0, …, 0, 1, 0, …, 0), with the 1 in the i-th coordinate, for y ≠ 0^n with y_i = 1 (pick one such i) • ∑_{j ∈ N} ⟨v_{x,j}|v_{y,j}⟩ = 1 when x = 0^n and y ≠ 0^n • Complexity : ||v_{0^n}||² = α²n, ||v_y||² = 1/α² (y ≠ 0^n) • pick α² = 1/√n, so the maximum is √n
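The OR_n construction can be verified numerically. The sketch below is my own illustration (the helper names `or_span_program` and `check_and_complexity` are hypothetical): it builds the vectors for a small n, checks the span-program constraint on every pair x, y with f(x) ≠ f(y), and reports the complexity max_x ||v_x||².

```python
import itertools
import numpy as np

def or_span_program(n, alpha):
    """Span-program vectors for OR_n (here d = 1, so each block v_{x,j}
    is a scalar and v_x is simply a vector in R^n)."""
    vectors = {}
    for x in itertools.product((0, 1), repeat=n):
        v = np.zeros(n)
        if any(x):
            v[x.index(1)] = 1.0 / alpha   # v_y = (1/alpha) e_i, one i with y_i = 1
        else:
            v[:] = alpha                  # v_{0^n} = alpha (1, 1, ..., 1)
        vectors[x] = v
    return vectors

def check_and_complexity(n, vectors, f):
    """Verify sum_{j in N} v_{x,j} v_{y,j} = 1 whenever f(x) != f(y),
    with N = {j : x_j != y_j}, and return max_x ||v_x||^2."""
    for x, vx in vectors.items():
        for y, vy in vectors.items():
            if f(x) != f(y):
                inner = sum(vx[j] * vy[j] for j in range(n) if x[j] != y[j])
                assert abs(inner - 1.0) < 1e-9
    return max(float(np.dot(v, v)) for v in vectors.values())

n = 4
vecs = or_span_program(n, alpha=n ** -0.25)       # alpha^2 = 1/sqrt(n)
print(check_and_complexity(n, vecs, lambda x: int(any(x))))   # ~ 2.0 = sqrt(4)
```

With α² = 1/√n both ||v_{0^n}||² = α²n and ||v_y||² = 1/α² equal √n, matching the slide's balancing argument.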

  9. Properties of span programs [recap: vectors v_x ∈ R^n ⊗ R^d, v_x = ∑_j e_j ⊗ v_{x,j}; ∑_{j ∈ N} ⟨v_{x,j}|v_{y,j}⟩ = 1 if f(x) ≠ f(y), N = {j : x_j ≠ y_j}; complexity = max_x ||v_x||²] 1. The bounded-error query complexity of f asymptotically equals its span program complexity • Complexity of OR_n = √n (Grover search, optimal) 2. A span program for f is also a span program for ¬f • AND_n(x) = ¬OR_n(¬x) • define w_x = v_{¬x}, complexity = √n 3. If the span program complexities of f, g are C_f, C_g, respectively ⇒ the span program complexity of f(g(x_1), g(x_2), …, g(x_n)) ≤ C_f × C_g

  10. Example : AND_n-OR_n [formula tree: one ∧ gate over n ∨ gates reading the variables x_1, x_2, x_3, …, x_n] • Naive algorithm: nested Grover search • whenever we need the OR of n bits, recursively invoke the search algorithm • Need to control the accumulation of error • error ≤ 1/√n in each recursive call suffices • cost of error reduction = log n factor • query complexity ≤ √n × √n × log n = n log n • Composition property of span programs • query complexity ≤ √n × √n = n; reproduces [Høyer, Mosca, de Wolf, 2003], optimal

  11. Properties of span programs (continued) [diagram: one ∨ gate over f_1, f_2, f_3, …, f_n, reading x_1, x_2, x_3, …, x_n] 4. If the span program complexity of f_i is C_i ⇒ the complexity of f_1(x_1) ∨ f_2(x_2) ∨ … ∨ f_n(x_n) ≤ (C_1² + … + C_n²)^{1/2} • Naive composition of algorithms gives √n × max_i C_i × log n • Naive composition of span programs gives √n × max_i C_i • Improvement due to weighted composition of span programs • Subsumes variable-time search [Ambainis, 2008]
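A toy calculation illustrates why the weighted bound (C_1² + … + C_n²)^{1/2} can beat both naive compositions when one subfunction is much costlier than the rest. The complexities below are made-up numbers for illustration only.

```python
import math

# Hypothetical sub-program complexities: one expensive subformula, many cheap ones.
C = [10.0] + [1.0] * 99                          # n = 100 subfunctions

weighted = math.sqrt(sum(c * c for c in C))      # weighted span-program composition
naive_sp = math.sqrt(len(C)) * max(C)            # naive span-program composition
naive_alg = naive_sp * math.log2(len(C))         # naive algorithm composition (log n error reduction)

print(round(weighted, 1), naive_sp, round(naive_alg, 1))   # weighted bound is ~7x smaller
```

Here the weighted bound is √199 ≈ 14.1 versus 100 for naive span-program composition, because the sum of squares is dominated by the single expensive term rather than by n times the maximum.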

  12. A consequence • Evaluating read-once formulae with n variables over {∨, ∧, ¬} [diagram: an unbalanced formula tree of ∨ and ¬ gates over subformulae f_1, f_2 and variables such as x_1] • Naive composition is only effective for balanced formulae • Complexity of f_1(x) ∨ f_2(y) is (C_1² + C_2²)^{1/2} • Use the De Morgan rule to convert ANDs to ORs • Apply composition recursively to subtrees of OR gates • Net complexity is √n (optimal) • Subsumes previous NAND-tree algorithms [Farhi, Goldstone, Gutmann, 2007; Ambainis, Childs, Reichardt, Špalek, Zhang, 2007]

  13. Is this the end of the road? [Cartoon: Junior computer programmer to supervisor, “If Facebook is already replacing e-mail, then we should get started on a replacement for Facebook.”]

  14. 1-certificate • Function f : D^n ⟶ {0,1}, D = {0,1} or D = {0, …, n−1} • Suppose f(x) = 1. A 1-certificate for x is a subset S of indices such that y_S = x_S ⇒ f(y) = 1 • (S, x_S) is a witness for f(x) = 1 • if f = OR_n, (i, 1) is a 1-certificate • if f = Triangle, any subgraph containing a triangle is a 1-certificate • if f = Element Distinctness, any subset of indices with a repeated element is a 1-certificate
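The definition can be checked by brute force for small functions. This sketch is my own (the name `is_one_certificate` is hypothetical): it fixes x on S and enumerates every completion y, confirming f(y) = 1 on each.

```python
import itertools

def is_one_certificate(f, n, x, S, domain=(0, 1)):
    """Check that (S, x_S) is a 1-certificate: every y agreeing with x
    on S must satisfy f(y) = 1 (brute force over all completions)."""
    assert f(x) == 1
    free = [i for i in range(n) if i not in S]
    for vals in itertools.product(domain, repeat=len(free)):
        y = list(x)
        for i, v in zip(free, vals):
            y[i] = v
        if f(tuple(y)) != 1:
            return False
    return True

# For OR_n, the pair (i, 1) -- a single index where x_i = 1 -- certifies f(x) = 1:
OR = lambda x: int(any(x))
print(is_one_certificate(OR, 4, (0, 1, 0, 0), {1}))   # -> True
print(is_one_certificate(OR, 4, (0, 1, 0, 0), {0}))   # -> False (x_0 = 0 certifies nothing)
```

The `domain` parameter covers the slide's larger alphabet D = {0, …, n−1}, as used by Element Distinctness.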

  15. Learning graphs [Belovs] • A schema for constructing span programs • A (non-adaptive) learning graph for f is a directed graph on subsets of indices • All the arcs have the form (S, S ∪ {i}) for some i ∉ S (associated with the query to index i) • The graph “computes” f if for every x with f(x) = 1, there is a path from ∅ to a 1-certificate for x [diagram: a learning graph for OR_n — the root ∅ with one arc to each singleton {1}, {2}, {3}, …, {n}] • The span program has the same complexity as the learning graph

  16. Complexity of a learning graph • Each arc e is assigned a weight w_e • 0-complexity C_0 = ∑_e w_e • For each 1-input x, we have a flow p_e(x) on the arcs • ∅ is the only source, with out-flow 1 • only 1-certificates for x may be sinks • incoming flow equals outgoing flow at all other nodes • 1-complexity C_1 = max_{1-inputs x} ∑_e p_e(x)²/w_e • Learning graph complexity = (C_0 C_1)^{1/2} • For OR_n (all weights 1): C_0 = n, C_1 = 1, so complexity = √n; for y ≠ 0^n, the flow is p_e = 1 on the single arc e = (∅, {i}) with y_i = 1
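The two complexities are straightforward to compute from a weight and flow assignment. The sketch below uses my own conventions (arcs keyed by `('empty', i)` for the OR_n graph) and reproduces the √n figure:

```python
import math

def learning_graph_complexity(weights, flows):
    """weights: dict arc -> w_e.  flows: one dict arc -> p_e(x) per 1-input x.
    Returns sqrt(C0 * C1) with C0 = sum_e w_e and
    C1 = max_x sum_e p_e(x)^2 / w_e, as defined on the slide."""
    C0 = sum(weights.values())
    C1 = max(sum(p * p / weights[e] for e, p in flow.items()) for flow in flows)
    return math.sqrt(C0 * C1)

# Learning graph for OR_n: one arc (∅, {i}) per index, all weights 1.
n = 16
weights = {('empty', i): 1.0 for i in range(n)}
# For each 1-input, route unit flow down a single arc (∅, {i}) with y_i = 1;
# the inputs e_1, ..., e_n give one such flow each.
flows = [{('empty', i): 1.0} for i in range(n)]
print(learning_graph_complexity(weights, flows))   # -> 4.0 = sqrt(16)
```

Each flow satisfies the conservation conditions trivially: ∅ has out-flow 1 and the singleton {i} is a 1-certificate, hence a valid sink.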

  17. Example: Element Distinctness • input x ∈ {0, 1, …, n−1}^n • f(x) = 0 iff all x_i are distinct • 1-certificate for a 1-input x : a pair (i, j) with x_i = x_j [learning-graph diagram: ∅ → … → all (r−2)-subsets → all (r−1)-subsets → all r-subsets, with arc weights w_e = 1, w_e = 1, w_e = r−2 across the stages and out-degrees n−r+2, n−r+1 at the last two stages]

  18. Complexity of Element Distinctness [same learning-graph diagram as the previous slide] • For a 1-input x : fix a pair (i, j) with x_i = x_j, i < j • Send equal flow to all (r−2)-subsets disjoint from {i, j} • From an (r−2)-subset S, send all flow to S ∪ {i}, then to S ∪ {i, j} • Complexity = r + √n + n/√r = n^{2/3} for the choice r = n^{2/3}; reproduces [Ambainis, 2004]
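The claimed bound can be sanity-checked numerically: at r = n^{2/3} the stage costs sum to 2n^{2/3} + √n = Θ(n^{2/3}). A short sketch (the function name `ed_cost` is my own):

```python
def ed_cost(n, r):
    """Stage costs as on the slide: r + sqrt(n) + n / sqrt(r)."""
    return r + n ** 0.5 + n / r ** 0.5

for n in (10 ** 3, 10 ** 6, 10 ** 9):
    r = n ** (2 / 3)
    # The ratio to n^(2/3) tends to 2, so the total is Theta(n^(2/3)).
    print(n, ed_cost(n, r) / r)
```

The ratio printed is 2 + n^{−1/6}, which stays bounded as n grows, confirming the n^{2/3} scaling.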

  19. Remarks • Span programs and learning graphs provide a method for designing quantum query algorithms through “static” constructs • Have led to new, more efficient algorithms • Triangle finding and k-Distinctness • Some of these algorithms have since been reproduced with quantum walks • Improvements to some algorithms entail designing more sophisticated (adaptive) learning graphs • Optimum constructs are given by SDPs and can be computed explicitly for a small number of variables • The SDP for the optimal span program is dual to the adversary lower bound, and therefore yields an optimal algorithm
