

  1. Quantum Chebyshev's Inequality and Applications. Yassine Hamoudi, Frédéric Magniez. IRIF, Université Paris Diderot, CNRS. QuantAlgo 2018. arXiv:1807.06456

  2. Buffon's needle: a needle dropped randomly on a floor with equally spaced parallel lines (spacing equal to the needle's length) will cross one of the lines with probability 2/π. [Buffon, G., Essai d'arithmétique morale, 1777]

  3-5. Monte Carlo algorithms: use repeated random sampling and statistical analysis to estimate parameters of interest.
Empirical mean: 1/ Repeat the experiment n times: n i.i.d. samples x₁, …, xₙ ~ X. 2/ Output: (x₁ + … + xₙ)/n.
Law of large numbers: (x₁ + … + xₙ)/n → E(X) as n → ∞.
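Buffon's needle (slide 2) is itself a Monte Carlo estimator in the sense of slides 3-5: repeat the random drop n times and output the empirical crossing frequency. A minimal sketch, assuming the needle length equals the line spacing (both normalized to 1):

```python
import math
import random

def buffon_crossing_estimate(n, seed=0):
    # A needle of length 1 on lines with spacing 1 crosses a line iff the
    # distance y from its center to the nearest line is at most
    # (1/2)*sin(theta), with y ~ U[0, 1/2] and angle theta ~ U[0, pi].
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n):
        y = rng.uniform(0.0, 0.5)
        theta = rng.uniform(0.0, math.pi)
        if y <= 0.5 * math.sin(theta):
            crossings += 1
    return crossings / n  # converges to 2/pi by the law of large numbers

estimate = buffon_crossing_estimate(200_000)
```

With 200,000 drops the standard deviation of the estimate is about 0.001, so the output lands close to 2/π ≈ 0.6366.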

  6-10. Empirical mean: μ̃ = (x₁ + … + xₙ)/n with x₁, …, xₙ ~ X. How fast does it converge to E(X)?
Chebyshev's inequality. Hypotheses: Var(X) = E(X²) − E(X)² ≠ 0 and finite, E(X) ≠ 0. Objective: |μ̃ − E(X)| ≤ ε·E(X) with high probability (multiplicative error 0 < ε < 1).
Number of samples needed: O(E(X²) / (ε²·E(X)²)), where E(X²)/E(X)² is the relative second moment. (In fact O(Var(X) / (ε²·E(X)²)) = O((1/ε²)·(E(X²)/E(X)² − 1)) suffices.)
In practice: given an upper bound Δ² ≥ E(X²)/E(X)², take n = Ω(Δ²/ε²) samples.
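The Δ²/ε² rule above can be turned into a concrete sample count. A small sketch; the distribution, the target failure probability, and the constant are arbitrary choices for the example, not taken from the talk:

```python
import math
import random

def chebyshev_sample_count(delta_sq, eps, fail_prob=0.1):
    # Chebyshev: P(|mu_hat - E(X)| >= eps*E(X)) <= Var(X) / (n * eps^2 * E(X)^2),
    # and Var(X)/E(X)^2 <= Delta^2, so n = Delta^2 / (eps^2 * fail_prob)
    # pushes the failure probability below fail_prob.
    return math.ceil(delta_sq / (eps * eps * fail_prob))

# Example: X uniform over {1, ..., 10}, so E(X) = 5.5 and E(X^2) = 38.5.
delta_sq = 38.5 / 5.5**2   # relative second moment, about 1.27
eps = 0.05
n = chebyshev_sample_count(delta_sq, eps)

rng = random.Random(0)
mu_hat = sum(rng.randint(1, 10) for _ in range(n)) / n
```

Here n comes out to a few thousand samples, and the empirical mean lands within 5% of E(X) = 5.5 with probability at least 90%.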

  11. Other applications. Counting with Markov chain Monte Carlo methods: counting vs. sampling [Jerrum, Sinclair'96] [Štefankovič et al.'09], volume of convex bodies [Dyer, Frieze'91], permanent [Jerrum, Sinclair, Vigoda'04]. Data stream model: frequency moments, collision probability [Alon, Matias, Szegedy'99] [Monemizadeh, Woodruff'] [Andoni et al.'11] [Crouch et al.'16]. Testing properties of distributions: closeness [Goldreich, Ron'11] [Batu et al.'13] [Chan et al.'14], conditional independence [Canonne et al.'18]. Estimating graph parameters: number of connected components, minimum spanning tree weight [Chazelle, Rubinfeld, Trevisan'05], average distance [Goldreich, Ron'08], number of triangles [Eden et al.'17], etc.

  12-13. Random variable X over sample space Ω ⊂ R+.
Classical sample: one value x ∈ Ω, sampled with probability pₓ.
Quantum sample: one (controlled-) execution of a quantum sampler S_X or S_X⁻¹, where S_X |0⟩ = Σ_{x∈Ω} √pₓ |ψₓ⟩ |x⟩ with |ψₓ⟩ an arbitrary garbage state (√pₓ can be replaced with any αₓ such that |αₓ|² = pₓ).
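The condition |αₓ|² = pₓ is what makes one quantum sample reproduce one classical sample upon measurement. A minimal classical sketch of the amplitude encoding, ignoring the garbage register |ψₓ⟩ (which plays no role in the measurement statistics of the |x⟩ register):

```python
import math

def quantum_sample_amplitudes(probs):
    # S_X|0> = sum_x sqrt(p_x) |psi_x>|x>: the |x> register carries
    # amplitude alpha_x = sqrt(p_x).
    assert abs(sum(probs.values()) - 1.0) < 1e-9, "not a distribution"
    return {x: math.sqrt(p) for x, p in probs.items()}

def born_rule(amplitudes):
    # Measuring the |x> register returns x with probability |alpha_x|^2,
    # i.e. exactly the classical sampling distribution.
    return {x: abs(a) ** 2 for x, a in amplitudes.items()}

probs = {0: 0.25, 1: 0.75}
amps = quantum_sample_amplitudes(probs)
recovered = born_rule(amps)
```

The recovered distribution matches the classical one, and the squared amplitudes sum to 1 as a valid quantum state requires.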

  14-21. Can we use quadratically fewer samples in the quantum setting?
Yes! for additive error approximation |μ̃ − E(X)| ≤ ε [Montanaro'15]: given σ² ≥ Var(X), σ/ε quantum samples vs. σ²/ε² classical samples.
??? for multiplicative error approximation |μ̃ − E(X)| ≤ ε·E(X):

Algorithm | Number of samples | Conditions
Classical samples (Chebyshev's inequality) | Δ²/ε² | Δ² ≥ E(X²)/E(X)²
[Brassard et al.'11] [Wocjan et al.'09] [Montanaro'15] | √(B/E(X))/ε | sample space Ω ⊂ [0,B]
[Montanaro'15] [Li, Wu'17] | Δ²/ε or (Δ/ε)·(H/L) | Δ² ≥ E(X²)/E(X)², L ≤ E(X) ≤ H
Our result | (Δ/ε)·log³(H/E(X)) | Δ² ≥ E(X²)/E(X)², E(X) ≤ H
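To get a feel for the gap between the classical and quantum rows, a back-of-the-envelope comparison; constants and low-order terms are ignored, and the log³ factor follows the "Our result" row:

```python
import math

def classical_samples(delta_sq, eps):
    # Chebyshev row: Delta^2 / eps^2.
    return delta_sq / eps**2

def quantum_samples(delta_sq, eps, h_over_mean):
    # "Our result" row: (Delta/eps) * log^3(H / E(X)).
    return (math.sqrt(delta_sq) / eps) * math.log(h_over_mean) ** 3

# Example: Delta^2 = 100, eps = 1%, H/E(X) = e (so the log factor is 1).
c = classical_samples(100, 0.01)        # about 10^6 classical samples
q = quantum_samples(100, 0.01, math.e)  # about 10^3 quantum samples
```

The Δ² → Δ improvement is exactly the quadratic saving the title question asks about, paid for by a polylogarithmic factor in H/E(X).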

  22. Our Approach

  23. Subroutine: the Amplitude Estimation algorithm.
Sampler: S_X |0⟩ = Σ_{x∈Ω} √pₓ |ψₓ⟩ |x⟩ on sample space Ω ⊂ [0,B].
Result: O(√(B/E(X))/ε) quantum samples to obtain |μ̃ − E(X)| ≤ ε·E(X).
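This sample count follows from the standard Amplitude Estimation guarantee of Brassard, Høyer, Mosca and Tapp: with M (controlled) uses of the sampler, the estimate ã of an amplitude a satisfies |ã − a| ≤ 2π√(a(1−a))/M + π²/M² with high probability. A sketch of the query-count calculation; the constant 10 is an arbitrary safe choice, not taken from the paper:

```python
import math

def ae_error_bound(a, M):
    # Amplitude estimation with M queries: |a_tilde - a| is at most
    # 2*pi*sqrt(a*(1-a))/M + pi^2/M^2 (with high probability).
    return 2 * math.pi * math.sqrt(a * (1 - a)) / M + math.pi**2 / M**2

def queries_for_relative_error(a, eps):
    # M on the order of 1/(eps*sqrt(a)) pushes the bound below eps*a.
    # With a = E(X)/B this gives O(sqrt(B/E(X))/eps) quantum samples.
    return math.ceil(10 / (eps * math.sqrt(a)))

a, eps = 0.01, 0.1   # e.g. E(X)/B = 0.01 and 10% relative error
M = queries_for_relative_error(a, eps)
```

Plugging M back into the error bound confirms the estimate is within ε·a of the true amplitude, i.e. a multiplicative ε-approximation of E(X) after rescaling by B.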
