Adaptive Sparse Recovery with Limited Adaptivity
Akshay Kamath, Eric Price (UT Austin)
2018-11-27

Outline
1. Introduction
2. Analysis for k = 1
3. General k: lower bound
4. General k: upper bound


Results in adaptive sparse recovery, C = O(1)

Unlimited adaptivity: with unlimited rounds,
    k + log log n ≲ m* ≲ k · log log n.

Limited adaptivity: with R = O(1) rounds,
    k + log^{1/R} n ≲ m* ≲ k · log^{1/(R−3)} n.

New results: with R = O(1) rounds,
    k · log^{1/R} n ≲ m* ≲ k · log^{1/R} n · log* k.

Caveat: the lower bound only applies for k < 2^{log^{1/R} n}, equivalently when m* > k log k.
For k < n^{o(1)}, m* = ω(k).

Section 2: Analysis for k = 1

Well-understood setting: k = 1

Theorem (Indyk-Price-Woodruff ’11, Price-Woodruff ’13). R-round 1-sparse recovery requires Θ(R log^{1/R} n) measurements.

Outline of this section:
- R = 1 lower bound: Ω(log n).
- Adaptive upper bound: O(log log n).
- Adaptive lower bound: Ω(log log n).

Hard case: x is a random e_z plus Gaussian noise w with ‖w‖₂ ≈ 1. Robust recovery must locate z.
Observations: ⟨v, x⟩ = v_z + ⟨v, w⟩ = v_z + (‖v‖₂/√n) · g, for g ∼ N(0, 1).
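
The hard instance and the observation model are easy to simulate. Below is a minimal NumPy sketch (the function names and parameter choices are illustrative, not from the talk): it draws x = e_z + w with ‖w‖₂ ≈ 1 and returns a single linear observation ⟨v, x⟩.

    import numpy as np

    def hard_instance(n, rng):
        """x = e_z + w with z uniform and Gaussian noise of total energy ~1."""
        z = rng.integers(n)                       # hidden support index
        x = np.zeros(n)
        x[z] = 1.0
        w = rng.normal(0.0, 1.0 / np.sqrt(n), n)  # E[||w||_2^2] = 1
        return x + w, z

    def observe(v, x):
        """One linear measurement <v, x> = v_z + <v, w>."""
        return float(v @ x)

    rng = np.random.default_rng(0)
    n = 1 << 16
    x, z = hard_instance(n, rng)
    v = rng.choice([-1.0, 1.0], size=n)           # a generic +-1 measurement vector
    print(observe(v, x))                          # ~ v_z + N(0, ||v||_2^2 / n) = v_z + N(0, 1)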

1-sparse recovery: non-adaptive lower bound

Observe ⟨v, x⟩ = v_z + (‖v‖₂/√n) · g, where g ∼ N(0, Θ(1)).

Shannon-Hartley theorem: the AWGN channel capacity gives
    I(z; ⟨v, x⟩) ≲ ½ log(1 + SNR),
where SNR denotes the "signal-to-noise ratio,"
    SNR = E[signal²] / E[noise²] ≲ E[v_z²] / (‖v‖₂² / n) = 1
(for z uniform and independent of v).

Finding z needs Ω(log n) non-adaptive measurements.
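
As a sanity check on the capacity argument, the SNR and the per-measurement information bound can be computed numerically. A small sketch under the same model (helper names are mine): with z uniform and v fixed in advance, E[v_z²] = ‖v‖₂²/n, so SNR ≈ 1 and each measurement carries at most ½ log₂(1 + SNR) ≈ ½ bit, forcing Ω(log n) non-adaptive measurements.

    import numpy as np

    def snr_and_capacity(v, p):
        """SNR <= E_p[v_z^2] / (||v||_2^2 / n) and the Shannon-Hartley bound (bits)."""
        n = len(v)
        snr = (p @ (v ** 2)) / (np.dot(v, v) / n)
        return snr, 0.5 * np.log2(1.0 + snr)

    n = 1 << 16
    p_uniform = np.full(n, 1.0 / n)               # prior with no information about z
    v = np.random.default_rng(1).normal(size=n)   # any non-adaptive measurement vector
    snr, bits = snr_and_capacity(v, p_uniform)
    print(snr, bits)                              # SNR ~ 1, ~0.5 bits per measurement
    print(np.log2(n) / bits)                      # => Omega(log n) measurements to pin down z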

1-sparse recovery: changes in the adaptive setting

Information capacity:
    I(z; ⟨v, x⟩) ≲ ½ log(1 + SNR),   SNR ≲ E[v_z²] / (‖v‖₂² / n).

If z is independent of v, this is 1. As we learn about z, we can increase the SNR.

1-sparse recovery: adaptive upper bound

x = e_z + w,   ⟨v, x⟩ = v_z + ⟨v, w⟩.

With b bits of information about z, the candidate set has size n/2^b; concentrating v on it boosts the SNR to ≈ 2^b, so I(z; ⟨v, x⟩) ≲ log SNR ≈ b. The number of known bits therefore roughly doubles with each measurement: 0 → 1 → 2 → 4 → 8 → 16 → ... → log n.
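
The doubling bookkeeping behind the O(log log n) bound can be written down directly. This is only a sketch of the accounting, not the actual Indyk-Price-Woodruff recovery algorithm: with b bits known the candidate set has size n/2^b, concentrating the measurement on it gives SNR ≈ 2^b, and the next measurement buys roughly b more bits.

    import math

    def adaptive_measurement_count(n):
        """Count measurements if each one roughly doubles the known bits (0 -> 1 -> 2 -> 4 -> ...)."""
        log_n = math.log2(n)
        bits, measurements = 0.0, 0
        while bits < log_n:
            snr = 2.0 ** bits                              # candidate set of size n / 2^bits
            gained = max(1.0, 0.5 * math.log2(1.0 + snr))  # Shannon-Hartley per measurement
            bits += gained
            measurements += 1
        return measurements

    for n in [2 ** 10, 2 ** 20, 2 ** 40]:
        print(n, adaptive_measurement_count(n))            # grows roughly like log log n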

1-sparse recovery: adaptive lower bound

Review of the upper bound:
- Given b bits of information about z.
- Identifies z to a set of size n/2^b.
- Increases the SNR, E[v_z²], by 2^b.
- Recovers b bits of information in one measurement.
- 1 → 2 → ... → log n in log log n measurements.
- R = 2: 1 → √(log n) → log n in √(log n) measurements/round.

Lower bound outline:
- At each stage, we have a posterior distribution p on z.
- b = log n − H(p) bits known.

Lemma (Key lemma for k = 1). For any measurement vector v, I(z; ⟨v, x⟩) ≲ b + 1.

1-sparse recovery: adaptive lower bound

Lower bound outline:
- At each stage, we have a posterior distribution p on z.
- b = log n − H(p) bits known.
- Show any measurement gives O(b + 1) bits of information.

Shannon-Hartley:
    I(z; ⟨v, x⟩) ≲ ½ log(1 + SNR),   SNR ≲ (Σ_z v_z² p_z) / (‖v‖₂² / n) ≲ n ‖p‖_∞,
so I(z; ⟨v, x⟩) ≲ 1 + log(n ‖p‖_∞).

The bound is good (SNR ≈ 2^b) when the nonzero p_z are similar.
It can be terrible in general: b = 1 but SNR = n / log n.
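
The failure mode on the last line is easy to exhibit numerically. A small sketch (the particular distribution is constructed by me for illustration): put mass 1/log₂ n on one index and spread the rest uniformly, so b = Σ p_z log₂(n p_z) stays O(1) while n‖p‖_∞ = n/log₂ n.

    import numpy as np

    def bits_known(p):
        """b = log n - H(p) = sum_z p_z * log2(n * p_z)."""
        n = len(p)
        nz = p > 0
        return float(np.sum(p[nz] * np.log2(n * p[nz])))

    n = 1 << 20
    q = 1.0 / np.log2(n)                 # one heavy index gets mass 1/log n
    p = np.full(n, (1.0 - q) / (n - 1))
    p[0] = q

    print(bits_known(p))                 # b ~ 1: we know almost nothing about z
    print(n * p.max(), n / np.log2(n))   # but the naive SNR bound n * ||p||_inf ~ n / log n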

1-sparse recovery: adaptive lower bound

Lower bound outline:
- At each stage, we have a posterior distribution p on z.
- b = log n − H(p) = Σ_z p_z log(n p_z) bits known.
- Show any measurement gives O(b + 1) bits of information.

Partition indices into "level sets" S_0, S_1, ... ⊆ [n] of p:
- S_J = { z | p_z ∈ [2^J / n, 2^{J+1} / n) }
- E[J] ≲ b.

Then I(z; ⟨v, x⟩) ≲ I(z; ⟨v, x⟩ | J) + H(J), and Shannon-Hartley gives I(z; ⟨v, x⟩ | J = j) ≲ j + 1.

Lemma (Key lemma for k = 1). I(z; ⟨v, x⟩) ≲ b + 1.
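
The level-set decomposition itself is mechanical. A sketch of the bookkeeping (helper names mine): assign each index z the level J(z) = ⌊log₂(n p_z)⌋, clamped at 0, and check that E[J] stays within O(1) of b; conditioned on J = j the probabilities inside S_j agree up to a factor 2, which is exactly when the earlier SNR bound is tight.

    import numpy as np

    def level_sets(p):
        """J(z) = floor(log2(n * p_z)) clamped at 0, i.e. S_j = {z : p_z in [2^j/n, 2^(j+1)/n)}."""
        n = len(p)
        J = np.floor(np.log2(n * np.maximum(p, 1e-300))).astype(int)
        return np.maximum(J, 0)

    def bits_known(p):
        """b = log n - H(p) = sum_z p_z log2(n p_z)."""
        n = len(p)
        nz = p > 0
        return float(np.sum(p[nz] * np.log2(n * p[nz])))

    rng = np.random.default_rng(2)
    n = 1 << 16
    p = rng.exponential(size=n)          # an arbitrary skewed posterior, purely for illustration
    p /= p.sum()

    J = level_sets(p)
    print(float(p @ J), bits_known(p))   # E[J] stays within O(1) of b = log n - H(p)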

1-sparse recovery: adaptive lower bound, finishing up

Lemma (Key lemma for k = 1). I(z; ⟨v, x⟩) ≲ b + 1.

Suppose two rounds with m measurements each.
- O(m) bits learned in the first round.
- O(m²) bits in the second round.
- Hence m ≳ √(log n).

In general: Ω(R log^{1/R} n) measurements.
- Ω(log log n) for unlimited R.
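
The round-by-round accounting generalizes the two-round case directly. A sketch of that calculation (function names are mine): if a round starts with b bits known, its m measurements give at most about m(b + 1) bits, so after R rounds at most ~m^R bits are known; requiring m^R ≥ log n forces m ≳ log^{1/R} n per round, i.e. Ω(R log^{1/R} n) in total.

    import math

    def bits_after_rounds(m, R):
        """Bits known after R rounds of m measurements, using I <= b + 1 per measurement."""
        b = 0.0
        for _ in range(R):
            b += m * (b + 1.0)           # each of the m measurements gives at most b + 1 bits
        return b

    def min_measurements_per_round(n, R):
        """Smallest m with bits_after_rounds(m, R) >= log2(n); scales like (log n)^(1/R)."""
        m = 1
        while bits_after_rounds(m, R) < math.log2(n):
            m += 1
        return m

    n = 2 ** 64
    for R in [1, 2, 3, 4]:
        m = min_measurements_per_round(n, R)
        print(R, m, R * m, math.log2(n) ** (1.0 / R))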

Section 3: General k: lower bound

Recall the k = 1 proof outline

Setting: x = e_z + w for z ∼ p.
- p is the posterior on z from previous measurements.
- Previous measurements had information content b := log n − H(p).

Lemma (Key lemma for k = 1). I(z; ⟨v, x⟩) ≲ b + 1.

Question: how to extend this to k > 1?

Extending to general k

Create k independent copies over domain N = nk.
Formally: x = Σ_{i=1}^{k} e_{n·i + Z_i} + w for Z ∈ [n]^k, Z ∼ p.
- p is the posterior from previous measurements.
- Previous measurements have information content b := k log n − H(p).

Lemma (Key lemma for general k). I(Z; ⟨v, x⟩) ≲ b + 1 ????
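
The k-block hard instance is just k stacked copies of the k = 1 instance. A minimal sketch of that construction (again with illustrative names, using 0-indexed blocks): block i occupies coordinates [n·i, n·(i+1)) and carries a single spike at offset Z_i, plus Gaussian noise.

    import numpy as np

    def k_block_instance(n, k, rng):
        """x = sum_i e_{n*i + Z_i} + w over domain N = n*k, with Z uniform on [n]^k."""
        N = n * k
        Z = rng.integers(n, size=k)                # one hidden offset per block
        x = np.zeros(N)
        x[np.arange(k) * n + Z] = 1.0              # spike in block i at position n*i + Z_i
        x += rng.normal(0.0, 1.0 / np.sqrt(n), N)  # noise of energy ~1 per block (scaling illustrative)
        return x, Z

    rng = np.random.default_rng(3)
    n, k = 1 << 10, 8
    x, Z = k_block_instance(n, k, rng)
    print(np.sort(np.arange(k) * n + Z))           # planted spike positions
    print(np.sort(np.argsort(-np.abs(x))[:k]))     # the k largest entries recover them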

What lemma do we want for general k?

I(Z; ⟨v, x⟩) ≲ b + 1
- True but too weak: would give Ω(√(k log n)), not k · √(log n).

I(Z; ⟨v, x⟩) ≲ b/k + 1
- Strong but false: if the algorithm does 1-sparse recovery on the first block, it really can learn Θ(b + 1) bits.
- But the learned bits are only about that first block.

I(Z_W; ⟨v, x⟩) ≲ b/k + 1 for some |W| > 0.99 k
- Strong enough, at least for constant R.
- True for product distributions p...
- but correlated p can make this false.

I(Z_W; ⟨v, x⟩) ≲ b/k + log k
- True!
