
One-bit compressed sensing with Gaussian circulant matrices, Sjoerd Dirksen (PowerPoint presentation)



  1. One-bit compressed sensing with Gaussian circulant matrices
Sjoerd Dirksen (RWTH Aachen University)
based on joint work with Hans Christian Jung and Holger Rauhut (RWTH Aachen)
Supported by DFG project QuaCoSS
Compressed Sensing and its Applications 2017, December 7, 2017
Sjoerd Dirksen 1 / 25

  2. Quantized compressed sensing

  3. Quantized compressed sensing
Goal: recover an s-sparse vector x ∈ ℝ^n from quantized underdetermined linear measurements
  y = Q(Ax),  A ∈ ℝ^{m×n},
where Q : ℝ^m → 𝒜^m and 𝒜 is the quantization alphabet. Q quantizes each measurement to a finite bit string. Q can be
◮ memoryless, i.e., each measurement ⟨a_i, x⟩ is quantized independently (e.g. uniform scalar quantization with 𝒜 = δℤ);
◮ adaptive, i.e., ⟨a_i, x⟩ is quantized based on previous (quantized) measurements (e.g. Σ∆ quantization).
Ideal design goals: minimal number of measurements, number of bits, energy consumption, computational cost, hardware cost.
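As a concrete illustration (not part of the slides), a memoryless uniform scalar quantizer with alphabet 𝒜 = δℤ can be sketched in a few lines of NumPy; the dimensions and the step `delta` are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s, delta = 32, 64, 3, 0.5

# An s-sparse signal x in R^n.
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, n))

def quantize_uniform(v, delta):
    # Memoryless: each entry is rounded to the nearest point of delta * Z,
    # independently of all other measurements.
    return delta * np.round(v / delta)

y = quantize_uniform(A @ x, delta)

# Each quantized measurement lies within delta/2 of the true one.
assert np.max(np.abs(y - A @ x)) <= delta / 2 + 1e-12
```

An adaptive scheme such as Σ∆ quantization would instead feed the accumulated quantization error of earlier measurements into the rounding of the next one.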

  4. One-bit compressed sensing
This talk: the one-bit compressed sensing model
  y = sign(Ax + τ),   (1)
where
◮ sign is the sign function applied element-wise,
◮ τ ∈ ℝ^m is a vector of thresholds (→ dithering),
◮ the scheme is memoryless if the τ_i are fixed or independent random, and adaptive if the τ_i are chosen adaptively.
First considered for τ = 0 by Baraniuk and Boufounos '09. In this case, one can only recover x/‖x‖_2.
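A minimal NumPy sketch of the measurement model (1), with illustrative dimensions; it also checks the scale invariance that makes only the direction x/‖x‖_2 recoverable when τ = 0:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 32, 128
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
tau = rng.standard_normal(m)       # random thresholds (dithering)

y = np.sign(A @ x + tau)           # the one-bit model (1)

# With tau = 0 the measurements are invariant under positive scaling of x,
# so only the direction x / ||x||_2 can be identified:
assert np.array_equal(np.sign(A @ x), np.sign(A @ (5 * x)))
```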

  5. Quantization consistency (τ = 0)
Baraniuk and Boufounos proposed to reconstruct via
  min ‖z‖_0  s.t.  sign(Ax) = sign(Az),  ‖z‖_2 = 1.
The constraint sign(Ax) = sign(Az) enforces quantization consistency.

Theorem (Jacques-Laska-Boufounos-Baraniuk '13). Let A be an m × n Gaussian matrix. If m ≳ δ⁻¹(s log(n/δ) + log(η⁻¹)), then with probability at least 1 − η, for all x, x′ with ‖x‖_2 = ‖x′‖_2 = 1 and ‖x‖_0, ‖x′‖_0 ≤ s,
  sgn(Ax) = sgn(Ax′) ⇒ ‖x − x′‖_2 ≤ δ.

  6. Quantization consistency (τ = 0)
Baraniuk and Boufounos proposed to reconstruct via
  min ‖z‖_0  s.t.  sign(Ax) = sign(Az),  ‖z‖_2 = 1.
The constraint sign(Ax) = sign(Az) enforces quantization consistency.

Theorem (Jacques-Laska-Boufounos-Baraniuk '13). Let A be an m × n Gaussian matrix. If m ≳ δ⁻¹ s log(n/δ), then w.h.p., for all x, x′ with ‖x‖_2 = ‖x′‖_2 = 1 and ‖x‖_0, ‖x′‖_0 ≤ s,
  sgn(Ax) = sgn(Ax′) ⇒ ‖x − x′‖_2 ≤ δ.

  7. Pie cutting

  8. Pie cutting I
Figure: We draw a first Gaussian a_1. Its direction a_1/‖a_1‖_2 is distributed uniformly on the unit sphere.

  9. Pie cutting II
Figure: a_1 determines a hyperplane a_1^⊥ = {x ∈ ℝ^n : ⟨a_1, x⟩ = 0}, which divides ℝ^n into {x ∈ ℝ^n : ⟨a_1, x⟩ > 0} and {x ∈ ℝ^n : ⟨a_1, x⟩ < 0}.

  10. Pie cutting III
Figure: sgn(⟨a_1, x⟩) measures on which side of the hyperplane x lies.

  11. Pie cutting IV
Figure: If we draw a second Gaussian a_2, we obtain a new hyperplane a_2^⊥. The hyperplanes a_1^⊥ and a_2^⊥ cut the pie into 4 pieces.

  12. Pie cutting V
Figure: The bit string (sgn(⟨a_1, x⟩), sgn(⟨a_2, x⟩)) ∈ {−1, 1}² encodes on which of the 4 pieces of pie x is located. A vector z is quantization consistent if it lies on the same piece of pie as x.
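The cell-labeling picture can be checked numerically. This is an illustrative sketch with two random hyperplanes in the plane; the specific dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))    # rows a_1, a_2: two hyperplanes, four cells

def cell_label(v):
    """Bit string (sgn<a_1,v>, sgn<a_2,v>) labeling the piece of pie containing v."""
    return np.sign(A @ v)

x = rng.standard_normal(2)
# Positive scalings of x stay in the same piece of pie ...
assert np.array_equal(cell_label(x), cell_label(3.0 * x))
# ... while -x lands in the antipodal piece.
assert np.array_equal(cell_label(-x), -cell_label(x))
```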

  13. Quantization consistency (τ = 0)
Theorem (Jacques-Laska-Boufounos-Baraniuk '13). Let A be an m × n Gaussian matrix. If m ≳ δ⁻¹ s log(n/δ), then w.h.p., for all x, x′ with ‖x‖_2 = ‖x′‖_2 = 1 and ‖x‖_0, ‖x′‖_0 ≤ s,
  sgn(Ax) = sgn(Ax′) ⇒ ‖x − x′‖_2 ≤ δ.
Underlying reason: f_A(v) = sgn(Av) is w.h.p. a √ε-binary embedding on all s-sparse vectors:
  |(1/m) d_H(f_A(v), f_A(w)) − d_{S^{n−1}}(v, w)| ≤ √ε   for all s-sparse v, w,
where d_H is the Hamming distance and d_{S^{n−1}} the normalized geodesic distance on the sphere. In particular, if d_H(f_A(v), f_A(w)) = 0, then d_{S^{n−1}}(v, w) ≤ √ε. Results for general signal sets are known: Plan-Vershynin '14, Oymak-Recht '15.
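The binary-embedding property can be observed empirically. The sketch below checks a single fixed pair (v, w) rather than the uniform statement over all sparse vectors, using d_{S^{n−1}}(v, w) = arccos(⟨v, w⟩)/π for the normalized geodesic distance; all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 20000

v = rng.standard_normal(n); v /= np.linalg.norm(v)
w = rng.standard_normal(n); w /= np.linalg.norm(w)

A = rng.standard_normal((m, n))
fv, fw = np.sign(A @ v), np.sign(A @ w)

# Normalized Hamming distance between the sign patterns ...
d_hamming = np.mean(fv != fw)
# ... versus the normalized geodesic distance on the sphere.
d_geodesic = np.arccos(np.clip(v @ w, -1.0, 1.0)) / np.pi

# For a fixed pair, each hyperplane separates v and w with probability
# exactly d_geodesic, so the two quantities concentrate around each other.
assert abs(d_hamming - d_geodesic) < 0.02
```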

  14. One-bit CS: linear program
Baraniuk and Boufounos proposed to reconstruct via
  min ‖z‖_1  s.t.  sign(Ax) = sign(Az),  ‖z‖_2 = 1.
Plan and Vershynin '13 considered
  min_{z ∈ ℝ^n} ‖z‖_1  s.t.  sign(Az) = sign(Ax),  √(π/2) (1/m) ‖Az‖_1 = 1.   (LP)
If A is standard Gaussian, then E‖Az‖_1 = √(2/π) m ‖z‖_2.

Theorem (Plan-Vershynin '13). If m ≳ δ⁻¹ s log²(n/s), then w.h.p.: for every x with ‖x‖_1 ≤ √s and ‖x‖_2 = 1, the solution x# of (LP) satisfies ‖x − x#‖_2 ≤ δ^{1/5}.
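The program (LP) is indeed a linear program: sign consistency becomes the linear constraints y_i⟨a_i, z⟩ ≥ 0 (a standard relaxation allowing zeros), and under these constraints ‖Az‖_1 = Σ_i y_i⟨a_i, z⟩ is linear in z. A sketch using `scipy.optimize.linprog`, with illustrative problem sizes and a loose error check (not the theorem's constant):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, m, s = 20, 1000, 2

x = np.zeros(n)
x[:s] = 1.0
x /= np.linalg.norm(x)                 # ||x||_1 <= sqrt(s), ||x||_2 = 1

A = rng.standard_normal((m, n))
y = np.sign(A @ x)

# Variables [z, t] with t >= |z|; minimize sum(t) = ||z||_1.
c = np.hstack([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.vstack([
    np.hstack([-(y[:, None] * A), np.zeros((m, n))]),  # y_i <a_i, z> >= 0
    np.hstack([I, -I]),                                #  z - t <= 0
    np.hstack([-I, -I]),                               # -z - t <= 0
])
b_ub = np.zeros(m + 2 * n)
# Normalization sqrt(pi/2) (1/m) ||Az||_1 = 1, written linearly:
A_eq = np.hstack([y @ A, np.zeros(n)])[None, :]
b_eq = np.array([m * np.sqrt(2.0 / np.pi)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n] / np.linalg.norm(res.x[:n])
assert np.linalg.norm(x - x_hat) < 0.4
```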

  15. Gaussian circulant matrices

  16. Subsampled Gaussian circulant matrix
For g = (g_1, ..., g_n) a Gaussian vector, the Gaussian circulant matrix is

        ⎡ g_n      g_1      g_2      ···  g_{n−2}  g_{n−1} ⎤
        ⎢ g_{n−1}  g_n      g_1      ···  g_{n−3}  g_{n−2} ⎥
  C_g = ⎢ g_{n−2}  g_{n−1}  g_n      ···  g_{n−4}  g_{n−3} ⎥
        ⎢   ⋮        ⋮        ⋮       ⋱      ⋮        ⋮     ⎥
        ⎣ g_1      g_2      g_3      ···  g_{n−1}  g_n     ⎦

◮ Model: A consists of m (uniformly) randomly selected rows of C_g.
◮ Ax is a random sample of the discrete convolution g ∗ x.
◮ Applications (Romberg '09): SAR radar imaging, Fourier optical imaging, channel estimation.
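The convolution structure can be verified numerically via the FFT. The sketch uses SciPy's first-column circulant convention, C[i, j] = g[(i − j) mod n], under which C_g x equals the circular convolution g ∗ x (the slide's row layout may differ from this by an index reversal); sizes are illustrative:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(5)
n, m = 16, 5
g = rng.standard_normal(n)
x = rng.standard_normal(n)

C = circulant(g)                       # C[i, j] = g[(i - j) mod n]
# Circular convolution g * x computed in the Fourier domain.
conv = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)))
assert np.allclose(C @ x, conv)

# Subsampled model: A = R_I C_g keeps m uniformly chosen rows, so Ax is a
# random sample of g * x.
rows = rng.choice(n, size=m, replace=False)
A = C[rows]
assert np.allclose(A @ x, conv[rows])
```

This is why such matrices are attractive in practice: Ax can be applied in O(n log n) time via the FFT instead of O(mn).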

  17. Circulant case: one cut fixes all cuts
Figure: We draw a first Gaussian a_1 to form the first hyperplane. This fixes all other hyperplanes.

  18. Guarantee via RIP_{1,2}
  min_{z ∈ ℝ^n} ‖z‖_1  s.t.  sign(Az) = sign(Ax),  ‖Az‖_1 = 1.   (LP)
A satisfies RIP^eff_{1,2}(s, δ) if
  (1 − δ)‖x‖_2 ≤ ‖Ax‖_1 ≤ (1 + δ)‖x‖_2   (2)
for all x ∈ ℝ^n with ‖x‖_1 ≤ √s ‖x‖_2.

Theorem (Foucart '17). Let δ ≤ 1/5. If A satisfies RIP^eff_{1,2}(9s, δ), then for all x ∈ ℝ^n with ‖x‖_1 ≤ √s and ‖x‖_2 = 1, the LP-reconstruction x#_LP satisfies ‖x − x#_LP‖_2 ≤ 2√(5δ).

The matrix √(π/2) (1/m) A, with A standard Gaussian, satisfies (2) if m ≳ δ⁻² s log(n/s) (Schechtman '06).
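The ℓ1/ℓ2 isometry (2) can be observed for the rescaled Gaussian matrix. This sketch checks concentration for one fixed x only, not the uniform statement over all effectively sparse vectors; dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 50, 5000
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                 # ||x||_2 = 1

# Rescaled Gaussian matrix sqrt(pi/2) * (1/m) * A.
A = np.sqrt(np.pi / 2) / m * rng.standard_normal((m, n))

# Since E|<a_i, x>| = sqrt(2/pi) ||x||_2 for Gaussian a_i, the quantity
# ||Ax||_1 concentrates around ||x||_2 = 1 as m grows.
assert abs(np.linalg.norm(A @ x, 1) - 1.0) < 0.05
```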

  19. Recovery of direction
C_g = Gaussian circulant matrix, R_I(x) = (x_i)_{i∈I}. Let I = {i ∈ [n] : θ_i = 1}, where the θ_i are i.i.d. random selectors with mean m/n.

Theorem (D., Jung, Rauhut '17+). Let A = R_I C_g and δ < log^{−1/2}(s) log^{−1/4}(n), and suppose that
  m ≳ δ⁻¹ s log(n/s)   (3)
and s ≲ δ n/log n. Then w.h.p.: for every x ∈ ℝ^n with ‖x‖_1 ≤ √s and ‖x‖_2 = 1, the LP-reconstruction x#_LP satisfies ‖x − x#_LP‖_2 ≲ δ^{1/8}.

For recovery of all exactly s-sparse vectors on the unit sphere, a dependence δ^{1/4} can be obtained.

  20. Full vector recovery: Gaussian case
  y = sign(Ax + τ).
Observation if A is Gaussian (Knudson-Saab-Ward '16, Baraniuk-Foucart-Needell-Plan-Wootters '17): one can recover the full signal by taking Gaussian thresholds. Consider
  min_{z ∈ ℝ^n} ‖z‖_1  s.t.  sign(Az + τ) = sign(Ax + τ),  ‖z‖_2 ≤ R.   (CP)

Theorem (Baraniuk-Foucart-Needell-Plan-Wootters '17). Let A be standard Gaussian and τ_1, ..., τ_m independent N(0, R²)-distributed. If m ≳ δ⁻¹ s log(n/s), then w.h.p.: for any x ∈ ℝ^n with ‖x‖_1 ≤ √s ‖x‖_2 and ‖x‖_2 ≤ R, any solution x#_CP of (CP) satisfies ‖x − x#_CP‖_2 ≤ R δ^{1/4}.
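Why dithering recovers the magnitude can be seen directly from the sign patterns: without thresholds, x and 2x produce identical measurements, while Gaussian thresholds separate them. An illustrative sketch (sizes and scales arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, R = 32, 200, 4.0
x = rng.standard_normal(n)
x *= 3.0 / np.linalg.norm(x)           # ||x||_2 = 3 <= R
A = rng.standard_normal((m, n))
tau = R * rng.standard_normal(m)       # tau_i ~ N(0, R^2)

# Without dithering, x and 2x are indistinguishable (magnitude is lost):
assert np.array_equal(np.sign(A @ x), np.sign(A @ (2 * x)))
# Gaussian dithering separates them, enabling full vector recovery:
assert not np.array_equal(np.sign(A @ x + tau), np.sign(A @ (2 * x) + tau))
```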

  21. Full vector recovery: circulant case
Theorem (D., Jung, Rauhut '17+). Let A = R_I C_g and τ_1, ..., τ_m independent N(0, R²)-distributed. Suppose δ < log^{−1/2}(s) log^{−1/4}(n),
  m ≳ δ⁻¹ s log(n/s)
and s ≲ δ n/log n. Then w.h.p.: for any x ∈ ℝ^n with ‖x‖_1 ≤ √s ‖x‖_2 and ‖x‖_2 ≤ R, any solution x#_CP of (CP) satisfies ‖x − x#_CP‖_2 ≤ R δ^{1/8}.

  22. Dithering ↔ hyperplane shifts
Figure: a_1 determines a hyperplane a_1^⊥ = {x ∈ ℝ^n : ⟨a_1, x⟩ = 0}. The shifted hyperplane a_{1,τ_1}^⊥ = {x ∈ ℝ^n : ⟨a_1, x⟩ + τ_1 = 0} divides ℝ^n into {x ∈ ℝ^n : ⟨a_1, x⟩ > −τ_1} and {x ∈ ℝ^n : ⟨a_1, x⟩ < −τ_1}.

  23. Dithering ↔ back to direction recovery
Let A = R_I C_g and h ∈ ℝ^n standard Gaussian. Then
  min_{z ∈ ℝ^n} ‖z‖_1  s.t.  sign(Az + τ) = sign(Ax + τ),  ‖z‖_2 ≤ R
is
  min_{z ∈ ℝ^n} ‖z‖_1  s.t.  sign(R_I [C_g  h] z̄) = sign(R_I [C_g  h] x̄),  ‖z‖_2 ≤ R
with z̄ = [z, R]/‖[z, R]‖_2 and x̄ = [x, R]/‖[x, R]‖_2.
Easy calculation: ‖x − z‖_2 ≤ 2R ‖x̄ − z̄‖_2 if ‖x‖_2, ‖z‖_2 ≤ R.
Conclusion: it is sufficient to show the RIP_{1,2}-property for (a rescaled version of) [C_g  h].
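The augmentation trick can be checked in a few lines: appending the column τ/R to the measurement matrix turns dithered measurements of x into undithered measurements of the normalized augmented vector [x, R]. A sketch where a plain Gaussian matrix stands in for the subsampled circulant R_I C_g, and the dither directions are taken already subsampled:

```python
import numpy as np

rng = np.random.default_rng(8)
n, m, R = 12, 40, 2.0
A = rng.standard_normal((m, n))        # stand-in for the subsampled circulant R_I C_g
h = rng.standard_normal(m)             # subsampled Gaussian dither direction
tau = R * h                            # tau_i ~ N(0, R^2)
x = rng.standard_normal(n)

# Augmented matrix [A | tau/R] acting on the normalized vector [x, R]:
A_aug = np.hstack([A, h[:, None]])
x_bar = np.append(x, R)
x_bar /= np.linalg.norm(x_bar)

# sign is invariant under positive scaling, so the dithered measurements of x
# coincide with the undithered measurements of x_bar:
assert np.array_equal(np.sign(A @ x + tau), np.sign(A_aug @ x_bar))
```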
