
Fast Reconstruction Algorithms for Deterministic Sensing Matrices (PowerPoint presentation)



  1. Fast Reconstruction Algorithms for Deterministic Sensing Matrices and Applications
Robert Calderbank et al., Program in Applied and Computational Mathematics, Princeton University, NJ 08544, USA.

  2. Introduction
What is Compressive Sensing? When sample-by-sample measurement is expensive and redundant:
- Compressive Sensing: transform to a low-dimensional measurement domain
- Machine Learning: filtering in the measurement domain

  3. Take-Home Message
Compressed Sensing is a Credit Card! We want one with no hidden charges.

  4. Geometry of Sparse Reconstruction
Restricted Isometry Property (RIP): an N × C matrix A satisfies (k, ε)-RIP if for any k-sparse signal x:
  (1 − ε) ‖x‖² ≤ ‖Ax‖² ≤ (1 + ε) ‖x‖².
Theorem [Candès, Tao 2006]: if the entries of √N A are iid samples from an N(0, 1) Gaussian or U(−1, 1) Bernoulli distribution, and N = Ω(k log(C/k)), then with probability 1 − e^(−cN), A has (k, ε)-RIP.
Reconstruction algorithm [Candès, Tao 2006; Donoho 2006]: if A satisfies (3k, ε)-RIP for ε ≤ 0.4, then given any k-sparse solution x to Ax = b, the linear program
  minimize ‖z‖₁ such that Az = b
recovers x successfully, and is robust to noise.
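Since the recovery step is just a linear program, it is easy to prototype. Below is a minimal sketch of basis pursuit using scipy.optimize.linprog, with the standard split z = u − v, u, v ≥ 0; the function name and toy problem sizes are our choices, not from the slides.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||z||_1 subject to A z = b via the LP split z = u - v, u, v >= 0."""
    N, C = A.shape
    c = np.ones(2 * C)                    # objective sum(u) + sum(v) = ||z||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * C))
    return res.x[:C] - res.x[C:]

# Toy check: recover a 3-sparse signal from N = 40 Gaussian measurements.
rng = np.random.default_rng(0)
C, N, k = 128, 40, 3
A = rng.standard_normal((N, C)) / np.sqrt(N)
x = np.zeros(C)
x[rng.choice(C, size=k, replace=False)] = rng.standard_normal(k)
x_hat = basis_pursuit(A, A @ x)
print(np.max(np.abs(x - x_hat)))          # near machine precision on success
```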

  5. Expander-Based Random Sensing
A: adjacency matrix of a (2k, ε) expander graph, so no 2k-sparse vector lies in the null space of A.
Theorem [Jafarpour, Xu, Hassibi, Calderbank 2008]: if ε ≤ 1/4, then any k-sparse solution x to Ax = b can be recovered successfully in at most 2k rounds.
Gap: g_t = b − A x_t, a proxy for the difference between the current estimate x_t and x.
Algorithm: greedy reduction of the gap, as sketched below.
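In code, one round of this gap-reduction idea looks roughly as follows. This is a hedged sketch of a greedy decoder for a 0/1 adjacency matrix, not the authors' exact algorithm; the function name and tie-breaking rule are ours.

```python
import numpy as np

def greedy_expander_recover(A, b, k):
    """Greedy gap reduction: repeatedly pick the coordinate update that zeroes
    the largest number of entries of the gap g = b - A x. A is the 0/1
    adjacency matrix of an expander graph; illustration only."""
    N, C = A.shape
    x = np.zeros(C)
    for _ in range(2 * k):                     # theorem: at most 2k rounds
        g = b - A @ x
        if not np.any(g):
            break
        best = (0, None, None)                 # (count, column, value)
        for j in range(C):
            nbrs = A[:, j] > 0                 # measurements touching column j
            vals, counts = np.unique(g[nbrs], return_counts=True)
            for v, c in zip(vals, counts):
                if v != 0 and c > best[0]:
                    best = (c, j, v)
        if best[1] is None:
            break
        x[best[1]] += best[2]                  # greedy update on that coordinate
    return x
```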

  6. Two Recent Results: Performance Bounds for Expander Sensing with Poisson Noise
Let A be the adjacency matrix of an expander graph, x* a sparse signal, and y noisy compressed-sensing measurements in the Poisson model. Set
  x̂ = arg min Σ_{j=1..N} ((Ax)_j − y_j log (Ax)_j) + γ pen(x),
optimizing over the simplex (positive values), with pen a well-chosen penalty function. Then x̂ ≈ x*.
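For concreteness, the penalized Poisson objective can be written down directly. A minimal sketch, with pen passed in as a callable since the slide only says it is "a well chosen penalty function"; the example ℓ₁ penalty is an illustrative choice of ours.

```python
import numpy as np

def poisson_objective(A, x, y, gamma, pen):
    """Penalized Poisson negative log-likelihood (up to terms constant in x):
    sum_j ((Ax)_j - y_j * log (Ax)_j) + gamma * pen(x)."""
    Ax = A @ x
    return np.sum(Ax - y * np.log(Ax)) + gamma * pen(x)

# Example penalty: a sparsity-promoting l1 term.
l1_pen = lambda x: np.sum(np.abs(x))
```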

  7. k-Sparse Reconstruction with Random Sensing Matrices

  Approach                                   Measurements N   Complexity      Noise resilience   RIP
  Basis Pursuit (BP) [CRT]                   k log(C/k)       C^3             Yes                Yes
  Orthogonal Matching Pursuit (OMP) [GSTV]   k log^α(C)       k^2 log^α(C)    No                 Yes
  Group Testing [CM]                         k log^α(C)       k log^α(C)      No                 No
  Greedy Expander Recovery [JXHC]            k log(C/k)       C log(C/k)      No                 RIP-1
  Expanders (BP) [BGIKS]                     k log(C/k)       C^3             Yes                RIP-1
  Expander Matching Pursuit (EMP) [IR]       k log(C/k)       C log(C/k)      Yes                RIP-1
  CoSaMP [NT]                                k log(C/k)       Ck log(C/k)     Yes                Yes
  SSMP [DM]                                  k log(C/k)       Ck log(C/k)     Yes                Yes

  8. Random Signals or Random Filters?
Random Sensing: outside the mainstream of signal processing.
  1. Worst-case signal processing
  2. Less efficient recovery time
  3. No explicit constructions
  4. Larger storage
  5. Looser recovery bounds
Deterministic Sensing: aligned with the mainstream of signal processing.
  1. Average-case signal processing
  2. More efficient recovery time
  3. Explicit constructions
  4. Efficient storage
  5. Tighter recovery bounds

  9. k-Sparse Reconstruction with Deterministic Sensing Matrices

  Approach                                   Measurements N   Complexity       Noise resilience   RIP
  LDPC Codes [BBS]                           k log C          C log C          Yes                No
  Reed-Solomon Codes [AT]                    k                k^2              No                 No
  Embedding ℓ₂ Spaces into ℓ₁ (BP) [GLR]     k (log C)^α      C^3              No                 No
  Extractors [Ind]                           k C^o(1)         k C^o(1) log C   No                 No
  Discrete Chirps [AHSC]                     √C               k N log N        Yes                StRIP
  Delsarte-Goethals Codes [CHS]              2^√(log C)       k N log^2 N      Yes                StRIP

  10. StRIP Is Simple to Design
A: an N × C matrix satisfying
- the columns form a group under pointwise multiplication
- the rows are orthogonal and all row sums are zero
α: a k-sparse signal where the positions of the k nonzero entries are equiprobable.
Theorem: given δ with 1 > δ > (k − 1)/(C − 1), then with high probability
  (1 − δ) ‖α‖² ≤ ‖Aα‖² ≤ (1 + δ) ‖α‖².
Proof: linearity of expectation gives E[‖Aα‖²] ≈ ‖α‖², and Var[‖Aα‖²] → 0 as N → ∞.
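The statement is easy to probe numerically. The sketch below uses a chirp matrix for prime N, whose C = N² unit-norm columns form a group under pointwise multiplication, and checks that ‖Aα‖²/‖α‖² concentrates around 1 for random-support α; the sizes and Monte-Carlo setup are our choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, trials = 31, 4, 2000                      # N prime, C = N^2 columns
t = np.arange(N)
A = np.stack([np.exp(2j * np.pi * (a * t * t + b * t) / N) / np.sqrt(N)
              for a in range(N) for b in range(N)], axis=1)

ratios = []
for _ in range(trials):
    S = rng.choice(N * N, size=k, replace=False)          # equiprobable support
    alpha = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    ratios.append(np.linalg.norm(A[:, S] @ alpha) ** 2
                  / np.linalg.norm(alpha) ** 2)
print(np.mean(ratios), np.std(ratios))           # mean near 1, small spread
```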

  11. Two Recent Results: Uniqueness of Sparse Representation and ℓ₁ Recovery
McDiarmid's inequality: given a function f for which, for all x_1, ..., x_k and x'_i,
  |f(x_1, ..., x_i, ..., x_k) − f(x_1, ..., x'_i, ..., x_k)| ≤ c_i,
and given independent random variables X_1, ..., X_k, then
  Pr[f(X_1, ..., X_k) ≥ E[f(X_1, ..., X_k)] + η] ≤ exp(−2η² / Σ_i c_i²).
Relaxed assumption: for all i, j,
  | |Σ_x φ_i(x)|² − |Σ_x φ_j(x)|² | ≤ N^(2−η).
Then:
  1. Uniqueness of the sparse representation
  2. ℓ₁ recovery of complex Steinhaus signals (random phase, arbitrary magnitude)
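As a quick numeric illustration of the tail bound (the example numbers are ours):

```python
from math import exp

def mcdiarmid_tail(c, eta):
    """Pr[f(X) >= E f(X) + eta] <= exp(-2 eta^2 / sum_i c_i^2)."""
    return exp(-2 * eta ** 2 / sum(ci ** 2 for ci in c))

# 100 coordinates, each able to move f by at most 1, deviation eta = 25:
print(mcdiarmid_tail([1.0] * 100, 25.0))   # exp(-12.5), about 3.7e-6
```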

  12. Kerdock Sets
Kerdock set K_m: 2^m binary symmetric m × m matrices.
Tensor C_0(x, y, a): F_{2^m} × F_{2^m} × F_{2^m} → F_2 given by
  Tr[xya] = (x_0, ..., x_{m−1}) P_0(a) (y_0, ..., y_{m−1})^T.
Theorem: the difference of any two matrices P_0(a) in K_m is nonsingular.
Proof: non-degeneracy of the trace.
Example: m = 3, primitive irreducible polynomial g(x) = x³ + x + 1:
  P_0(100) = [1 0 0; 0 0 1; 0 1 0]
  P_0(010) = [0 0 1; 0 1 0; 1 0 1]
  P_0(001) = [0 1 0; 1 0 1; 0 1 1]
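The example is easy to reproduce in code. A minimal sketch for m = 3 over F_2[x]/(x³ + x + 1), with field elements encoded as 3-bit integers a₀ + 2a₁ + 4a₂; the helper names are ours.

```python
import numpy as np

MOD = 0b1011                                # g(x) = x^3 + x + 1

def gf8_mul(a, b):
    """Multiply in GF(8) = F_2[x] / (x^3 + x + 1)."""
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for d in (4, 3):                        # reduce degrees 4 and 3
        if (r >> d) & 1:
            r ^= MOD << (d - 3)
    return r

def trace(a):
    """Tr(a) = a + a^2 + a^4, which lands in F_2 = {0, 1}."""
    a2 = gf8_mul(a, a)
    return (a ^ a2 ^ gf8_mul(a2, a2)) & 1

def kerdock_matrix(a, m=3):
    """P_0(a)[i][j] = Tr(alpha^i * alpha^j * a) in the basis {1, alpha, alpha^2}."""
    basis = [1 << i for i in range(m)]
    return np.array([[trace(gf8_mul(gf8_mul(basis[i], basis[j]), a))
                      for j in range(m)] for i in range(m)])

# a = 1, alpha, alpha^2 correspond to the slide's 100, 010, 001:
for a in (1, 2, 4):
    print(kerdock_matrix(a))
```

Running this reproduces the three matrices on the slide, and one can check directly that pairwise differences (XORs over F_2) are nonsingular.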

  13. Delsarte-Goethals Sets
Tensor C_t(x, y, a): F_{2^m} × F_{2^m} × F_{2^m} → F_2 given by
  C_t(x, y, a) = Tr[(x y^(2^t) + x^(2^t) y) a] = (x_0, ..., x_{m−1}) P_t(a) (y_0, ..., y_{m−1})^T.
Delsarte-Goethals set DG(m, r): 2^((r+1)m) binary symmetric m × m matrices:
  DG(m, r) = { Σ_{t=0..r} P_t(a_t) | a_0, ..., a_r ∈ F_{2^m} }.
A framework for exploiting prior information about the signal.
Theorem: the difference of any two matrices in DG(m, r) has rank at least m − 2r.
Proof: non-degeneracy of the trace.
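Continuing the GF(8) sketch from the Kerdock slide (and hedging the same way: we take t = 0 to be the Kerdock form Tr[xya] above, while t ≥ 1 uses the symmetrized tensor), DG(3, 1) can be enumerated and checked for distinctness, which is what rank ≥ m − 2r = 1 means for m = 3, r = 1:

```python
def gf8_pow(a, e):
    """Repeated gf8_mul; fine for the small exponents used here."""
    r = 1
    for _ in range(e):
        r = gf8_mul(r, a)
    return r

def dg_matrix(t, a, m=3):
    """P_t(a)[i][j] = Tr((x y^(2^t) + x^(2^t) y) a), x = alpha^i, y = alpha^j."""
    basis = [1 << i for i in range(m)]
    M = np.zeros((m, m), dtype=int)
    for i in range(m):
        for j in range(m):
            x, y = basis[i], basis[j]
            s = gf8_mul(x, gf8_pow(y, 2 ** t)) ^ gf8_mul(gf8_pow(x, 2 ** t), y)
            M[i, j] = trace(gf8_mul(s, a))
    return M

# DG(3, 1) = { P_0(a0) + P_1(a1) }: 2^(2m) = 64 matrices, all distinct.
dg = {tuple((kerdock_matrix(a0) ^ dg_matrix(1, a1)).ravel())
      for a0 in range(8) for a1 in range(8)}
print(len(dg))                              # 64
```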

  14. Incorporating Prior Information via the Delsarte-Goethals Sets
The Delsarte-Goethals structure imparts an order of preference on the columns of a Reed-Muller sensing matrix:
  K_m = DG(m, 0) ⊂ DG(m, 1) ⊂ ... ⊂ DG(m, (m − 1)/2)
Better inner products ←→ worse inner products.
[Figure: inner products across the ordered columns]
If a prior distribution on the positions of the sparse components is known, the DG structure provides a means to assign the best columns to the components most likely to be present.

  15. Reed-Muller Sensing Matrices
  A = { φ_{P,b}(x) : P ∈ DG(m, r), b ∈ Z_2^m }
A has N = 2^m rows and C = 2^((r+2)m) columns, with
  φ_{P,b}(x) = i^(wt(d_P) + 2 wt(b)) i^(x P x^T + 2 b x^T)
where d_P denotes the diagonal of P. A is a union of 2^((r+1)m) orthonormal bases Γ_P.
The coherence between bases Γ_P and Γ_Q is determined by R = rank(P + Q).
Theorem: any vector in Γ_P has inner product 2^(−R/2) with 2^R vectors in Γ_Q and is orthogonal to the remaining vectors.
Proof: exponential sums, or properties of the symplectic group Sp(2m, 2).
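A direct transcription of the column formula makes the coherence statement testable. This sketch reuses kerdock_matrix from the slide-12 code; our reading evaluates the exponents as integers mod 4, and for the two Kerdock matrices chosen, P + Q is nonsingular over F_2, so R = 3 and every inner product should have magnitude 2^(−3/2) ≈ 0.354.

```python
from itertools import product
import numpy as np

def rm_column(P, b, m):
    """phi_{P,b}(x) = i^(wt(d_P)+2wt(b)) * i^(x P x^T + 2 b x^T), normalized."""
    N = 2 ** m
    col = np.empty(N, dtype=complex)
    for idx, bits in enumerate(product((0, 1), repeat=m)):
        x = np.array(bits)
        col[idx] = 1j ** (int(x @ P @ x + 2 * (b @ x)) % 4)
    phase = 1j ** ((int(P.diagonal().sum()) + 2 * int(b.sum())) % 4)
    return phase * col / np.sqrt(N)

m = 3
P, Q = kerdock_matrix(1), kerdock_matrix(2)      # rank(P + Q) = 3 over F_2
u = rm_column(P, np.zeros(m, dtype=int), m)
ips = [abs(np.vdot(rm_column(Q, np.array(bb), m), u))
       for bb in product((0, 1), repeat=m)]
print(np.round(ips, 4))                          # eight values of 2^(-3/2)
```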

  16. Quadratic Reconstruction Algorithm
(* denotes complex conjugation; f = Σ_j α_j φ_{P_j,b_j} is the measured superposition)
  f(x + a) f*(x) = (1/N) Σ_{j=1..k} |α_j|² (−1)^(a P_j x^T) + (1/N) Σ_{j≠t} α_j α_t* φ_{P_j,b_j}(x + a) φ*_{P_t,b_t}(x)
The first term concentrates energy at k Walsh-Hadamard tones; (1/N) Σ_{j=1..k} |α_j|⁴ is the signal energy in those tones.
The second term distributes energy uniformly across all N tones; its l-th Fourier coefficient is
  Γ_a^l = (1/N^(3/2)) Σ_{j≠t} α_j α_t* Σ_x (−1)^(l x^T) φ_{P_j,b_j}(x + a) φ*_{P_t,b_t}(x)
Theorem: lim_{N→∞} E[N² |Γ_a^l|²] = Σ_{j≠t} |α_j|² |α_t|².
[Note: ‖f‖₂⁴ = Σ_{x,a} |f(x + a) f*(x)|².]
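The algorithmic content of the identity is: form the quadratic f(x + a) f*(x) and take a Walsh-Hadamard transform; the k strong tones reveal a P_j. A minimal sketch of that step (helper names are ours):

```python
import numpy as np

def walsh_hadamard(v):
    """Fast Walsh-Hadamard transform (unnormalized); len(v) a power of two."""
    v = v.astype(complex).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return v

def quadratic_step(f, a):
    """Tones of f(x + a) * conj(f(x)); x + a in F_2^m is XOR on indices."""
    shifted = f[np.arange(len(f)) ^ a]
    return walsh_hadamard(shifted * np.conj(f))
```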

  17. Quadratic Reconstruction Algorithm
Example: N = 2^10 and C = 2^55.

  18. Fundamental Limits
Information-theoretic rule of thumb: the number of measurements N required by Basis Pursuit satisfies
  N > k log₂(1 + C/k).
Kerdock sensing: C = 2^20, k = 70: N = 1024 versus 971.
RM(2, m): C = 2^55, k = 20: N = 1024 versus 1014.
[Figure: required measurements versus K (# components)]
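The two comparisons are easy to recompute from the rule of thumb:

```python
from math import log2

rule = lambda k, C: k * log2(1 + C / k)
print(round(rule(70, 2 ** 20)))   # 971  (Kerdock:  C = 2^20, k = 70)
print(round(rule(20, 2 ** 55)))   # 1014 (RM(2,m):  C = 2^55, k = 20)
```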
