
Compressed Sensing: Challenges and Emerging Topics - Mike Davies (PowerPoint PPT Presentation)



  1. IDCOM, University of Edinburgh Compressed Sensing: Challenges and Emerging Topics Mike Davies Edinburgh Compressed Sensing research group (E-CoS) Institute for Digital Communications University of Edinburgh

  2. IDCOM, University of Edinburgh Compressed sensing. Engineering challenges in CS:
  • What is the right signal model? Sometimes obvious, sometimes not. When can we exploit additional structure?
  • How can/should we sample? Physical constraints; can we sample randomly; effects of noise; exploiting structure; how many measurements?
  • What are our application goals? Reconstruction? Detection? Estimation?

  3. IDCOM, University of Edinburgh CS today – the hype! Papers published in Sparse Representations and CS [Elad 2012]. Lots of papers… lots of excitement… lots of hype…

  4. IDCOM, University of Edinburgh CS today: new directions & challenges. There are many new emerging directions in CS and many challenges that have to be tackled:
  • Fundamental limits in CS
  • Structured sensing matrices
  • Advanced signal models
  • Data driven dictionaries
  • Effects of quantization
  • Continuous (off the grid) CS
  • Computationally efficient solutions
  • Compressive signal processing
  [Diagram: m×n measurement matrix applied to an n×l sparse signal with k nonzero rows, giving m×l measurements.]

  5. IDCOM, University of Edinburgh Compressibility and Noise Robustness

  6. IDCOM, University of Edinburgh Noise/Model Robustness. CS is robust to measurement noise (through the RIP). What about signal errors, $y = \Phi(x + e)$, or when $x$ is not exactly sparse? No free lunch!
  Wideband spectral sensing – detecting signals through wideband receiver noise suffers from noise folding:
  • 3 dB SNR loss per factor of 2 undersampling (i.e. per octave) [Treichler et al 2011]
  [Plot: theory (-3 dB per octave), MC (solid) and MWC (dashed), at input SNR = 20 dB, 10 dB and 0 dB.]

  7. IDCOM, University of Edinburgh Noise/Model Robustness: Sample-Distortion Bounds.
  Compressible distributions:
  • Heavy tailed distributions may not be well approximated by low dimensional models.
  • Fundamental limits in terms of the compressibility of the probability distribution [D. & Guo 2011; Gribonval et al 2012].
  [Plot: sample-distortion bounds for Gaussian, Laplace and GGD (α = 0.4) distributions.]
  Implications for compressive imaging:
  • Wavelet coefficients are not exactly sparse; this limits CS imaging performance.
  • Adaptive sensing can retrieve the lost SNR [Haupt et al 2011].
  [Plot: reconstruction signal-to-distortion ratio (dB) vs. undersampling ratio δ (0.1 to 0.3) for Cameraman with SA+BAMP, Uniform+TurboAMP, SA+TurboAMP, ESA+TurboAMP and HSA+TurboAMP.]

  8. IDCOM, University of Edinburgh Sensing matrices

  9. IDCOM, University of Edinburgh Generalized Dimension Reduction. Information preserving matrices can be used to preserve information beyond sparsity. Robust embeddings (RIP for difference vectors):
  $(1-\delta)\,\|x - x'\|_2^2 \le \|\Phi(x - x')\|_2^2 \le (1+\delta)\,\|x - x'\|_2^2$
  hold for many low dimensional sets:
  • Sets of $n$ points [Johnson and Lindenstrauss 1984]: $m \sim O(\delta^{-2} \log n)$
  • $d$-dimensional affine subspaces [Sarlos 2006]: $m \sim O(\delta^{-2} d)$
  • Arbitrary union of $L$ $k$-dimensional subspaces [Blumensath and D. 2009]: $m \sim O(\delta^{-2}(k + \log L))$
  • Set of rank-$r$ $n_1 \times n_2$ matrices [Recht et al 2010]: $m \sim O(\delta^{-2}\, r\,(n_1 + n_2)\log(n_1 n_2))$
  • $d$-dimensional manifolds [Baraniuk and Wakin 2006, Clarkson 2008]: $m \sim O(\delta^{-2} d)$
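  To make the point-set case concrete, here is a minimal numpy sketch (illustrative only; the dimensions and the Gaussian choice of Φ are my assumptions, not from the talk) verifying that a random projection approximately preserves all pairwise distances among a cloud of points:

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    N, n, m = 2000, 50, 300                          # ambient dimension, #points, sketch dimension

    X = rng.standard_normal((N, n))                  # n points in R^N (columns)
    Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # random Gaussian embedding
    Y = Phi @ X

    # Ratios of embedded to original pairwise distances should concentrate near 1,
    # i.e. the RIP-for-difference-vectors condition above holds with a small delta.
    ratios = [np.linalg.norm(Y[:, i] - Y[:, j]) / np.linalg.norm(X[:, i] - X[:, j])
              for i, j in combinations(range(n), 2)]
    print(min(ratios), max(ratios))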

  10. IDCOM, University of Edinburgh Structured CS sensing matrices. i.i.d. sensing matrices are really only of academic interest. Need to consider wider classes, e.g.:
  • Random rows of the DFT [Rudelson & Vershynin 2008]
  [Diagram: M×1 measurements = (M×N row selection) × (N×N Fourier matrix) × (N×1 signal).]
  δ-RIP of order k with high probability if: $m \sim O(\delta^{-2}\, k \log^4 N)$

  11. IDCOM, University of Edinburgh Structured CS sensing matrices. i.i.d. sensing matrices are really only of academic interest. Need to consider wider classes, e.g.:
  • Random samples of a bounded orthogonal system [Rauhut 2010]
  [Diagram: M×1 measurements = (M×N row selection) × (N×N system Φ*Ψ) × (N×1 signal).]
  Also extends to continuous domain signals. δ-RIP of order k with high probability if:
  $m \sim O(\delta^{-2}\, \mu^2(\Phi,\Psi)\, k \log^4 N)$
  where $\mu(\Phi,\Psi) = \sqrt{N}\,\max_{i,j} |\langle \phi_i, \psi_j \rangle|$ is called the mutual coherence.

  12. IDCOM, University of Edinburgh Structured CS sensing matrices. i.i.d. sensing matrices are really only of academic interest. Need to consider wider classes, e.g.:
  • Universal spread spectrum sensing [Puy et al 2012]
  [Diagram: random modulation followed by a partial Fourier matrix, applied to a signal sparse in the basis Ψ.]
  The sensing matrix is a random modulation followed by a partial Fourier matrix. δ-RIP of order k with high probability if:
  $m \sim O(\delta^{-2}\, k \log^4 N)$
  independent of the basis Ψ!

  13. IDCOM, University of Edinburgh Fast Johnson Lindenstrauss Transform (FJLT). Computationally fast dimension reducing transforms can be generated [Ailon & Chazelle 2006]:
  • The FJLT provides optimal JL dimension reduction with computation of $O(N \log N)$.
  [Diagram: m×1 sketch = (m×N random projection Φ) × (N×N Fourier/Hadamard matrix) × (N×N diagonal of ±1s) applied to the input.]
  • Enables fast approximate nearest neighbour search.
  • Used in the related area of sketching…
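  A minimal numpy sketch of the same recipe (illustrative and simplified; this is a subsampled randomized Fourier transform rather than the exact FJLT construction, and the function name and sizes are mine):

    import numpy as np

    def srft_sketch(x, m, rng):
        """Apply S x = sqrt(N/m) * R * F * D * x, where D is a random +/-1 diagonal,
        F is the unitary DFT and R keeps m coordinates chosen at random.
        Total cost is O(N log N); the (complex) output approximately preserves norms."""
        N = x.shape[0]
        signs = rng.choice([-1.0, 1.0], size=N)        # D: random sign flips
        z = np.fft.fft(signs * x, norm="ortho")        # F D x, norm preserving
        idx = rng.choice(N, size=m, replace=False)     # R: random coordinate sample
        return np.sqrt(N / m) * z[idx]

    rng = np.random.default_rng(0)
    N, m = 4096, 256
    x = rng.standard_normal(N)
    print(np.linalg.norm(x), np.linalg.norm(srft_sketch(x, m, rng)))   # roughly equal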

  14. IDCOM, University of Edinburgh Related ideas of sketching. e.g. we want to solve the $\ell_2$-regression problem [Sarlos 06]:
  $x^\star = \arg\min_x \|Ax - b\|_2$, with $b \in \mathbb{R}^n$, $A \in \mathbb{R}^{n \times d}$.
  Computational cost using the normal equations: $O(nd^2)$.
  Instead use a fast JL transform $S \in \mathbb{R}^{m \times n}$ (Fourier/Hadamard based) to solve:
  $\tilde{x} = \arg\min_x \|(SA)x - Sb\|_2$
  If $m \sim d/\epsilon$ then this guarantees:
  $\|A\tilde{x} - b\|_2 \le (1 + \epsilon)\,\|Ax^\star - b\|_2$
  with high probability, at a computational cost of $O(nd \log n + \mathrm{poly}(d/\epsilon))$.
  Many other sketching results are possible, including constrained LS, approximate SVD, etc.
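  A small numpy illustration of sketch-and-solve least squares (the dense Gaussian sketch below is a stand-in chosen for brevity; a fast Fourier/Hadamard sketch would bring the cost of forming SA down to about O(nd log n); the problem sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, m = 5000, 50, 400                    # tall least-squares problem, sketch size m

    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    # Exact solution for reference.
    x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Sketch-and-solve: solve the much smaller m x d problem (SA)x ~ Sb.
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    x_tilde, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

    print(np.linalg.norm(A @ x_star - b))      # optimal residual
    print(np.linalg.norm(A @ x_tilde - b))     # typically within a small factor of optimal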

  15. IDCOM, University of Edinburgh Advanced signal models & algorithms

  16. IDCOM, University of Edinburgh CS with Low Dimensional Models. What about sensing with other low dimensional signal models?
  – Matrix completion / rank minimization
  – Phase retrieval
  – Tree based sparse recovery
  – Group/Joint sparse recovery
  – Manifold recovery
  … towards a general model-based CS? [Baraniuk et al 2010, Blumensath 2011]
  [Diagram: m×n measurement matrix applied to an n×l sparse signal with k nonzero rows, giving m×l measurements.]

  17. IDCOM, University of Edinburgh Matrix Completion / Rank Minimization. Retrieve the unknown matrix $X \in \mathbb{R}^{n_1 \times n_2}$ from a set of linear observations $y = \Phi(X)$, $y \in \mathbb{R}^m$, with $m < n_1 n_2$. Suppose that $X$ is rank $r$. Relax! As with $\ell_1$ minimization, we convexify: replace $\mathrm{rank}(X)$ with the nuclear norm $\|X\|_* = \sum_i \sigma_i$, where $\sigma_i$ are the singular values of $X$:
  $\hat{X} = \arg\min_X \|X\|_* \quad \text{subject to} \quad \Phi(X) = y$
  Random measurements (RIP) ⟶ successful recovery if $m \sim O(r\,(n_1 + n_2)\log(n_1 n_2))$.
  e.g. the Netflix prize: rate movies for individual viewers.
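  For illustration, a bare-bones proximal-gradient / singular value thresholding solver for the entry-wise observation case (a sketch only; the regularization weight, step size and iteration count are arbitrary untuned choices, and this regularized formulation is a stand-in for the constrained problem above):

    import numpy as np

    def complete_matrix(M_obs, mask, tau=1.0, step=1.0, iters=500):
        """Minimize 0.5*||mask*(X - M_obs)||_F^2 + tau*||X||_* by proximal gradient:
        a gradient step on the data fit, then soft-thresholding of the singular
        values (the proximal operator of the nuclear norm)."""
        X = np.zeros_like(M_obs)
        for _ in range(iters):
            Y = X - step * mask * (X - M_obs)                  # gradient step on observed entries
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)   # SVD of the iterate
            X = (U * np.maximum(s - step * tau, 0.0)) @ Vt     # shrink singular values
        return X

    # Tiny synthetic example: a rank-2 matrix with ~50% of entries observed.
    rng = np.random.default_rng(0)
    n1, n2, r = 60, 50, 2
    M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
    mask = (rng.random((n1, n2)) < 0.5).astype(float)
    X_hat = complete_matrix(mask * M, mask)
    print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))       # relative recovery error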

  18. IDCOM, University of Edinburgh Phase Retrieval. Generic problem: unknown $x \in \mathbb{C}^n$, magnitude-only observations $y_i = |\langle a_i, x \rangle|^2$.
  Applications:
  • X-ray crystallography
  • Diffraction imaging
  • Spectrogram inversion
  Phase retrieval via matrix completion [Candes et al 2011]: PhaseLift lifts the quadratic problem to a linear one using the rank-1 matrix $X = xx^*$ and solves:
  $\hat{X} = \arg\min_X \|X\|_* \quad \text{subject to} \quad \mathcal{A}(X) = y$
  Provable performance, but the lifted space is huge! … surely more efficient solutions exist? Recent results indicate that nonconvex solutions do better.
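  As a point of reference (not PhaseLift and not one of the recent nonconvex methods; just the classical alternating-projection / error-reduction iteration, with the sizes and oversampling chosen by me), the most basic approach looks like this:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 32, 8 * 32                                # generous oversampling helps convergence

    A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y = np.abs(A @ x_true)                           # magnitudes (un-squared, same information)

    # Error reduction: impose the measured magnitudes, keep the current phases,
    # then least-squares back-project. Can stagnate for small m/n ratios.
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    A_pinv = np.linalg.pinv(A)
    for _ in range(500):
        z = y * np.exp(1j * np.angle(A @ x))
        x = A_pinv @ z

    # Compare up to the unavoidable global phase ambiguity.
    c = np.vdot(x, x_true) / abs(np.vdot(x, x_true))
    print(np.linalg.norm(c * x - x_true) / np.linalg.norm(x_true))   # small if converged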

  19. IDCOM, University of Edinburgh Tree Structured Sparse Representations. Sparse signal models are a type of "union of subspaces" model [Lu & Do 2008, Blumensath & Davies 2009] with an exponential number of subspaces:
  # subspaces $= \binom{N}{k} \approx (Ne/k)^k$ (Stirling approximation).
  Tree structured sparse sets have far fewer subspaces:
  # subspaces $\lesssim \frac{1}{k+1}\binom{2k}{k}$ (Catalan numbers).
  Example exploiting wavelet tree structures: classical compressed sensing gives stable inverses when $m \sim O(k \log(N/k))$, while with tree-structured sparsity we only need $m \sim O(k)$ [Blumensath & D. 2009].
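  To get a feel for the gap in subspace counts (a small Python check with illustrative values of N and k that are not from the slide):

    from math import comb

    N, k = 1024, 32                              # illustrative sizes

    n_sparse = comb(N, k)                        # number of generic k-sparse supports
    n_tree = comb(2 * k, k) // (k + 1)           # Catalan number: rooted binary tree supports

    print(f"generic k-sparse supports : {n_sparse:.3e}")
    print(f"tree structured supports  : {n_tree:.3e}")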

  20. IDCOM, University of Edinburgh Algorithms for model-based recovery. Baraniuk et al. [2010] adapted CoSaMP and IHT to construct provably good ‘model-based’ recovery algorithms.
  [Images: original, sparse reconstruction, tree-sparse reconstruction.]
  Blumensath [2011] adapted IHT to reconstruct any low dimensional model from RIP-based CS measurements:
  $x^{n+1} = P_{\mathcal{A}}\!\left(x^n + \mu\, \Phi^T (y - \Phi x^n)\right)$
  where $\mu \sim 1/\|\Phi\|^2$ is the step size and $P_{\mathcal{A}}$ is the projection onto the signal model. This requires a computationally efficient $P_{\mathcal{A}}$ operator.
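  A minimal numpy sketch of this projected-gradient iteration (the plain k-sparse hard-thresholding projection is used here purely as an example of a cheap P_A; swapping in a tree-structured or other model projection gives the model-based variants):

    import numpy as np

    def model_iht(y, Phi, project, iters=200, step=None):
        """Iterative hard thresholding with a pluggable model projection:
        x_{n+1} = P(x_n + mu * Phi.T @ (y - Phi @ x_n))."""
        if step is None:
            step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # mu ~ 1/||Phi||^2
        x = np.zeros(Phi.shape[1])
        for _ in range(iters):
            x = project(x + step * Phi.T @ (y - Phi @ x))
        return x

    def hard_threshold(k):
        """Projection onto plain k-sparse vectors (keep the k largest magnitudes)."""
        def P(v):
            out = np.zeros_like(v)
            idx = np.argsort(np.abs(v))[-k:]
            out[idx] = v[idx]
            return out
        return P

    # Toy example: recover a k-sparse vector from m random measurements.
    rng = np.random.default_rng(0)
    N, m, k = 400, 120, 10
    x_true = np.zeros(N)
    x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    Phi = rng.standard_normal((m, N)) / np.sqrt(m)
    y = Phi @ x_true

    x_hat = model_iht(y, Phi, hard_threshold(k))
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative error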

  21. IDCOM, University of Edinburgh Model based CS for Quantitative MRI [Davies et al. SIAM Imag. Sci. 2014]. Proposes new excitation and scanning protocols based on the Bloch model.
  [Figure: random RF pulses, random/uniform subsampling, individual aliased images.]
  Quantitative reconstruction: use a projected gradient algorithm with a discretized approximation of the Bloch response manifold.
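  The projection step onto a discretized response manifold can be pictured as per-voxel matching against a dictionary of simulated responses. The sketch below is a generic, hypothetical illustration of that matching step (random atoms stand in for simulated Bloch responses; it is not the algorithm from the paper):

    import numpy as np

    def project_onto_dictionary(X, D):
        """Project each column of X onto the line spanned by its best-matching
        unit-norm dictionary atom: a schematic per-voxel projection onto a
        discretized response manifold. Names and shapes are illustrative."""
        corr = D.T @ X                                   # atom/voxel correlations
        best = np.argmax(np.abs(corr), axis=0)           # closest atom per voxel
        scale = corr[best, np.arange(X.shape[1])]        # amplitude along that atom
        return D[:, best] * scale, best

    # Tiny synthetic check with a hypothetical dictionary of 500 atoms.
    rng = np.random.default_rng(0)
    T, n_atoms, n_vox = 100, 500, 20
    D = rng.standard_normal((T, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    true_idx = rng.choice(n_atoms, n_vox)
    X = D[:, true_idx] * rng.uniform(0.5, 2.0, n_vox) + 0.01 * rng.standard_normal((T, n_vox))

    X_proj, idx = project_onto_dictionary(X, D)
    print((idx == true_idx).mean())                      # fraction of correctly matched atoms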

  22. IDCOM, University of Edinburgh Compressed Signal Processing
