Foundations of Compressed Sensing


  1. IDCOM, University of Edinburgh Foundations of Compressed Sensing Mike Davies Edinburgh Compressed Sensing research group (E-CoS) Institute for Digital Communications University of Edinburgh

  2. Part I: Foundations of CS • Introduction to sparse representations & compression • Compressed sensing – motivation and concept • Information preserving sensing matrices • Practical sparse reconstruction • Summary & engineering challenges

  3. Sparse representations and compression

  4. Fourier Representations. The frequency viewpoint (Fourier, 1822): signals can be built from the sum of harmonic functions (sine waves). Atomic representation: $x(t) = \sum_i \alpha_i \phi_i(t)$, i.e. $x = \Phi \alpha$. [Figure: a signal decomposed as a weighted sum of harmonic functions; the Fourier coefficients weight each harmonic.]
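The sparsity this viewpoint buys can be checked numerically: a signal built from a handful of harmonics is dense in time but has only a few nonzero Fourier coefficients. A minimal sketch using NumPy's FFT (the signal length and the frequencies 5 and 12 are arbitrary choices for illustration):

```python
import numpy as np

# A signal made of two harmonics: dense in the time domain,
# sparse in the Fourier (frequency) domain.
n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)

alpha = np.fft.fft(x) / n                   # Fourier coefficients
n_big = int(np.sum(np.abs(alpha) > 1e-6))   # coefficients carrying real energy
```

Because each real sinusoid occupies a +/- frequency pair, only four of the 256 Fourier coefficients are (numerically) nonzero.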

  5. Time-Frequency representations. Time and frequency (Gabor), "Theory of Communication," J. IEE (London), 1946: "… a new method of analysing signals is presented in which time and frequency play symmetrical parts…" [Figure: a Gabor 'atom' in the time (s) / frequency (Hz) plane.] Atomic (dictionary) representation: $x(t) = \sum_n \sum_k \alpha_{n,k} \, g(t - n\tau) \, e^{i k \omega t}$, i.e. $x = \Phi \alpha$.

  6. Space-Scale representations. The wavelet viewpoint: Daubechies, "Ten Lectures on Wavelets," SIAM 1992. Images can be built from sums of wavelets. These are multi-resolution, edge-like (image) functions.

  7. and many other representations … more recently: chirplets, curvelets, edgelets, wedgelets, … dictionary learning...

  8. Coding signals of interest. What is the difference between quantizing a signal/image in the transform domain rather than the signal domain? [Figure: the "Tom" test image compressed at 0.1, 0.5, 1, 2 and 3 bits per pixel, quantized in the wavelet domain (Tom's nonzero wavelet coefficients) versus quantized in the pixel domain.] Good representations are efficient – e.g. sparse!
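The transform-domain advantage can be illustrated in 1-D: for a piecewise-constant signal (the cartoon model behind wavelet image coding), keeping the k largest Haar wavelet coefficients preserves far more than keeping the k largest samples. A rough sketch; the recursive orthonormal Haar transform below is a standard construction, not code from the slides:

```python
import numpy as np

def haar(x):
    # Orthonormal Haar transform of a length-2^J signal (recursive).
    if len(x) == 1:
        return x
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth (scaling) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (wavelet) coefficients
    return np.concatenate([haar(s), d])

def ihaar(c):
    # Inverse of haar().
    if len(c) == 1:
        return c
    half = len(c) // 2
    s, d = ihaar(c[:half]), c[half:]
    x = np.empty(len(c))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

def keep_largest(v, k):
    # Zero all but the k largest-magnitude entries.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# Piecewise-constant "cartoon" signal: a few edges, constant in between.
x = np.concatenate([np.full(64, 1.0), np.full(64, -0.5), np.full(128, 2.0)])
k = 10

err_wavelet = np.linalg.norm(ihaar(keep_largest(haar(x), k)) - x)
err_pixel = np.linalg.norm(keep_largest(x, k) - x)
```

The edges generate only a handful of nonzero Haar coefficients, so ten coefficients reproduce the signal essentially exactly, while ten samples miss almost everything.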

  9. Sparsity & Compression. A vector x is k-sparse if only k of its elements are non-zero, e.g. $x = (0, 0.5, 0, 0, 0.1, 0, -0.2, 0, 0, 0, 0, 0)^T$. Such vectors have only k degrees of freedom (they are k-dimensional) and there are "N choose k", $\binom{N}{k}$, possible combinations of nonzero coefficients. Coding cost of the raw signal: $N$ floats $= O(N)$ bits. Coding cost of the sparse representation $x \approx \Phi \alpha$: $k$ floats $+ \log_2 \binom{N}{k}$ bits $\approx O(k \log_2(N/k))$ bits.
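The two coding costs above can be compared directly. A small sketch (32 bits per float is an illustrative choice, not from the slides):

```python
import math

def dense_cost_bits(N, bits_per_float=32):
    # Store every coefficient: O(N) bits.
    return N * bits_per_float

def sparse_cost_bits(N, k, bits_per_float=32):
    # Store k values plus log2(N choose k) bits to encode their positions:
    # roughly O(k log2(N/k)) bits in total.
    return k * bits_per_float + math.ceil(math.log2(math.comb(N, k)))

N, k = 65536, 100
dense_bits = dense_cost_bits(N)
sparse_bits = sparse_cost_bits(N, k)
```

For N = 65536 and k = 100 the sparse code is two orders of magnitude cheaper than storing all N floats.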

  10. Compressed sensing: motivation and concepts

  11. Generalized Sampling. Different ways to measure… equivalent to inner products with various functions: pointwise sampling, tomography, coded aperture, …


  14. New Challenges. Challenge #1: Insufficient Measurements. Complete measurements can be costly, time consuming and sometimes just impossible!

  15. New Challenges. Challenge #2: Too much data. E.g. the DARPA ARGUS-IS: a 1.8 Gpixel image sensor with 15 cm resolution at 12 frames per second, giving a video-rate output of 444 Gbits/s … but the comms link data rate is 274 Mbits/s. Currently visible spectrum; what about hyperspectral?…

  16. The new hope: Compressed Sensing. E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Information Theory, 2006. D. Donoho, "Compressed sensing," IEEE Trans. Information Theory, 2006. Why can't we just sample signals at the "information rate"? When compressing a signal we typically take lots of samples (sampling theorem), move to a transform domain, and then throw most of the coefficients away! Can we just sample what we need? Yes! … and, more surprisingly, we can do this non-adaptively.

  17. Potential applications. Compressed sensing provides a new way of thinking about signal acquisition. Application areas already include: • Medical imaging • Hyperspectral imaging • Astronomical imaging • Distributed sensing • Radar sensing • Geophysical (seismic) exploration • High-rate A/D conversion. [Figure: the Rice University single-pixel camera.]

  18. Compressed Sensing Overview. Observe $x \in \mathbb{R}^N$ via $m \ll N$ measurements $y \in \mathbb{R}^m$, where $y = \Phi x$. Compressed sensing assumes a compressible set of signals of interest, i.e. approximately k-sparse. Using approximately $m \geq O(k \log_2(N/k))$ random projections as measurements, we have little or no information loss. Signal reconstruction from the random projection (observation) is by a nonlinear mapping (nonlinear approximation). Many practical algorithms with guaranteed performance, e.g. $\ell_1$ minimization, OMP, CoSaMP, IHT.


  22. CS acquisition/reconstruction principle: (1) apply a sparsifying (wavelet) transform to the original "Tom" image; (2) observe data $y = \Phi x$; (3) compute a sparse approximation of the wavelet coefficients from the observed data (roughly equivalent to the wavelet image of the original); (4) invert the transform to obtain the reconstructed, sparse "Tom".

  23. Information preserving sensing matrices

  24. Information preservation. Underdetermined ($m < N$) linear systems ($\Phi \in \mathbb{R}^{m \times N}$, $x \in \mathbb{R}^{N}$, $y \in \mathbb{R}^{m}$) are not invertible: $\Phi x = \Phi x' \nRightarrow x = x'$. However, they may be invertible restricted to the sparse set $\Sigma_k := \{x : |\mathrm{supp}(x)| \le k\}$. Uniqueness on $\Sigma_k$ is equivalent to $\mathcal{N}(\Phi) \cap \Sigma_{2k} = \{0\}$, where $\mathcal{N}(\Phi) = \{z : \Phi z = 0\}$ is the null space of $\Phi$. We can then recover the original k-sparse vector using the following $\ell_0$ minimization scheme: $\hat{x} = \arg\min_x \|x\|_0$ subject to $\Phi x = y$.
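The $\ell_0$ scheme is combinatorial, but on a toy problem it can be run exactly by searching supports of increasing size. A sketch with dimensions small enough to enumerate (the matrix, seed, and planted support are illustrative choices):

```python
import numpy as np
from itertools import combinations

def l0_min(y, Phi, tol=1e-8):
    # Brute-force l0 minimization: try supports of increasing size and
    # return the sparsest x consistent with Phi x = y.
    m, N = Phi.shape
    for k in range(1, m + 1):
        for S in combinations(range(N), k):
            idx = list(S)
            coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
            if np.linalg.norm(Phi[:, idx] @ coef - y) < tol:
                x = np.zeros(N)
                x[idx] = coef
                return x
    return None

rng = np.random.default_rng(1)
m, N = 6, 12
Phi = rng.standard_normal((m, N))
x_true = np.zeros(N)
x_true[[2, 7]] = [1.5, -0.8]   # a 2-sparse vector: 2k = 4 <= m = 6
y = Phi @ x_true

x_hat = l0_min(y, Phi)
```

Here $2k \le m$ and a generic Gaussian $\Phi$ has no 2k-sparse null space vectors, so the sparsest consistent solution is the planted one.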

  25. Robust Null Space Properties. In order to achieve robustness we need to consider stronger NSPs. [Cohen et al. 2009] introduced the notion of instance optimality and showed that the following are equivalent, up to a change in constant:
1. There exists a reconstruction mapping $\Delta$ such that for all $x$: $\|\Delta(\Phi x) - x\|_1 \le C \, \sigma_k(x)_1$, where $\sigma_k(x)_1$ is the $\ell_1$ best k-term approximation error of $x$.
2. $\Phi$ satisfies the following NSP: $\|z_\Lambda\|_1 \le C' \, \sigma_{2k}(z)_1$ for all $z \in \mathcal{N}(\Phi)$ and all k-sparse supports $\Lambda$.
Informally, null space vectors must be relatively flat.

  26. Deterministic Sensing Matrices. Showing the NSP for a given $\Phi$ involves combinatorial computational complexity. The coherence of a matrix provides easily computable (but crude) guarantees: $\mu(\Phi) = \max_{1 \le i \ne j \le N} \frac{|\langle \Phi_i, \Phi_j \rangle|}{\|\Phi_i\|_2 \|\Phi_j\|_2}$. Using the coherence it is possible to show that $\Phi$ is invertible on the sparse set if $k < \frac{1}{2}\left(1 + \frac{1}{\mu(\Phi)}\right)$. However, this only guarantees that $k \sim O(\sqrt{m})$.
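Unlike the NSP, the coherence and the resulting sparsity guarantee take only a matrix product to compute. A sketch (the 64 x 256 Gaussian matrix is an arbitrary example):

```python
import numpy as np

def coherence(Phi):
    # mu(Phi) = max_{i != j} |<phi_i, phi_j>| / (||phi_i||_2 ||phi_j||_2)
    G = Phi / np.linalg.norm(Phi, axis=0)   # unit-norm columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)             # ignore the trivial i == j terms
    return gram.max()

def sparsity_guarantee(Phi):
    # Invertibility on the sparse set is guaranteed for k below this bound.
    return 0.5 * (1.0 + 1.0 / coherence(Phi))

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
mu = coherence(Phi)
bound = sparsity_guarantee(Phi)
```

For an m x N Gaussian matrix the coherence concentrates around $\sqrt{2 \log N / m}$, so the guaranteed sparsity level grows only like $O(\sqrt{m})$, which is exactly the weakness noted on the slide.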
