Compressed Sensing and Bayesian Experimental Design


  1. Compressed Sensing and Bayesian Experimental Design, or Optimal Sensing and Reconstruction of N-Dimensional Signals, by Matthias Seeger & Hannes Nickisch. Presenter: Pete Trautman.

  2. Outline • Intro to compressive sensing • Paper presentation

  3. Sensing by sampling: a signal $f(x)$ is approximated by $f_N(x)$. Pixel basis: $f(x) \approx f_N(x) = \sum_{i=1}^{N} f(x_i)\,\delta(x - x_i)$. Wavelet basis: $f_N \rightarrow \hat{f} = \sum_{i=1}^{K} \langle f_N, \psi_i \rangle\, \psi_i = \sum_{i=1}^{K} c_i \psi_i$.

  4. Introduction to Compressive Sensing. Image reconstruction example [figure: original image $f_N(x)$; its wavelet coefficients $c_i$; the reconstruction after thresholding all but the 25,000 largest coefficients]. This raises the question: can we measure the "compressive" measurement set directly? A: yes. (A small sketch of coefficient thresholding follows below.)
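To make the thresholding idea concrete, here is a minimal Python/NumPy sketch of best-K-term approximation in an orthonormal basis. The hand-built 1-D Haar matrix, the test signal, and the sizes are illustrative assumptions, not the slide's setup (the image example above uses a 2-D wavelet transform).

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix for n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # averages (coarse scale)
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # differences (details)
    m = np.vstack([top, bot])
    return m / np.linalg.norm(m, axis=1, keepdims=True)

n, K = 256, 20
rng = np.random.default_rng(0)
H = haar_matrix(n)                       # rows are the basis functions psi_i
f = np.cumsum(rng.standard_normal(n))    # piecewise-smooth-ish test signal
c = H @ f                                # coefficients c_i = <f, psi_i>
idx = np.argsort(np.abs(c))[:-K]         # indices of all but the K largest
c_thresh = c.copy()
c_thresh[idx] = 0.0                      # threshold all but K largest coefficients
f_K = H.T @ c_thresh                     # best-K-term approximation
print("relative error:", np.linalg.norm(f - f_K) / np.linalg.norm(f))
```

For compressible signals most of the energy sits in a few coefficients, so the relative error stays small even with K much less than n.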

  5. Introduction to Compressive Sensing • Traditional (Nyquist) sampling is highly pessimistic • it does not exploit any structure in the signal • Compressive sensing is optimistic • it leverages compressibility • so only $K \ll N$ measurements are needed to reconstruct an $N$-dim signal • Intuition: CS encodes sparsity as information, which allows a tradeoff between sparsity and the number of measurements

  6. Compressive Sensing [figure: signal $f(x)$ and its reconstruction $f_N(x)$]

  7. Compressive Sensing [figure: signal $f(x)$ and its reconstruction $f_N(x)$]

  8. Sequential CS algorithm (segue). Given a seed measurement matrix $X$, so that $y = Xf$: 1. Choose a new row $x_\star$ randomly. 2. Form $X' = [X\; x_\star]^T$. 3. Measure $y' = X'f$. 4. Reconstruct $\hat{c} = \arg\min_c \{\, \|c\|_{\ell_1} \mid y' = X' \Psi^T c \,\}$, where $f_N = \sum_{i=1}^{N} \hat{c}_i \psi_i$. 5. Repeat, starting with $X'$. Goal of "CS and Bayesian Experimental Design": improve sequential CS by • optimizing step (1) above for general distributions • optimizing step (4) above for natural images. (A sketch of the reconstruction in step 4 appears below.)
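Step (4) is a basis pursuit problem, which can be posed as a linear program. Below is a minimal Python/SciPy sketch, assuming for simplicity that the signal is sparse in the canonical basis ($\Psi = I$) and that measurements are noiseless; the random Gaussian $X$ and the problem sizes are my own illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 64, 24, 4                      # signal dim, measurements, sparsity

f = np.zeros(n)                          # k-sparse ground-truth signal
f[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

X = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = X @ f                                # noiseless measurements y = X f

# Basis pursuit: min ||c||_1  s.t.  X c = y   (here Psi = I, so c = f).
# LP over z = [c, t]: minimize sum(t) with -t <= c <= t and X c = y.
cost = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],      #  c - t <= 0
                 [-np.eye(n), -np.eye(n)]])    # -c - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([X, np.zeros((m, n))])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
c_hat = res.x[:n]
print("recovery error:", np.linalg.norm(c_hat - f))
```

With enough measurements relative to the sparsity level, the LP recovers the sparse signal exactly; in the sequential algorithm, this solve is repeated each time a row is appended to $X$.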

  9. CS and BED: how to optimize. How to make these optimizations: • let $f$ be the signal of interest, $f_N$ the reconstruction • let $y$ be the measurements, $X$ the measurement matrix • We seek $p(f_N \mid y) \propto p(y \mid f_N)\, p(f_N) \approx N(y = Xf \mid X f_N, \sigma^2 I)\, p(f_N)$ • $p(f_N)$ encodes structural information about the signal (sparsity, smoothness, etc.), generalizing the $\ell_1$ minimization of CS • $N(y = Xf \mid X f_N, \sigma^2 I)$ is the likelihood, generalizing the $y = X f_N$ constraint
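As a concrete link between the probabilistic view and $\ell_1$ minimization: with the Gaussian likelihood above and a Laplace (double-exponential) prior, the MAP estimate solves an $\ell_1$-penalized least-squares problem. The sketch below solves it with ISTA (proximal gradient), which is my own illustrative choice, not the paper's method (the paper approximates the full posterior with Expectation Propagation); the sizes, $\sigma$, and $\tau$ are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 24
f_true = np.zeros(n)
f_true[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)
X = rng.standard_normal((m, n)) / np.sqrt(m)
sigma, tau = 0.1, 1.0
y = X @ f_true + sigma * rng.standard_normal(m)

# MAP under p(f) ~ exp(-tau ||f||_1) and N(y | X f, sigma^2 I):
#   minimize  (1 / (2 sigma^2)) ||y - X f||^2 + tau ||f||_1
# via ISTA: gradient step on the quadratic, then soft-thresholding.
L = np.linalg.norm(X, 2) ** 2 / sigma ** 2   # Lipschitz constant of the gradient
f = np.zeros(n)
for _ in range(2000):
    grad = X.T @ (X @ f - y) / sigma ** 2
    z = f - grad / L
    f = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)  # prox of (tau/L)||.||_1

print("MAP recovery error:", np.linalg.norm(f - f_true))
```

This shows how the prior generalizes the constraint: tightening $\sigma$ recovers the hard $y = X f_N$ constraint, while $\tau$ controls how strongly sparsity is favored.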

  10. CS and BED: how to choose the next measurement. How do we choose the next measurement $y_* = x_*^T f$? Maximize the entropy decrease (information gain): $\max_{y_*}\; H[p(f_N \mid y)] - H[p(f_N \mid y, y_*)]$. However, $p(f_N \mid y)$ is intractable; it is approximated with Expectation Propagation, $Q(f_N) \approx p(f_N \mid y)$. EP provides the following expression for the entropy difference: $H[Q(X)] - H[Q([X\; x_*]^T)] = \frac{1}{2} \log\!\left(1 + \sigma^{-2}\, x_*^T \mathrm{Cov}_{Q(X)}(f)\, x_*\right)$. We thus choose $x_*$ along the principal eigendirection of $\mathrm{Cov}_{Q(X)}(f)$.
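A minimal NumPy sketch of this selection rule, assuming the posterior covariance $\mathrm{Cov}_{Q(X)}(f)$ is already in hand (here a synthetic positive-definite matrix stands in for the EP output, since running EP is beyond this sketch): over unit-norm candidate rows, the information gain is maximized by the principal eigenvector.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 32, 0.1

# Synthetic positive-definite matrix standing in for Cov_{Q(X)}(f) from EP.
A = rng.standard_normal((n, n))
C = A @ A.T / n + 0.01 * np.eye(n)

def info_gain(x, C, sigma):
    """Entropy decrease 0.5 * log(1 + sigma^-2 x^T C x) for candidate row x."""
    return 0.5 * np.log1p(x @ C @ x / sigma ** 2)

# Principal eigendirection of C maximizes the gain over unit-norm x.
eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order
x_star = eigvecs[:, -1]                  # eigenvector of the largest eigenvalue

# Compare against random unit-norm candidates.
cands = rng.standard_normal((100, n))
cands /= np.linalg.norm(cands, axis=1, keepdims=True)
best_random = max(info_gain(x, C, sigma) for x in cands)
print("gain at eigendirection:", info_gain(x_star, C, sigma))
print("best random candidate :", best_random)
```

The eigendirection probes where the posterior is most uncertain, which is exactly where a new measurement reduces entropy the most.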

  11. CS and BED: how to encode constraints. For images, we have two types of constraints on $p(f_N)$: • Sparsity (wavelet): $B^{(sp)} \in \mathbb{R}^{n \times n}$ is a wavelet transform • Spatial smoothness: $B^{(tv)} \in \mathbb{R}^{2(n - \sqrt{n}) \times n}$ is an image gradient transform. We turn these constraints into a distribution using exponentials: $p(f_N) \propto \exp(-\tau_{sp} \|B^{(sp)} f_N\|_{\ell_1}) \cdot \exp(-\tau_{tv} \|B^{(tv)} f_N\|_{\ell_1}) = \prod_{i=1}^{q_1} \exp(-\tau_{sp} |(B^{(sp)} f_N)_i|)\, \prod_{j=1}^{q_2} \exp(-\tau_{tv} |(B^{(tv)} f_N)_j|)$. The exponentials favor coefficients near zero, thus enforcing sparsity in both domains.
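A sketch of the smoothness transform, assuming a square $\sqrt{n} \times \sqrt{n}$ image: stacking horizontal and vertical first differences yields exactly $2(n - \sqrt{n})$ rows, matching the dimension on the slide. The boundary handling and row ordering here are my assumptions, not taken from the paper.

```python
import numpy as np

def gradient_transform(side):
    """B^(tv) for a side x side image: horizontal and vertical differences."""
    n = side * side
    rows = []
    for r in range(side):
        for c in range(side):
            if c + 1 < side:                        # horizontal neighbor
                row = np.zeros(n)
                row[r * side + c] = -1.0
                row[r * side + c + 1] = 1.0
                rows.append(row)
            if r + 1 < side:                        # vertical neighbor
                row = np.zeros(n)
                row[r * side + c] = -1.0
                row[(r + 1) * side + c] = 1.0
                rows.append(row)
    return np.array(rows)

side = 8                                            # image is side x side, n = 64
B_tv = gradient_transform(side)
n = side * side
print(B_tv.shape, "expected:", (2 * (n - side), n)) # 2(n - sqrt(n)) x n
```

Applying the $\ell_1$ norm to $B^{(tv)} f_N$ penalizes large jumps between neighboring pixels, which is the total-variation notion of spatial smoothness.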

  12. CS and BED: synthetic experimental results [figures: one panel per type of signal, named in each panel title; one panel shows the sparse-signal case labeled "What CS is made for"]

  13. CS and BED: image experimental results

  14. CS and BED: discussion • Sequential design outperforms CS protocols • However, the CS measurement matrix is known in advance, so CS is much faster • BED encompasses CS • Much can be gained from the BED framework • it enables encoding of many types of structural information • it optimizes information capture
