
Compressed Sensing under Optimal Quantization — Alon Kipnis (PowerPoint PPT Presentation)



  1. Compressed Sensing under Optimal Quantization. Alon Kipnis (Stanford), Galen Reeves (Duke), Yonina Eldar (Technion), Andrea Goldsmith (Stanford). ISIT, June 2017

  2. Table of Contents: Introduction, Remote Source Coding, Compressed Sensing, Results, Summary

  3. Remote source coding [Dobrushin & Tsybakov ’62]. Block diagram: X → Channel → Y → Enc → {1, ..., 2^{NR}} → Dec → X̂

  4. Remote source coding [Dobrushin & Tsybakov ’62]. Block diagram: X → Channel → Y → Enc → {1, ..., 2^{NR}} → Dec → X̂. Indirect distortion-rate function: D_{X|Y}(R) = min_{P(x̂|y)} E[ d(X, X̂) ]

  5. Remote source coding [Dobrushin & Tsybakov ’62]. Block diagram: X → Channel → Y → Enc → {1, ..., 2^{NR}} → Dec → X̂. Indirect distortion-rate function: D_{X|Y}(R) = min_{P(x̂|y)} E[ d(X, X̂) ]
  ◮ Estimation under communication constraints
  ◮ Learning from noisy data
  ◮ Close connection between inference and compression
  (A worked Gaussian specialization of D_{X|Y}(R) is sketched after this list.)
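
As a worked illustration that is not on the slides: assuming a unit-variance Gaussian source observed through an additive Gaussian noise channel under MSE distortion, the conditional mean E[X|Y] is itself Gaussian, and the indirect distortion-rate function decomposes into an irreducible estimation term plus a standard rate term:

    D_{X\mid Y}(R) \;=\; \mathrm{mmse}(X\mid Y) \;+\; \bigl(1-\mathrm{mmse}(X\mid Y)\bigr)\,2^{-2R},
    \qquad
    D_X(R) \;=\; 2^{-2R}.

This is the behavior shown on the next two slides: D_{X|Y}(R) saturates at mmse(X|Y) as R grows, while the direct distortion-rate function D_X(R) decays to zero.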

  6. Two coding schemes. Estimate-and-compress: X → Channel → Y → Est → X̂(Y) → Enc → {1, ..., 2^{NR}} → Dec → X̂

  7. Two coding schemes. Estimate-and-compress: X → Channel → Y → Est → X̂(Y) → Enc → {1, ..., 2^{NR}} → Dec → X̂. Compress-and-estimate [Kipnis, Rini, Goldsmith ’16]: X → Channel → Y → Enc → {1, ..., 2^{NR}} → Dec → Ŷ → Est → X̂

  8. Example: IID source, Gaussian noise, MSE distortion. [Figure: distortion D versus rate R, showing the direct distortion-rate function D_X(R) and the mmse(X|Y) floor.]

  9. Example: IID source, Gaussian noise, MSE distortion. [Figure: distortion D versus rate R, showing D_X(R), the indirect distortion-rate function D_{X|Y}(R), and the mmse(X|Y) floor. A numerical sketch that reproduces these curves follows below.]
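
A minimal numerical sketch that reproduces the curves in this figure, assuming a unit-variance Gaussian source and an observation SNR gamma = 4 (both values are illustrative choices, not taken from the slides):

    import numpy as np
    import matplotlib.pyplot as plt

    gamma = 4.0                      # assumed observation SNR: Y = sqrt(gamma) * X + N
    R = np.linspace(0.0, 2.0, 200)   # bitrate in bits per source symbol

    mmse = 1.0 / (1.0 + gamma)                       # mmse(X | Y) for unit-variance Gaussian X
    D_X  = 2.0 ** (-2.0 * R)                         # direct distortion-rate function D_X(R)
    D_XY = mmse + (1.0 - mmse) * 2.0 ** (-2.0 * R)   # indirect DRF D_{X|Y}(R), Gaussian case

    plt.plot(R, D_X, label="D_X(R)")
    plt.plot(R, D_XY, label="D_{X|Y}(R)")
    plt.axhline(mmse, linestyle="--", label="mmse(X|Y)")
    plt.xlabel("R"); plt.ylabel("D"); plt.legend(); plt.show()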

  10. Compressed sensing with quantization: Y = √snr · HX + W, H ∈ R^{M×N}

  11. Compressed sensing with quantization: Y = √snr · HX + W, H ∈ R^{M×N}. Block diagram: X → Linear Transform → AWGN → Y → Enc → {1, ..., 2^{NR}} → Dec → X̂

  12. Compressed sensing with quantization: Y = √snr · HX + W, H ∈ R^{M×N}. Block diagram: X → Linear Transform (H, an M × N matrix) → AWGN → Y → Enc → {1, ..., 2^{NR}} → Dec → X̂

  13. Compressed sensing with quantization: Y = √snr · HX + W, H ∈ R^{M×N}. Block diagram: X → Linear Transform (H, an M × N matrix) → AWGN → Y → Enc → {1, ..., 2^{NR}} → Dec → X̂. Goal is to understand the fundamental tradeoffs between
  ◮ Bitrate R
  ◮ MSE distortion D
  ◮ Sampling rate δ = M/N
  (A small simulation of this measurement model is sketched after this list.)
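
A small simulation of the measurement model above. The slides do not fix a prior on X or a normalization for H, so this sketch assumes an illustrative Bernoulli-Gaussian (sparse) prior and IID N(0, 1/M) matrix entries:

    import numpy as np

    rng = np.random.default_rng(0)
    N, delta, snr = 1000, 0.5, 10.0          # signal length, sampling rate M/N, SNR (illustrative)
    M = int(delta * N)

    # Bernoulli-Gaussian prior: each entry is nonzero with probability p_on, scaled to unit variance.
    p_on = 0.1
    X = rng.binomial(1, p_on, N) * rng.normal(0.0, 1.0 / np.sqrt(p_on), N)

    # IID Gaussian measurement matrix (one common normalization) and AWGN.
    H = rng.normal(0.0, 1.0 / np.sqrt(M), (M, N))
    W = rng.normal(0.0, 1.0, M)

    Y = np.sqrt(snr) * (H @ X) + W           # measurements: Y = sqrt(snr) * H X + W
    print(Y.shape)                           # (M,)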

  14. Related work on quantization
  ◮ Gaussian sources – Kipnis, Goldsmith, Eldar, Weissman ’16
  ◮ Scalar quantization – Goyal, Fletcher, Rangan ’08
  ◮ Lasso recovery – Sun, Goyal ’09
  ◮ Optimal high-bit asymptotics – Wu, Verdú ’12; Dai, Milenkovic ’11
  ◮ 1-bit quantization – Boufounos, Baraniuk ’08; Plan, Vershynin ’13
  ◮ Remote source coding with side information – Guler, MolavianJazi, Yener ’15
  ◮ Lower bound on optimal quantization – Leinonen, Codreanu, Juntti, Kramer ’16
  ◮ Sampling rate distortion – Boda, Narayan ’17
  ◮ Distributed coding of multispectral images – Goukhshtein, Boufounos, Koike-Akino, Draper ’17

  15. Fundamental limits of compressed sensing: Y = √snr · HX + W, H ∈ R^{M×N}, M, N → ∞
  ◮ Guo & Verdú 2005 analyze the large system limit with IID matrices using the heuristic replica method from statistical physics.

  16. Fundamental limits of compressed sensing: Y = √snr · HX + W, H ∈ R^{M×N}, M, N → ∞
  ◮ Guo & Verdú 2005 analyze the large system limit with IID matrices using the heuristic replica method from statistical physics.
  ◮ Rigorous results for special cases: Verdú & Shamai 1999, Tse & Hanly 1999, Montanari & Tse 2006, Korada & Macris 2010, Bayati & Montanari 2011, R. & Gastpar 2012, Wu & Verdú 2012, Krzakala et al. 2013, Donoho et al. 2013, Huleihel & Merhav 2016

  17. Fundamental limits of compressed sensing: Y = √snr · HX + W, H ∈ R^{M×N}, M, N → ∞
  ◮ Guo & Verdú 2005 analyze the large system limit with IID matrices using the heuristic replica method from statistical physics.
  ◮ Rigorous results for special cases: Verdú & Shamai 1999, Tse & Hanly 1999, Montanari & Tse 2006, Korada & Macris 2010, Bayati & Montanari 2011, R. & Gastpar 2012, Wu & Verdú 2012, Krzakala et al. 2013, Donoho et al. 2013, Huleihel & Merhav 2016
  ◮ R. & Pfister 2016 provide a rigorous derivation of the mutual information and MMSE limits for Gaussian matrices. The proof uses a conditional CLT (see tomorrow’s talk).

  18. Characterization of limits via decoupling principle. Compressed sensing model: Y = √snr · HX + W. Signal-plus-noise model: Ỹ = √(s*) · X + W̃
  ◮ The conditional distribution of X given (Y, H) is complicated!

  19. Characterization of limits via decoupling principle. Compressed sensing model: Y = √snr · HX + W. Signal-plus-noise model: Ỹ = √(s*) · X + W̃
  ◮ The conditional distribution of X given (Y, H) is complicated!
  ◮ The conditional distribution of small subsets of X given (Y, H) is characterized by the signal-plus-noise model,

  20. Characterization of limits via decoupling principle. Compressed sensing model: Y = √snr · HX + W. Signal-plus-noise model: Ỹ = √(s*) · X + W̃
  ◮ The conditional distribution of X given (Y, H) is complicated!
  ◮ The conditional distribution of small subsets of X given (Y, H) is characterized by the signal-plus-noise model, i.e. there exists a coupling of (Y, H, Ỹ) such that P_{X_S | Y, H}(· | Y, H) ≈ ∏_{i ∈ S} P_{X_i | Ỹ_i}(· | Ỹ_i)

  21. Characterization of limits via decoupling principle. Compressed sensing model: Y = √snr · HX + W. Signal-plus-noise model: Ỹ = √(s*) · X + W̃
  ◮ The conditional distribution of X given (Y, H) is complicated!
  ◮ The conditional distribution of small subsets of X given (Y, H) is characterized by the signal-plus-noise model, i.e. there exists a coupling of (Y, H, Ỹ) such that P_{X_S | Y, H}(· | Y, H) ≈ ∏_{i ∈ S} P_{X_i | Ỹ_i}(· | Ỹ_i)
  ◮ The effective SNR is given by s* = arg min_s [ I(X; √s·X + W) + (δ/2)·( log(δ·snr/s) + s/(δ·snr) − 1 ) ] (a numerical sketch of this minimization follows below)
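
A minimal numerical sketch of computing s* by grid search over the potential written above, specialized to a standard Gaussian prior so that I(X; √s·X + W) = (1/2)·log(1 + s) nats; the Gaussian prior and the parameter values are illustrative assumptions, not taken from the slides:

    import numpy as np

    delta, snr = 0.5, 10.0                 # sampling rate and SNR (illustrative values)

    def potential(s):
        # Replica-symmetric potential from the slide, with the Gaussian-prior
        # mutual information I(X; sqrt(s) X + W) = 0.5 * log(1 + s).
        mutual_info = 0.5 * np.log(1.0 + s)
        penalty = 0.5 * delta * (np.log(delta * snr / s) + s / (delta * snr) - 1.0)
        return mutual_info + penalty

    s_grid = np.logspace(-3, 3, 20000)     # candidate effective-SNR values
    s_star = s_grid[np.argmin(potential(s_grid))]
    print("effective SNR s* ≈", float(s_star))

For a Gaussian prior the stationarity condition of this potential reduces to the fixed point s = δ·snr / (1 + snr·mmse(s)) with mmse(s) = 1/(1+s), which the grid search recovers numerically (s* ≈ 0.74 for these illustrative values).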

  22. Table of Contents: Introduction, Remote Source Coding, Compressed Sensing, Results, Summary

  23. Estimate and compress + decoupling. Block diagram: X → Linear Transform (H) → AWGN → Y → Est → E[X | Y, H] → Enc (rate R) → Dec → X̂

  24. Estimate and compress + decoupling. Block diagram: X → Linear Transform (H) → AWGN → Y → Est → E[X | Y, H] → Enc (rate R) → Dec → X̂
  ◮ The idea is to compress the conditional expectation using the marginal approximation given by the signal-plus-noise model.

  25. Estimate and compress + decoupling. Block diagram: X → Linear Transform (H) → AWGN → Y → Est → E[X | Y, H] → Enc (rate R) → Dec → X̂
  ◮ The idea is to compress the conditional expectation using the marginal approximation given by the signal-plus-noise model.
  ◮ Encoding and decoding do not depend on the matrix.

  26. Results.
  Theorem (Achievability via estimate-and-compress). For every ε > 0, there exists N large enough and a rate-R quantization scheme such that (1/N)·E[ ‖X − X̂‖² ] ≤ D_{X | √(s*)·X + W}(R) + ε, where s* is defined by (P_X, δ, snr).

  27. Results.
  Theorem (Achievability via estimate-and-compress). For every ε > 0, there exists N large enough and a rate-R quantization scheme such that (1/N)·E[ ‖X − X̂‖² ] ≤ D_{X | √(s*)·X + W}(R) + ε, where s* is defined by (P_X, δ, snr).
  Theorem (Converse for bounded subsets). For every ε > 0 and fixed subset S, there exists N₀ large enough such that for any N ≥ N₀ and any quantization scheme using |S|·R bits, (1/|S|)·E[ ‖X_S − X̂_S‖² ] ≥ D_{X | √(s*)·X + W}(R) − ε, where s* is defined by (P_X, δ, snr).
  (A Gaussian specialization of the single-letter bound appears below.)
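
To make the single-letter bound concrete, here is a specialization that is not on the slides: assuming a unit-variance Gaussian prior, the effective observation √(s*)·X + W gives mmse(X | √(s*)·X + W) = 1/(1+s*), the conditional mean is Gaussian with variance s*/(1+s*), and the indirect distortion-rate function appearing in both theorems evaluates to

    D_{X \mid \sqrt{s^*}X + W}(R) \;=\; \frac{1}{1+s^*} \;+\; \frac{s^*}{1+s^*}\,2^{-2R}.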

  28. Bounds described by single-letter DRF. [Figure: two panels, high sampling rate and low sampling rate, plotting distortion D versus rate R; each panel shows the estimate-and-compress bound D_EC, the direct distortion-rate function D_X(R), and the mmse floor.]

  29. Are we done?

  30. Compress and estimate + decoupling. Block diagram: X → Linear Transform (H) → AWGN → Y → Enc (rate R) → Dec → Ŷ → Est → X̂

  31. Compress and estimate + decoupling. Block diagram: X → Linear Transform (H) → AWGN → Y → Enc (rate R) → Dec → Ŷ → Est → X̂
  ◮ First compress the measurements using Gaussian quantization.
  ◮ Then estimate the signal from the reconstructed measurements, treating the quantization error as additional noise.

  32. Compress and estimate + decoupling. Block diagram: X → Linear Transform (H) → AWGN → Y → Enc (rate R) → Dec → Ŷ → Est → X̂
  ◮ First compress the measurements using Gaussian quantization.
  ◮ Then estimate the signal from the reconstructed measurements, treating the quantization error as additional noise.
  ◮ Encoding and decoding do not depend on the matrix.

  33. Result.
  Theorem (Achievability via compress-and-estimate). For every ε > 0, there exists N large enough and a rate-R quantization scheme such that (1/N)·E[ ‖X − X̂‖² ] ≤ mmse( X | √(s′)·X + W ) + ε, where s′ is defined by (P_X, δ, snr′) with snr′ = snr·(1 − 2^{−2R/δ}) / (1 + snr·2^{−2R/δ}).
  (A numerical evaluation of this bound for a Gaussian prior is sketched below.)
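
A numerical sketch of evaluating this bound. The formula for snr′ is taken from the slide; everything else (the standard Gaussian prior, the grid-search computation of s′ from the same replica-symmetric potential as before, and the parameter values) is an illustrative assumption:

    import numpy as np

    delta, snr, R = 0.5, 10.0, 1.0      # sampling rate, SNR, bitrate (illustrative values)

    # Effective SNR of the quantized measurements, as given on the slide.
    q = 2.0 ** (-2.0 * R / delta)
    snr_prime = snr * (1.0 - q) / (1.0 + snr * q)

    # For a standard Gaussian prior, find s' by minimizing the replica-symmetric
    # potential with snr replaced by snr' (same construction as for s*).
    def potential(s, snr_eff):
        return 0.5 * np.log(1.0 + s) + 0.5 * delta * (
            np.log(delta * snr_eff / s) + s / (delta * snr_eff) - 1.0)

    s_grid = np.logspace(-3, 3, 20000)
    s_prime = s_grid[np.argmin(potential(s_grid, snr_prime))]

    D_CE = 1.0 / (1.0 + s_prime)        # mmse(X | sqrt(s') X + W) for a Gaussian prior
    print("snr' ≈", snr_prime, " s' ≈", float(s_prime), " D_CE ≈", D_CE)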

  34. Comparison of achievability results. [Figure: two panels, high sampling rate and low sampling rate, plotting distortion D versus rate R; each panel compares the estimate-and-compress bound D_EC and the compress-and-estimate bound D_CE against D_X(R) and the mmse floor.]

  35. Comparison of achievability results. [Figure: two panels, high sampling rate and low sampling rate, plotting distortion D versus rate R; each panel compares the estimate-and-compress bound D_EC and the compress-and-estimate bound D_CE against D_X(R) and the mmse floor.] Neither scheme is optimal in general!

  36. Two different quantization schemes.
  Estimate-and-compress (EC): X → Linear Transform (H) → AWGN → Y → Est → E[X | Y, H] → Enc (rate R) → Dec → X̂
  Compress-and-estimate (CE): X → Linear Transform (H) → AWGN → Y → Enc (rate R) → Dec → Ŷ → Est → X̂
