Sparsity and image processing
Aurélie Boisbunon, INRIA-SAM, AYIN
March 26, 2014
Why sparsity?

Main advantages
◮ Dimensionality reduction
◮ Fast computation
◮ Better interpretability

Applications in image processing
◮ pattern recognition
◮ denoising / deblurring
◮ compression
◮ super-resolution
◮ source separation
Context and objectives

Linear regression [Hastie et al., 2008]

    x = Dα + ε

◮ x: (vectorized) image
◮ D: dictionary
◮ ε: noise

Assumption
α is a sparse vector/matrix [Donoho et al., 1995]

Dictionary D = {φ_j}, j = 1, …, J
◮ Fixed: Fourier basis, wavelets
◮ Learned from the data
Sparse optimization problem

    min_α ‖x − Dα‖₂² + pen(α)

First term: goodness of fit / distortion rate.

Goodness of fit
Measures how close two images are.

[Figure: bar plot of goodness-of-fit values (×10⁵) between an original image and its salt & pepper, Gaussian-noise, and negative corruptions.]
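As a concrete illustration of the data-fit term, a minimal NumPy sketch (not from the slides; the image and corruptions are stand-ins) computing ‖x − x̂‖₂² between an original image and two degraded versions:

    import numpy as np

    def goodness_of_fit(x, x_hat):
        """Squared l2 distance ||x - x_hat||_2^2 between two (vectorized) images."""
        return np.sum((np.asarray(x, float) - np.asarray(x_hat, float)) ** 2)

    rng = np.random.default_rng(0)
    x = rng.random((64, 64))                          # stand-in "original" image
    noisy = x + 0.1 * rng.standard_normal(x.shape)    # Gaussian corruption
    negative = 1.0 - x                                # negative image

    print(goodness_of_fit(x, noisy))     # small: the noisy image stays close to x
    print(goodness_of_fit(x, negative))  # large: the negative is far from x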
Sparse optimization problem

    min_α ‖x − Dα‖₂² + pen(α)

Second term: penalty / regularization.

Penalty
Special case: non-differentiable at zero¹ ⇒ sparse solution α̂

[Figure: contour plots in the (β₁, β₂) plane of four sparsity-inducing penalties: ℓ1/Lasso, reweighted-ℓ1, ℓ0, and MCP.]

MCP = Minimax Concave Penalty [Zhang, 2010]
¹ with 0 belonging to the subgradient of pen
Sparse optimization problem

    min_α ‖x − Dα‖₂² + pen(α)

Second term: penalty / regularization.

Penalty
Special case: non-differentiable at zero² ⇒ sparse solution α̂

[Figure: thresholding rules β̂_j versus the least-squares coefficient β_j^LS for each penalty: ℓ0/hard threshold, ℓ1/soft threshold, reweighted-ℓ1 (adaptive lasso), and MCP.]

MCP = Minimax Concave Penalty [Zhang, 2010]
² with 0 belonging to the subgradient of pen
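When D is orthonormal, each penalty reduces to the scalar thresholding rule plotted above, applied to the least-squares coefficients. A minimal NumPy sketch of those rules; the hard and soft rules are standard, the MCP rule follows [Zhang, 2010], and the parameter names are mine:

    import numpy as np

    def hard_threshold(z, lam):
        """l0 penalty: keep coefficients whose magnitude exceeds lam."""
        return np.where(np.abs(z) > lam, z, 0.0)

    def soft_threshold(z, lam):
        """l1/Lasso penalty: shrink all coefficients toward zero by lam."""
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def mcp_threshold(z, lam, gamma=3.0):
        """MCP: soft-threshold near zero, identity (unbiased) beyond gamma*lam."""
        shrunk = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0) / (1.0 - 1.0 / gamma)
        return np.where(np.abs(z) > gamma * lam, z, shrunk)

    z = np.linspace(-10, 10, 9)
    print(hard_threshold(z, 2.0))
    print(soft_threshold(z, 2.0))
    print(mcp_threshold(z, 2.0))

Note how MCP interpolates between the two convex rules: it shrinks small coefficients like the soft threshold but leaves large ones untouched like the hard threshold, which is what removes the Lasso's bias on strong coefficients.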
Matching/Basis pursuit

Algorithm
Start: α = 0, J = ∅
Repeat
1. Find the atom φ_j most correlated with the residual:

       j = arg max_j |φ_jᵀ (x − D_J α_J)|

   where D_J and α_J are restricted to the active set J
2. Add it to the active set: J ← J ∪ {j}
3. Update the coefficients α_J
until the stopping rule is met.
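A minimal NumPy sketch of this greedy loop, assuming unit-norm atoms and using a least-squares refit of the active coefficients in step 3 (the orthogonal matching pursuit variant); the names and the stopping rule (a fixed number of atoms) are illustrative:

    import numpy as np

    def matching_pursuit(x, D, n_atoms):
        """Greedy sparse coding: x ~ D @ alpha with at most n_atoms nonzeros."""
        alpha = np.zeros(D.shape[1])
        active = []                                # the active set J
        residual = x.copy()
        for _ in range(n_atoms):
            j = np.argmax(np.abs(D.T @ residual))  # 1. atom most correlated with residual
            if j not in active:
                active.append(j)                   # 2. grow the active set
            # 3. refit the active coefficients by least squares
            coef, *_ = np.linalg.lstsq(D[:, active], x, rcond=None)
            alpha[:] = 0.0
            alpha[active] = coef
            residual = x - D @ alpha
        return alpha

    rng = np.random.default_rng(0)
    D = rng.standard_normal((50, 200))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    true = np.zeros(200); true[[3, 40, 111]] = [2.0, -1.5, 1.0]
    x = D @ true + 0.01 * rng.standard_normal(50)
    print(np.nonzero(matching_pursuit(x, D, 3))[0])   # should recover {3, 40, 111}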
Matching/Basis pursuit

[Figure: coefficient paths versus algorithm step for the four variants: ℓ0/matching pursuit (β̂_LS), ℓ1/basis pursuit (β̂_LAR), reweighted-ℓ1 (β̂_Ada), and MCP (β̂_MCP).]
Applications

Compression³

[Figure: original image compared with its ℓ1 and reweighted-ℓ1 reconstructions.]

³ [Candes et al., 2008]
Applications

Denoising/Deblurring⁴

[Figure: original image, its noisy version, and the ℓ1 restoration obtained with FISTA.]

⁴ [Beck and Teboulle, 2009]
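For reference, a minimal sketch of the ℓ1 solver mentioned above: FISTA [Beck and Teboulle, 2009] alternates a gradient step on the data-fit term, a soft-thresholding (proximal) step, and a momentum step. Assumptions of this sketch: D is an explicit matrix (for deblurring it would be a blur operator) and the step size 1/L comes from the spectral norm of D.

    import numpy as np

    def soft_threshold(z, lam):
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def fista(x, D, lam, n_iter=200):
        """Minimize ||x - D @ a||_2^2 + lam * ||a||_1 by FISTA."""
        L = 2.0 * np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        y, t = a.copy(), 1.0
        for _ in range(n_iter):
            grad = 2.0 * D.T @ (D @ y - x)        # gradient of the data-fit term at y
            a_next = soft_threshold(y - grad / L, lam / L)    # shrinkage step
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = a_next + ((t - 1.0) / t_next) * (a_next - a)  # momentum step
            a, t = a_next, t_next
        return a

When D is orthonormal (e.g. a wavelet basis), no iteration is needed: a single soft-thresholding of Dᵀx already solves the problem exactly, which is the classical wavelet-shrinkage denoiser.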
Dictionary learning

Optimization problem

    min_{α,D} ‖x − Dα‖₂² + pen(α)

Algorithm [Bach et al., 2011]
Start: α = 0, D = D⁰
1. Extract patches from the image
2. Repeat
   ◮ solve the optimization problem for α with D fixed
   ◮ solve the optimization problem for D with α fixed
   until the stopping rule is met (a sketch follows).
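A minimal sketch of this alternating scheme, assuming X stacks vectorized patches as columns; the sparse-coding step uses a few ISTA iterations with the ℓ1 penalty and the dictionary update a least-squares fit with atom renormalization. This is an illustration, not the exact procedure of [Bach et al., 2011]:

    import numpy as np

    def soft_threshold(z, lam):
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def dictionary_learning(X, n_atoms, lam, n_outer=20, n_inner=50):
        """Alternate sparse coding (ISTA) and dictionary update (least squares).

        X: (dim, n_patches) matrix of vectorized image patches.
        Returns (D, A) with X ~ D @ A and A sparse.
        """
        rng = np.random.default_rng(0)
        D = rng.standard_normal((X.shape[0], n_atoms))
        D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
        A = np.zeros((n_atoms, X.shape[1]))
        for _ in range(n_outer):
            # sparse coding: min_A ||X - D A||^2 + lam ||A||_1, with D fixed
            L = 2.0 * np.linalg.norm(D, 2) ** 2
            for _ in range(n_inner):
                A = soft_threshold(A - 2.0 * D.T @ (D @ A - X) / L, lam / L)
            # dictionary update: min_D ||X - D A||^2, with A fixed
            D = np.linalg.lstsq(A.T, X.T, rcond=None)[0].T
            norms = np.maximum(np.linalg.norm(D, axis=0), 1e-12)
            D /= norms                              # renormalize atoms...
            A *= norms[:, None]                     # ...keeping D @ A unchanged
        return D, A

The renormalization at the end of each pass is the usual way to remove the scale ambiguity between D and α: without it, the penalty on α could be made arbitrarily small by inflating the atoms.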
Dictionary learning

Applications
◮ Inpainting⁵
◮ Texture recognition⁶

⁵ [Mairal et al., 2009]
⁶ [Mairal et al., 2008]
Thank you!
References

Bach, F., Jenatton, R., Mairal, J., and Obozinski, G. (2011). Convex optimization with sparsity-inducing norms. In Optimization for Machine Learning, pages 19–54. MIT Press.

Beck, A. and Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202.

Candes, E. J., Wakin, M. B., and Boyd, S. P. (2008). Enhancing sparsity by reweighted ℓ1 minimization. Journal of Fourier Analysis and Applications, 14(5-6):877–905.

Donoho, D. L., Buckheit, J. B., Chen, S., Johnstone, I., and Scargle, J. D. (1995). About WaveLab. Technical report, Stanford University.

Hastie, T., Tibshirani, R., and Friedman, J. (2008). The Elements of Statistical Learning: Data Mining, Inference and Prediction (2nd edition). Springer Series in Statistics.

Mairal, J., Bach, F., Ponce, J., and Sapiro, G. (2009). Online dictionary learning for sparse coding. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 689–696. ACM.

Mairal, J., Ponce, J., Sapiro, G., Zisserman, A., and Bach, F. R. (2008). Supervised dictionary learning. In Advances in Neural Information Processing Systems, pages 1033–1040.

Zhang, C. (2010). Nearly unbiased variable selection under minimax concave penalty. Annals of Statistics, 38(2):894–942.