

1. Greedy Algorithms for Joint Sparse Recovery
Jeff Blanchard, with Mike Davies, Michael Cermak, David Hanle, and Yirong Jing
Grinnell College, Iowa, USA; University of Edinburgh, UK
Funding provided by NSF OISE 0854991 and NSF DMS 1112612.
CIPMA 2013, Mar del Plata, August 15, 2013

2. The CS Problem: single measurement vector
Measure and recover a k-sparse vector with an m × n matrix. The problem is characterized by three parameters, k < m < n:
n, the signal length;
m, the number of inner product measurements;
k, the sparsity of the signal.
The measurement matrix A is of size m × n.
The target vector x ∈ R^n is k-sparse: ‖x‖₀ = k.
The measurements y ∈ R^m satisfy y = Ax. (Highly underdetermined.)

3–5. The CS Problem: single measurement vector (illustrated)
y = Ax
[Figure: the m × n matrix A maps the k-sparse vector x ∈ R^n to the measurement vector y ∈ R^m.]
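The SMV setup above is easy to sketch numerically. A minimal numpy illustration (the dimensions and the random seed are arbitrary choices for the sketch, not values from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8          # signal length, measurements, sparsity (k < m < n)

# Gaussian measurement matrix with entries iid N(0, 1/m), as on the slides.
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

# A k-sparse target vector x: k nonzero entries on a random support.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

y = A @ x                     # m measurements of an n-dimensional signal
assert np.count_nonzero(x) == k and y.shape == (m,)
```

With m far smaller than n, the linear system y = Ax is highly underdetermined; sparsity is what makes recovery possible.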

6–7. The CS Problem: multiple measurement vectors
Measure and recover r jointly k-sparse vectors with a single m × n measurement matrix:
A single measurement matrix A of size m × n.
The set of r target vectors {x₁, …, x_r} ⊂ R^n, which are jointly k-sparse.
The measurements {y₁, …, y_r} ⊂ R^m, where y_i = A x_i. (Still highly underdetermined.)
y₁ = A x₁, …, y_r = A x_r
[Figure: each y_i is the same matrix A applied to x_i; the vectors x_i share a common support.]

8. The CS Problem: multiple measurement vectors (matrix form)
Measure and recover an n × r, k-row-sparse matrix with an m × n measurement matrix:
The measurement matrix A is of size m × n.
The matrix of target vectors X = [x₁ | ⋯ | x_r] ∈ R^{n×r} is k-row-sparse.
The measurements Y = [y₁ | ⋯ | y_r] ∈ R^{m×r}, where Y = AX. (Still highly underdetermined.)
[Figure: Y = AX; the nonzero rows of X are aligned across all r columns.]
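The matrix form of the MMV problem can be sketched the same way; joint sparsity means all r columns of X share one support of k rows (again an illustrative instance, with arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k, r = 256, 64, 8, 4

A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

# k-row-sparse X: all r columns share the same support of k rows.
X = np.zeros((n, r))
support = rng.choice(n, size=k, replace=False)
X[support, :] = rng.normal(size=(k, r))

Y = A @ X                     # one matrix of measurements, Y = AX
assert Y.shape == (m, r)
assert np.count_nonzero(np.any(X != 0, axis=1)) == k
```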

9. The MMV Problem: an incomplete history
A highly unfair, incomplete (compressive) sampling of results:
Tropp, Gilbert, Strauss: Simultaneous Orthogonal Matching Pursuit and ℓ₁-minimization, 2006.
Foucart: Hard Thresholding Pursuit for MMV problems, 2011.
Davies, Eldar: Rank Aware Algorithms, 2012.
Many others, primarily focused on relaxations, rank-blind variants of OMP, and mixed matrix-norm techniques.

10. The MMV Problem: this presentation
A rank-aware recovery guarantee.
Extension of SMV greedy algorithms to the MMV problem.
Empirical performance comparison.
A totally unrelated plug for something else.

11. Simultaneous OMP: SOMP [Tropp, Gilbert, Strauss]
Initialization: X⁰ = 0, T⁰ = ∅, R⁰ = Y.
for j = 1; j = j + 1; do
    1. Max correlation: i_j = argmax_i ‖A_i^* R^{j−1}‖₂
    2. New support: T^j = T^{j−1} ∪ {i_j}
    3. Update approximation: X^j = A_{T^j}^† Y
    4. Update residual: R^j = Y − A X^j
Output: X̂ = X^{j⋆}, where j⋆ is the final completed iteration.
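The four steps above translate almost line-for-line into numpy. The following `somp` function is an illustrative sketch of the slide's pseudocode, not the authors' code (the function name and the stopping rule of exactly k iterations are our choices):

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP sketch: greedy column selection by the l2-norm of the
    correlations A_i^* R, followed by a least-squares fit on the support."""
    n = A.shape[1]
    T = []                                   # support T^0 = empty
    R = Y.copy()                             # residual R^0 = Y
    X = np.zeros((n, Y.shape[1]))
    for _ in range(k):
        corr = np.linalg.norm(A.T @ R, axis=1)   # ||A_i^* R^{j-1}||_2 per column i
        corr[T] = -np.inf                        # never reselect an index
        T.append(int(np.argmax(corr)))           # 1. max correlation, 2. new support
        X_T, *_ = np.linalg.lstsq(A[:, T], Y, rcond=None)  # 3. X^j = A_{T^j}^+ Y
        X = np.zeros_like(X)
        X[T, :] = X_T
        R = Y - A @ X                            # 4. update residual
    return X, sorted(T)
```

On an easy Gaussian instance (m well above k, coefficients of unit magnitude), this sketch recovers the joint support and drives the residual to numerical zero.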

12. Rank Aware Simultaneous OMP: RA-SOMP [Davies, Eldar]
Initialization: X⁰ = 0, T⁰ = ∅, R⁰ = Y.
for j = 1; j = j + 1; do
    1. Rank awareness: compute U^{j−1} = ortho(R^{j−1})
    2. Max correlation: i_j = argmax_i ‖A_i^* U^{j−1}‖₂
    3. New support: T^j = T^{j−1} ∪ {i_j}
    4. Update approximation: X^j = A_{T^j}^† Y
    5. Update residual: R^j = Y − A X^j
Output: X̂ = X^{j⋆}, where j⋆ is the final completed iteration.
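RA-SOMP differs from SOMP in a single line: the correlations are taken against an orthonormal basis of the residual rather than the residual itself. A sketch in the same style, with ortho(·) realized as a thin QR factorization (one standard choice, assumed here, not specified on the slide):

```python
import numpy as np

def ra_somp(A, Y, k):
    """Rank-aware SOMP sketch: SOMP with the residual replaced by an
    orthonormal basis U = ortho(R) before the correlation step."""
    n = A.shape[1]
    T = []                                   # support T^0 = empty
    R = Y.copy()                             # residual R^0 = Y
    X = np.zeros((n, Y.shape[1]))
    for _ in range(k):
        U, _ = np.linalg.qr(R)               # 1. rank awareness: U^{j-1} = ortho(R^{j-1})
        corr = np.linalg.norm(A.T @ U, axis=1)   # 2. ||A_i^* U^{j-1}||_2
        corr[T] = -np.inf                    # never reselect an index
        T.append(int(np.argmax(corr)))       # 3. new support
        X_T, *_ = np.linalg.lstsq(A[:, T], Y, rcond=None)  # 4. X^j = A_{T^j}^+ Y
        X = np.zeros_like(X)
        X[T, :] = X_T
        R = Y - A @ X                        # 5. update residual
    return X, sorted(T)
```

Note that once rank(R) drops below the number of columns, the trailing columns of the QR factor carry no information; this is the rank degeneration of the residual discussed next.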

13. Preserving Rank Awareness
RA-SOMP suffers from rank degeneration of the residual. Two solutions:
RA-Order Recursive MP (RA-ORMP) [Davies/Eldar]: max correlation i_j = argmax_i ‖A_i^* U^{j−1}‖₂ / ‖P⊥_{T^{j−1}} A_i‖₂.
RA-SOMP + MUSIC [B./Davies and Lee/Bresler/Junge]: apply RA-SOMP for k − r iterations, then apply MUSIC.
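The RA-ORMP selection rule can be sketched as a standalone function. This is a dense-projector implementation chosen for clarity, not efficiency (the helper name `ra_ormp_select` and the rank tolerance are our assumptions; in practice the projection is updated incrementally):

```python
import numpy as np

def ra_ormp_select(A, U, T):
    """RA-ORMP selection: correlate the orthonormalized residual U with each
    column of A, normalized by the column's norm after projecting out the
    span of the already-selected columns A_T."""
    m, n = A.shape
    if T:
        Q, _ = np.linalg.qr(A[:, T])     # orthonormal basis for span(A_T)
        P_perp = np.eye(m) - Q @ Q.T     # projector onto its orthogonal complement
    else:
        P_perp = np.eye(m)
    scores = np.full(n, -np.inf)
    for i in range(n):
        nrm = np.linalg.norm(P_perp @ A[:, i])
        if nrm > 1e-12:                  # skip columns already in span(A_T)
            scores[i] = np.linalg.norm(A[:, i] @ U) / nrm
    return int(np.argmax(scores))
```

The normalization by ‖P⊥ A_i‖₂ is what prevents the rank degeneration: columns nearly inside the current span are penalized rather than rewarded.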

14. MMV Recovery Guarantees
Typical worst-case MMV recovery guarantees reduce to the SMV case.
Worst-case MMV problem: rank(X) = 1, i.e. x = x₁ = x₂ = ⋯ = x_r, so that X = [x | x | ⋯ | x].
For A from the Gaussian ensemble (entries drawn i.i.d. from N(0, m⁻¹)), SOMP recovers X from Y with high probability provided m ≳ C k (log(n) + 1).
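The rank-1 worst case is easy to verify numerically: when all columns of X are identical, the MMV correlation scores used by SOMP are exactly the SMV scores scaled by √r, so the selection order is identical to single-vector OMP (dimensions below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 64, 256, 6
A = rng.normal(size=(m, n))
x = rng.normal(size=n)

# Worst case: all r columns equal, so X = [x | x | ... | x] has rank 1.
X = np.tile(x[:, None], (1, r))
Y = A @ X
assert np.linalg.matrix_rank(X) == 1

# The MMV correlation scores collapse to the SMV ones, scaled by sqrt(r):
mmv = np.linalg.norm(A.T @ Y, axis=1)
smv = np.abs(A.T @ (A @ x))
assert np.allclose(mmv, np.sqrt(r) * smv)
```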

15. Rank Aware Recovery Guarantees
Rank-aware algorithms incorporate the rank in the analysis, and the rank reduces the logarithmic penalty:
Theorem (B., Davies 2012). Suppose X ∈ R^{n×r}, T = rowsupp(X) with |T| = k, rank(X) = r < k, and X(T) is in general position. If A is drawn from the Gaussian ensemble (independently of X), then both RA-SOMP + MUSIC and RA-ORMP recover X from Y with high probability provided
m ≳ C k (log(n)/r + 1).
When r ∼ log(n), the number of required measurements is linearly proportional to the row-sparsity of X.
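A back-of-the-envelope comparison makes the theorem concrete: with constants suppressed (so only the ratio between the two bounds is meaningful, not the absolute numbers), the rank-aware bound drops from logarithmic to linear in k when r ∼ log(n):

```python
import math

# Illustrative comparison of the two measurement bounds (constants suppressed):
# rank-blind:  m ~ k (log n + 1);   rank-aware:  m ~ k (log n / r + 1).
n, k = 10_000, 20
r = round(math.log(n))          # r ~ log(n), the regime highlighted on the slide

blind = k * (math.log(n) + 1)
aware = k * (math.log(n) / r + 1)

# With r ~ log(n) the rank-aware bound is ~ 2k: linear in the row-sparsity.
print(f"rank-blind ~ {blind:.0f}, rank-aware ~ {aware:.0f}")
```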

