Weighted Superimposed Codes and Constrained Compressed Sensing

Wei Dai (ECE UIUC), joint work with Olgica Milenkovic (ECE UIUC), University of Illinois at Urbana-Champaign. DIMACS 2009.


  1. Weighted Superimposed Codes and Constrained Compressed Sensing. Wei Dai (ECE UIUC), joint work with Olgica Milenkovic (ECE UIUC). University of Illinois at Urbana-Champaign, DIMACS 2009.

  2. Compressed Sensing. Classic setup (Kashin, 1977; Bresler et al., 1999; Donoho et al., 2004; Candès et al., 2005; ...). Only one constraint: x ∈ R^N is K-sparse.
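
For reference, the measurement model this classic setup refers to, in display form; the noise term e and the dimension conventions are the standard formulation, not spelled out on the slide:

```latex
% Classic CS: recover a K-sparse x from m << N linear measurements.
y = \Phi x + e, \qquad \Phi \in \mathbb{R}^{m \times N}, \quad
\|x\|_0 \le K, \quad m \ll N .
```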

  3. Constrained Compressed Sensing. Constraints on x: the x_i's are correlated (Dai & Milenkovic; Baraniuk et al.; ...); the x_i are bounded integers; such constraints may improve performance. Constraints on Φ: sparse/structured (Dai & Milenkovic; Indyk et al.; Do et al.; Strauss et al.); ℓ_p-norm normalization plus nonnegativity; such constraints may introduce a performance loss. Performance requirement on noise tolerance.

  4. Application 1: CS DNA Microarrays. A DNA microarray measures the concentration of certain molecules (such as mRNA) for tens of thousands of genes simultaneously. Major issue: each sequence has a unique identifier ⇒ high cost. CS DNA microarray (Dai, Sheikh, Milenkovic and Baraniuk; Hassibi). Constraints: x_i = the number of certain molecules, a bounded integer with |x_i| ≤ t; Φ_{i,j} = the affinity (the probability) between probe and target, with ‖Φ_i‖_{ℓ_1} = 1 and Φ_{i,j} ≥ 0. The same model works for low-light imaging, drug screening, ...
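
A minimal numerical sketch of this constrained model; the sizes and the random affinity matrix are chosen purely for illustration (nothing here is from the talk):

```python
# Constrained microarray model: nonnegative Phi with unit-l1 columns
# (affinity probabilities) and a bounded-integer, K-sparse count vector x.
import numpy as np

rng = np.random.default_rng(0)
m, N, K, t = 32, 128, 4, 7          # illustrative sizes, chosen arbitrarily

# Nonnegative affinities, each column normalized so ||Phi_i||_1 = 1.
Phi = rng.random((m, N))
Phi /= Phi.sum(axis=0)

# K-sparse x with bounded integer entries; molecule counts are nonnegative.
x = np.zeros(N, dtype=int)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.integers(1, t + 1, size=K)

y = Phi @ x                          # noiseless measurements
```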

  5. Application 2: Multiuser Communications. A multi-access channel with K users: y = Σ_{i=1}^{K} √(P_i) h_i t_i + e, where t_i ∈ C_i, C_i is the i-th user's codebook, and |C_i| = n_i.
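
A sketch of this channel model under assumed conventions (complex Gaussian fading and noise, real random codebooks, one common codebook size; all parameter values are mine):

```python
# K-user multi-access channel: y = sum_i sqrt(P_i) * h_i * t_i + e,
# with each user transmitting a word t_i drawn from its codebook C_i.
import numpy as np

rng = np.random.default_rng(1)
m, K, n_i = 64, 8, 16                # codeword length, users, |C_i| = n_i

P = rng.uniform(0.5, 2.0, size=K)                    # per-user powers
h = rng.normal(size=K) + 1j * rng.normal(size=K)     # flat fading gains
codebooks = [rng.normal(size=(n_i, m)) for _ in range(K)]

sent = [rng.integers(n_i) for _ in range(K)]         # each user picks a word
e = 0.1 * (rng.normal(size=m) + 1j * rng.normal(size=m))  # receiver noise

y = sum(np.sqrt(P[i]) * h[i] * codebooks[i][sent[i]] for i in range(K)) + e
```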

  6. Questions regarding Constrained CS (CCS). How to analyze the gain/loss for a given set of constraints? How do the constraints affect the reconstruction algorithms? Our observation: coding-theoretic techniques help.

  7. Superimposed Codes. Euclidean superimposed codes (Ericson and Györfi, 1988): x_i ∈ {0, 1}; ‖v_i‖_2 = 1; distance requirement ⇒ deterministic noise tolerance: ‖Φ(x_1 − x_2)‖_2 ≥ d for all x_1 ≠ x_2. Applications ⇒ weighted superimposed codes (WSC) (Dai and Milenkovic, 2008): x_i is an integer with |x_i| ≤ t; ‖v_i‖_p = 1; distance requirement ‖Φ(x_1 − x_2)‖_p ≥ d for all x_1 ≠ x_2. A hybrid of CS and Euclidean superimposed codes.
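
To make the distance requirement concrete, a small randomized check (my illustration, with arbitrary parameters) that estimates the minimum ℓ_p distance of a random candidate code by sampling message pairs:

```python
# Empirically check the WSC distance requirement
# ||Phi (x1 - x2)||_p >= d over random message pairs.
import numpy as np

rng = np.random.default_rng(2)
m, N, K, t, p = 24, 40, 3, 2, 2     # small, arbitrary parameters

Phi = rng.normal(size=(m, N))
Phi /= np.linalg.norm(Phi, ord=p, axis=0)   # columns v_i with ||v_i||_p = 1

def random_message():
    x = np.zeros(N, dtype=int)
    s = rng.choice(N, size=K, replace=False)
    x[s] = rng.integers(-t, t + 1, size=K)  # bounded integer entries
    return x

# Estimate the minimum pairwise codeword distance by sampling.
d_min = min(
    np.linalg.norm(Phi @ (x1 - x2), ord=p)
    for x1, x2 in ((random_message(), random_message()) for _ in range(2000))
    if np.any(x1 != x2)
)
print(f"empirical minimum distance ~ {d_min:.3f}")
```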

  8. Rate Bounds for WSCs. Definition: let N(m, K, d, t) = max{N : ∃ C}. The asymptotic code rate is defined as R(K, d, t) = limsup_{m→∞} log N(m, K, d, t) / m. Theorem: for the Euclidean norm, (log K)/(4K) (1 + o(1)) ≤ R(K, d, t) ≤ (log K)/(2K) (1 + o_{t,d}(1)). For ℓ_1-WSCs and nonnegative ℓ_1-WSCs, (log K)/(4K) (1 + o(1)) ≤ R(K, d, t) ≤ (log K)/K (1 + o_{t,d}(1)).
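
The definition and theorem restated in display form, for readability:

```latex
R(K,d,t) \;=\; \limsup_{m \to \infty} \frac{\log N(m,K,d,t)}{m},
\qquad
\frac{\log K}{4K}\,(1+o(1)) \;\le\; R(K,d,t) \;\le\;
\begin{cases}
\dfrac{\log K}{2K}\,(1+o_{t,d}(1)), & \ell_2\text{-WSC},\\[4pt]
\dfrac{\log K}{K}\,(1+o_{t,d}(1)), & \ell_1\text{-WSC (incl.\ nonnegative)}.
\end{cases}
```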

  9. Interpretation. For WSCs, K log N / log K ≤ m ≤ 4 K log N / log K. The bounds are essentially independent of d ⇒ the distance can be made arbitrarily close to one. For classic CS, m = Ω(K log(N/K)), with no performance guarantee under noise.
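
A quick numeric illustration of these scalings (my example, with base-2 logarithms assumed; the classic-CS line is only an order-of-magnitude comparison since the constant is unspecified):

```python
# Compare the WSC measurement bounds with the classic-CS scaling.
from math import log2

K, N = 16, 256
wsc_lower = K * log2(N) / log2(K)        # K log N / log K  = 32
wsc_upper = 4 * K * log2(N) / log2(K)    # 4K log N / log K = 128
cs_order  = K * log2(N / K)              # K log(N/K)       = 64
print(wsc_lower, wsc_upper, cs_order)    # 32.0 128.0 64.0
```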

  10. The Proof of the Upper Bound. Low-hanging fruit, the sphere-packing bound: minimum distance d ⇒ the balls B(Φx, d/2) are disjoint ⇒ Σ_{k=1}^{K} (N choose k) (2t)^k ≤ ((tK + d/2) / (d/2))^m ⇒ log N / m ≲ (log K)/K. High-hanging fruit: a large fraction of the balls lie in a sphere of smaller radius, giving log N / m ≤ (log √K)/K = (log K)/(2K).
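
The sphere-packing chain reconstructed in display form: the left side counts the K-sparse messages with integer entries bounded by t, and the right side bounds how many disjoint radius-d/2 balls fit inside a ball of radius tK + d/2.

```latex
\sum_{k=1}^{K} \binom{N}{k}\,(2t)^{k}
\;\le\; \left( \frac{tK + d/2}{\,d/2\,} \right)^{m}
\quad \Longrightarrow \quad
\frac{\log N}{m} \;\lesssim\; \frac{\log K}{K}.
```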

  11. Proof of the Lower Bounds: Random Coding. Random codes: H ∈ R^{m×N} is a Gaussian random matrix with H_{i,j} ~ N(0, 1/m), and Φ is formed by v_i = h_i / ‖h_i‖_p. Then d ≤ ‖Δy‖_p = ‖Φ(x_1 − x_2)‖_p, and each (Δy)_i ≈ a linear combination of Gaussian random variables. The ℓ_p-norm of a Gaussian vector is handled via large deviations, giving R(K, d, t) = limsup_{(m,N)→∞} log N / m ≥ (log K)/(4K) (1 + o(1)). Difficulty with nonnegativity: use a Gaussian approximation, with the Berry-Esseen theorem bounding the approximation error.
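
An empirical illustration of the concentration behind the random-coding argument (my sketch, not the proof): draw H with H_ij ~ N(0, 1/m), normalize columns as v_i = h_i/‖h_i‖_p, and observe that the spread of ‖Φ(x_1 − x_2)‖_p shrinks relative to its mean as m grows.

```python
# Concentration of ||Phi (x1 - x2)||_p for a random Gaussian code.
import numpy as np

rng = np.random.default_rng(3)
p, N, K, t = 1, 64, 4, 2

for m in (50, 500, 5000):
    H = rng.normal(scale=1.0 / np.sqrt(m), size=(m, N))
    Phi = H / np.linalg.norm(H, ord=p, axis=0)      # v_i = h_i / ||h_i||_p
    dists = []
    for _ in range(200):
        delta = np.zeros(N, dtype=int)
        s = rng.choice(N, size=K, replace=False)
        delta[s] = rng.integers(-2 * t, 2 * t + 1, size=K)  # delta = x1 - x2
        if delta.any():
            dists.append(np.linalg.norm(Phi @ delta, ord=p))
    print(m, np.mean(dists), np.std(dists))  # std shrinks relative to mean
```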

  12. Code Construction and Decoding Algorithms. Coding theory offers a myriad of construction techniques, but no efficient decoding methods for WSC codes were known before. CS offers algorithmic decoding solutions: ℓ_1-minimization, OMP, SP, CoSaMP, ... Combination?

  13. Decoding. The WESC decoder: x̂_i = round(v_i^* y), with no iteration; OMP takes K iterations. Discrete input ⇒ complexity reduction: the WESC decoder costs O(mN), OMP costs O(KmN). Code rate for both the WESC decoder and OMP: R ≤ 1/(8 K² t²) ⇒ m = O(K² log N).
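
A minimal sketch of this one-shot decoder in the noiseless real case (my parameter choices; m is deliberately generous so that rounding succeeds):

```python
# One-shot WESC decoding: x_hat_i = round(v_i^* y), computed for all i
# at once as round(Phi^T y), an O(mN) operation with no iterations.
import numpy as np

rng = np.random.default_rng(4)
m, N, K, t = 4096, 64, 2, 2        # large m keeps cross-terms below 1/2

Phi = rng.normal(size=(m, N))
Phi /= np.linalg.norm(Phi, axis=0)              # ||v_i||_2 = 1

x = np.zeros(N, dtype=int)
s = rng.choice(N, size=K, replace=False)
x[s] = rng.integers(-t, t + 1, size=K)          # bounded integer message

y = Phi @ x                                     # noiseless measurements

x_hat = np.rint(Phi.T @ y).astype(int)          # round correlations
print("exact recovery:", np.array_equal(x_hat, x))
```

Here each correlation equals x_i plus small Gaussian cross-terms of order ‖x‖_1/√m, so rounding recovers the integer message whenever those terms stay below 1/2.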

  14. Multiuser Interference Cancellation and Decoding. High mobility ⇒ no channel information at the transmitters. Coding and decoding motivated by CS. [Figure: error probability vs. SNR (dB) for m = 128, N = 256, K = 16, 1000 realizations; curves compare ML decoding + SIC against subspace-based decoding.]

  15. Conclusion. WSCs for constrained CS: quantified the code rate; noise tolerance; efficient decoding.
