
Linear Quantization by Effective Resistance Sampling
Yining Wang, Carnegie Mellon University. Joint work with Aarti Singh.
ICASSP 2018, Calgary, Canada


  1. Linear Quantization by Effective Resistance Sampling. Yining Wang, Carnegie Mellon University. Joint work with Aarti Singh. ICASSP 2018, Calgary, Canada.

  2. QUANTIZED LINEAR SENSING
❖ The linear model: y = X β₀
✴ X: n-by-p "design" matrix, fully known
✴ y: n-dimensional vector of sensing results
✴ β₀: p-dimensional unknown signal to be recovered

  3. QUANTIZED LINEAR SENSING
❖ The linear model: y = X β₀
❖ The quantized sensing problem:
✴ Measurements of y cannot be made with arbitrary precision
✴ A total budget of k bits is allocated across the measurements y_i
✴ Each y_i is rounded to the nearest quantization level using k_i binary bits:
   ỹ_i = 2^(−(k_i−1)) · round(2^(k_i−1) · y_i / M)

  4. QUANTIZED LINEAR SENSING
❖ The linear model: y = X β₀
❖ The quantized sensing problem:
✴ Measurements of y cannot be made with arbitrary precision
✴ A total budget of k bits is allocated across the measurements y_i
✴ Each y_i is rounded to the nearest quantization level using k_i binary bits:
   ỹ_i = 2^(−(k_i−1)) · round(2^(k_i−1) · y_i / M), where M is the range of y

  5. QUANTIZED LINEAR SENSING
❖ The linear model: y = X β₀
❖ The quantized sensing problem:
✴ Measurements of y cannot be made with arbitrary precision
❖ Example applications:
✴ Brain activity measurements: total signal strength is limited
✴ Distributed sensing: communication of the signal is limited

  6. QUANTIZED LINEAR SENSING
❖ The linear model: y = X β₀
❖ The quantized sensing problem: ỹ_i = 2^(−(k_i−1)) · round(2^(k_i−1) · y_i / M)
❖ Question: how should the measurement bits be allocated to achieve the best statistical efficiency?

  7. DITHERING
❖ "Dithering": ỹ_i = 2^(−(k_i−1)) · round(2^(k_i−1) · (y_i / M + δ_i))
✴ Artificial noise δ_i is introduced so that the statistical error is independent
✴ Equivalent model: ỹ_i = ⟨x_i, β₀⟩ + ε_i, with E[ε_i] = 0 and E[ε_i²] ≤ 4^(−(k_i+1)) · M²

  8. DITHERING
❖ "Dithering": ỹ_i = 2^(−(k_i−1)) · round(2^(k_i−1) · (y_i / M + δ_i))
✴ δ_i: uniform noise between two adjacent quantization levels
✴ Artificial noise is introduced so that the statistical error is independent
✴ Equivalent model: ỹ_i = ⟨x_i, β₀⟩ + ε_i, with E[ε_i] = 0 and E[ε_i²] ≤ 4^(−(k_i+1)) · M²
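The dithering step on slides 7-8 can be sketched numerically. This is a minimal illustration, not the authors' code: the range M, the bit count k_i, and the exact placement of the dither inside the rounding are assumptions read off the slide formula.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4.0     # assumed range of y: |y_i| <= M
k_i = 5     # bits spent on this measurement (assumed)

def dithered_quantize(y, k, M, rng):
    """Round y/M to k-bit precision after adding uniform dither delta."""
    scale = 2.0 ** (k - 1)
    delta = rng.uniform(-0.5, 0.5) / scale   # uniform noise, one grid step wide
    return M * np.round(scale * (y / M + delta)) / scale

# dithering makes the quantization error unbiased: E[eps] = 0
y = 1.2345
eps = np.array([dithered_quantize(y, k_i, M, rng) - y for _ in range(200_000)])
print(round(eps.mean(), 3))   # close to 0
```

With uniform dither of one grid step, the rounded value lands on the level above or below y with probabilities that average out exactly to y, which is what makes the equivalent-model error ε_i zero-mean.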

  9. WEIGHTED OLS
❖ "Dithering": ỹ_i = 2^(−(k_i−1)) · round(2^(k_i−1) · (y_i / M + δ_i)) = ⟨x_i, β₀⟩ + ε_i, E[ε_i²] ≤ 4^(−(k_i+1)) · M²
❖ Weighted ordinary least squares (OLS): β̂_k = (Xᵀ W X)⁻¹ Xᵀ W ỹ

  10. WEIGHTED OLS
❖ "Dithering": ỹ_i = 2^(−(k_i−1)) · round(2^(k_i−1) · (y_i / M + δ_i)) = ⟨x_i, β₀⟩ + ε_i, E[ε_i²] ≤ 4^(−(k_i+1)) · M²
❖ Weighted ordinary least squares (OLS): β̂_k = (Xᵀ W X)⁻¹ Xᵀ W ỹ, with
   W = diag(w_1, w_2, …, w_n) = diag(4^(k_1+1), 4^(k_2+1), …, 4^(k_n+1))

  11. WEIGHTED OLS
❖ Weighted ordinary least squares (OLS): β̂_k = (Xᵀ W X)⁻¹ Xᵀ W ỹ
❖ Error bound: E‖β̂_k − β₀‖₂² ≤ M² · tr[(Σ_{i=1}^n 4^(k_i+1) x_i x_iᵀ)⁻¹]
❖ Optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℕ

  12. WEIGHTED OLS
❖ Weighted ordinary least squares (OLS): β̂_k = (Xᵀ W X)⁻¹ Xᵀ W ỹ
❖ Error bound: E‖β̂_k − β₀‖₂² ≤ M² · tr[(Σ_{i=1}^n 4^(k_i+1) x_i x_iᵀ)⁻¹]
   (note that Xᵀ W X = Σ_{i=1}^n 4^(k_i+1) x_i x_iᵀ)
❖ Optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℕ

  13. WEIGHTED OLS
❖ Weighted ordinary least squares (OLS): β̂_k = (Xᵀ W X)⁻¹ Xᵀ W ỹ
❖ Error bound: E‖β̂_k − β₀‖₂² ≤ M² · tr[(Σ_{i=1}^n 4^(k_i+1) x_i x_iᵀ)⁻¹]
❖ Optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℕ
✴ Combinatorial… hard!
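The dithering-plus-weighted-OLS pipeline of slides 7-13 can be sketched end to end. A minimal sketch with made-up sizes: the bit allocation k, the range M, and the problem dimensions are all assumptions for the demo, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.standard_normal((n, p))     # known design matrix
beta0 = rng.standard_normal(p)      # unknown signal to recover
y = X @ beta0

# hypothetical bit allocation and range (assumptions for the demo)
k = rng.integers(3, 8, size=n)
M = float(np.abs(y).max()) * 1.01

# dithered quantization of each y_i with k_i bits (slides 7-8)
scale = 2.0 ** (k - 1)
delta = rng.uniform(-0.5, 0.5, size=n) / scale
y_tilde = M * np.round(scale * (y / M + delta)) / scale

# weighted OLS with w_i = 4^(k_i+1): low-precision rows get small weight,
# matching the inverse of the per-row error-variance bound 4^(-(k_i+1)) M^2
w = 4.0 ** (k + 1)
XtW = X.T * w                                   # X^T W without forming diag(w)
beta_hat = np.linalg.solve(XtW @ X, XtW @ y_tilde)
print(np.linalg.norm(beta_hat - beta0))         # small recovery error
```

The weights are exactly the reciprocals of the variance bound (up to the common factor M²), which is the classical inverse-variance weighting that makes weighted OLS efficient here.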

  14. CONTINUOUS RELAXATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k
❖ Still a challenging problem…
✴ Non-convexity of the objective!

  15. CONTINUOUS RELAXATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k,
   with the integrality constraint k_i ∈ ℕ relaxed to k_i ∈ ℝ₊
❖ Still a challenging problem…
✴ Non-convexity of the objective!

  16. CONTINUOUS RELAXATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℝ₊
❖ A re-formulation:
   min_k tr[(Σ_{i=1}^n 4^(k_i+1) x_i x_iᵀ)⁻¹] s.t. Σ_{i=1}^n k_i ≤ k
⇔ min_w tr[(Σ_{i=1}^n w_i x_i x_iᵀ)⁻¹] s.t. Σ_{i=1}^n (log₄(w_i) − 1) ≤ k

  17. CONTINUOUS RELAXATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℝ₊
❖ A re-formulation:
   min_k tr[(Σ_{i=1}^n 4^(k_i+1) x_i x_iᵀ)⁻¹] s.t. Σ_{i=1}^n k_i ≤ k
⇔ min_w tr[(Σ_{i=1}^n w_i x_i x_iᵀ)⁻¹] s.t. Σ_{i=1}^n (log₄(w_i) − 1) ≤ k   (convex objective in w)

  18. CONTINUOUS RELAXATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℝ₊
❖ A re-formulation:
   min_k tr[(Σ_{i=1}^n 4^(k_i+1) x_i x_iᵀ)⁻¹] s.t. Σ_{i=1}^n k_i ≤ k
⇔ min_w tr[(Σ_{i=1}^n w_i x_i x_iᵀ)⁻¹] s.t. Σ_{i=1}^n (log₄(w_i) − 1) ≤ k
   (convex objective in w, but non-convex feasible set)
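Both observations on this slide, convex objective but non-convex feasible set, can be checked numerically. A minimal sketch with made-up data (the dimensions and weight ranges are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 8, 3
X = rng.standard_normal((n, p))

def f(w):
    # objective of the re-formulation: tr[(sum_i w_i x_i x_i^T)^(-1)]
    return np.trace(np.linalg.inv(X.T @ (X * w[:, None])))

# midpoint convexity of f on random positive weight vectors
gaps = [(f(a) + f(b)) / 2 - f((a + b) / 2)
        for a, b in (rng.uniform(0.1, 5.0, size=(2, n)) for _ in range(100))]
print(min(gaps) >= -1e-9)          # True: the objective is convex in w

# ...but the feasible set {w > 0 : sum_i (log4(w_i) - 1) <= k} is not convex.
# Take n = 2, k = 0, i.e. the constraint w1 * w2 <= 16:
w_a, w_b = np.array([32.0, 0.5]), np.array([0.5, 32.0])
mid = (w_a + w_b) / 2
print(np.prod(w_a) <= 16, np.prod(mid) <= 16)   # True False
```

The two endpoint allocations are feasible but their average is not, which is exactly why the relaxed problem cannot be handed to an off-the-shelf convex solver in the w parameterization.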

  19. CONTINUOUS RELAXATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℝ₊
❖ A re-formulation:
   min_w tr[(Σ_{i=1}^n w_i x_i x_iᵀ)⁻¹] s.t. Σ_{i=1}^n (log₄(w_i) − 1) ≤ k
❖ Lagrangian (penalized) form:
   min_w tr[(Σ_{i=1}^n w_i x_i x_iᵀ)⁻¹] + λ · [Σ_{i=1}^n log₄(w_i) − (n + k)]

  20. CONTINUOUS RELAXATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℝ₊
❖ A re-formulation:
   min_w tr[(Σ_{i=1}^n w_i x_i x_iᵀ)⁻¹] s.t. Σ_{i=1}^n (log₄(w_i) − 1) ≤ k
❖ Lagrangian (penalized) form:
   min_w tr[(Σ_{i=1}^n w_i x_i x_iᵀ)⁻¹] + λ · [Σ_{i=1}^n log₄(w_i) − (n + k)]
   (the trace term is a convex objective in w)

  21. CONTINUOUS RELAXATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℝ₊
❖ A re-formulation:
   min_w tr[(Σ_{i=1}^n w_i x_i x_iᵀ)⁻¹] s.t. Σ_{i=1}^n (log₄(w_i) − 1) ≤ k
❖ Lagrangian (penalized) form:
   min_w tr[(Σ_{i=1}^n w_i x_i x_iᵀ)⁻¹] + λ · [Σ_{i=1}^n log₄(w_i) − (n + k)]
   (the trace term is a convex objective; the λ · Σ log₄(w_i) term is a concave objective)

  22. CONTINUOUS RELAXATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℝ₊
❖ A re-formulation:
✴ DC (Difference of Convex functions) programming:
   min_w tr[(Σ_{i=1}^n w_i x_i x_iᵀ)⁻¹] − λ · [(n + k) − Σ_{i=1}^n log₄(w_i)]
   (both bracketed terms are convex in w)
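The slides propose DC programming for this penalized objective. As a rough numerical stand-in (not the authors' algorithm), one can simply run projected gradient descent on the penalized objective of slide 19; the penalty λ, the step size, and the data are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 12, 3
X = rng.standard_normal((n, p))
lam = 0.05      # assumed penalty parameter

def penalized(w):
    # tr[(sum_i w_i x_i x_i^T)^(-1)] + lam * sum_i log4(w_i)   (constant dropped)
    A_inv = np.linalg.inv(X.T @ (X * w[:, None]))
    return np.trace(A_inv) + lam * np.sum(np.log(w)) / np.log(4)

w = np.ones(n)
start = penalized(w)
lr = 0.1
for _ in range(2000):
    A_inv = np.linalg.inv(X.T @ (X * w[:, None]))
    # d/dw_i tr[A^(-1)] = -x_i^T A^(-2) x_i ;  d/dw_i lam*log4(w_i) = lam/(w_i ln 4)
    g = -np.einsum('ij,jk,kl,il->i', X, A_inv, A_inv, X) + lam / (w * np.log(4))
    w = np.clip(w - lr * g, 1e-3, None)     # projection: keep w_i > 0
print(penalized(w) < start)    # True: the objective decreased
```

A proper DC approach would instead linearize the concave log term at the current iterate and solve the resulting convex subproblem (the convex-concave procedure); the gradient sketch above only illustrates that the penalized objective is smooth and can be decreased locally.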

  23. ROUNDING / SPARSIFICATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℝ₊
❖ How to obtain integral solutions? "Sparsify" k
✴ Idea 1: round to the nearest integer
✴ Problem: may cause the objective to increase significantly

  24. ROUNDING / SPARSIFICATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℝ₊
❖ How to obtain integral solutions? "Sparsify" k
✴ Idea 2: simple sampling
❖ Sample i from the distribution obtained by normalizing k
❖ Set k(i) ← k(i) + 1
✴ Problem: slow convergence (requires a large budget k)

  25. ROUNDING / SPARSIFICATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℝ₊
❖ How to obtain integral solutions? "Sparsify" k
✴ Idea 3: effective resistance sampling: t ∼ p_t ∝ 4^(k_t+1) · ℓ_t
✴ Advantage: fast convergence (budget k independent of the condition numbers of X or W)

  26. ROUNDING / SPARSIFICATION
❖ Continuously relaxed optimal quantization: min_k tr[(Xᵀ W X)⁻¹] s.t. k_1 + ⋯ + k_n ≤ k, k_i ∈ ℝ₊
❖ How to obtain integral solutions? "Sparsify" k
✴ Idea 3: effective resistance sampling: t ∼ p_t ∝ 4^(k_t+1) · ℓ_t
   Effective resistance: ℓ_t = x_tᵀ (Xᵀ W* X)⁻¹ x_t, where W* = diag(w*) at the relaxed solution
✴ Advantage: fast convergence (budget k independent of the condition numbers of X or W)
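Idea 3 can be sketched as follows. This is a minimal illustration under stated assumptions: the fractional solution k* is faked with random values, and the "one bit of budget per draw" allocation rule mirrors idea 2 rather than being spelled out on the slide.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 4
X = rng.standard_normal((n, p))

# hypothetical relaxed solution: fractional bit counts k*_i, weights w*_i = 4^(k*_i+1)
k_star = rng.uniform(1.0, 6.0, size=n)
w_star = 4.0 ** (k_star + 1)

# effective resistances l_t = x_t^T (X^T W* X)^(-1) x_t under the weighted design
A_inv = np.linalg.inv(X.T @ (X * w_star[:, None]))
ell = np.einsum('ij,jk,ik->i', X, A_inv, X)

# sample measurements with p_t proportional to 4^(k_t+1) * l_t,
# spending one bit of the budget per draw (allocation rule assumed)
prob = w_star * ell
prob /= prob.sum()
budget = 100
k_int = np.bincount(rng.choice(n, size=budget, p=prob), minlength=n)
print(k_int.sum())    # 100: the integral allocation exactly meets the bit budget
```

Weighting the sampling distribution by the effective resistances ℓ_t favors measurements that are statistically influential for the weighted design, which is what gives the condition-number-free convergence claimed on the slide.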

  27. OPEN QUESTIONS
❖ Most important question: how to solve the (continuous) relaxation
   min_k tr[(Σ_{i=1}^n 4^(k_i+1) x_i x_iᵀ)⁻¹] s.t. Σ_{i=1}^n k_i ≤ k
❖ Some ideas:
✴ Is the objective quasi-convex or directionally convex?
✴ Are local minima also global, or approximately global?
❖ Escaping-saddle-point methods?
✴ Are there adequate convex relaxations?

  28. Thank you! Questions?
