Wiener Process to Bits (and back)

Alon Kipnis (Stanford)
Joint work with Yonina Eldar (Technion) and Andrea Goldsmith (Stanford)
November 2017

Overview

Encode the Wiener process W_t, t ≥ 0, into bits via uniform sampling, then reconstruct.


Background: Unconstrained Coding of a Wiener Process

Distortion-rate function [Berger 1970]:

    D_W(R) = 2 / (π² ln 2) · R⁻¹

Achievability: find the coefficients of W_t in its Karhunen–Loève basis,

    A_k = ∫₀^T f_k(t) dW_t,   k = 1, 2, ...

and encode the coefficients using the standard random-coding argument [Shannon]. This requires integration with respect to the Brownian path, which in practice is imprecise at any timescale. This talk: incorporate sampling into the model.
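The Karhunen–Loève basis of the Wiener process on [0, T] is explicit: f_k(t) = √(2/T) sin((k − ½)πt/T) with eigenvalues λ_k = (T/((k − ½)π))². As a quick sanity check (an illustrative sketch, not part of the talk), one can discretize the covariance kernel E[W_s W_t] = min(s, t) and compare its leading eigenvalues against the closed form:

```python
import numpy as np

# Nystrom discretization of the Wiener covariance K(s, t) = min(s, t) on [0, T]
T, n = 1.0, 1000
t = (np.arange(n) + 0.5) * (T / n)            # midpoint grid
K = np.minimum.outer(t, t) * (T / n)          # kernel times quadrature weight
eig = np.sort(np.linalg.eigvalsh(K))[::-1]    # numerical KL eigenvalues, descending

k = np.arange(1, 6)
exact = (T / ((k - 0.5) * np.pi)) ** 2        # closed-form KL eigenvalues
print(eig[:5])
print(exact)                                  # the two lists agree closely
```

The midpoint discretization converges quickly here, so even a moderate grid reproduces the leading eigenvalues to a fraction of a percent.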

Combined Sampling and Coding

    W_T → sampler (rate f_s, T_s = f_s⁻¹) → W̄_T → encoder → M ∈ {0,1}^⌊TR⌋ → decoder → Ŵ_T

Here W̄_T denotes the samples and Ŵ_T the reconstruction. The combined sampling-and-coding distortion is

    D(f_s, R) = lim_{T→∞}  inf_{W̄_T → M → Ŵ_T}  (1/T) ∫₀^T E(W_t − Ŵ_t)² dt  =  ?

(Figure: MSE versus f_s [smp/sec] at R = 1, showing the unconstrained bound D_W(R) = 2/(π² ln 2) · R⁻¹, the sampling-only error mmse(f_s) = (6 f_s)⁻¹, and the unknown curve D(f_s, R) between them.)
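The sampling-only floor mmse(f_s) = (6 f_s)⁻¹ has a direct interpretation: given the samples, the MMSE estimate of W_t between sample instants is linear interpolation (the mean of a Brownian bridge), and the bridge variance averages to 1/(6 f_s) over time. A Monte Carlo sketch (grid sizes and rates here are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt, fs = 1.0, 1e-3, 10.0
n = round(T / dt)                        # fine-grid steps
step = round(1 / (fs * dt))              # fine-grid steps per sample
t = np.arange(n + 1) * dt

trials = 2000
mse = 0.0
for _ in range(trials):
    # one Wiener path on the fine grid
    W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
    # MMSE reconstruction from the samples: linear interpolation
    # (the conditional mean between samples is the Brownian-bridge mean)
    What = np.interp(t, t[::step], W[::step])
    mse += np.mean((W - What) ** 2)
mse /= trials

print(mse, 1 / (6 * fs))                 # both ≈ 1/60
```

With f_s = 10 samples per second the empirical time-averaged MSE lands near 1/60 ≈ 0.0167, matching (6 f_s)⁻¹.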

Main Result: Minimal Distortion under Sampling and Coding

Theorem [K., Goldsmith, Eldar, '16]:

    D(f_s, R) = 1/(6 f_s) + (1/f_s) ∫₀¹ min{ S_W^f(φ), θ } dφ

where the water level θ ≥ 0 is set by the bitrate via

    R = f_s ∫₀¹ (1/2) log⁺[ S_W^f(φ) / θ ] dφ,    log⁺ x = max{log₂ x, 0},

and

    S_W^f(φ) = (2 sin(πφ/2))⁻² − 1/6,   0 ≤ φ ≤ 1,

is the asymptotic density of the Karhunen–Loève eigenvalues of W̃_t = E[W_t | W̄] — i.e., water-filling at level θ over S_W^f(φ).
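The theorem is parametric, so evaluating D(f_s, R) means solving for the water level θ that meets the bitrate R. A numerical sketch under the reconstruction above (log-scale bisection on θ, trapezoidal quadrature); the closed-form value D(1, 2) = (18 + √3)/96 quoted later in the talk serves as a consistency check:

```python
import numpy as np

PHI = np.linspace(1e-7, 1.0, 200_001)

def S(phi):
    # asymptotic KL-eigenvalue density of E[W_t | samples]
    return (2.0 * np.sin(np.pi * phi / 2.0)) ** (-2.0) - 1.0 / 6.0

def trap(y):
    # trapezoidal rule on the uniform PHI grid
    return float(np.sum((y[1:] + y[:-1]) * 0.5) * (PHI[1] - PHI[0]))

def rate(theta, fs):
    return fs * trap(np.maximum(0.5 * np.log2(S(PHI) / theta), 0.0))

def D(fs, R):
    lo, hi = 1e-12, 1e12                 # bracket for the water level theta
    for _ in range(120):                 # bisection in log scale
        mid = np.sqrt(lo * hi)
        if rate(mid, fs) > R:
            lo = mid
        else:
            hi = mid
    theta = np.sqrt(lo * hi)
    return 1.0 / (6.0 * fs) + trap(np.minimum(S(PHI), theta)) / fs

print(D(1.0, 2.0), (18 + np.sqrt(3)) / 96)   # both ≈ 0.2055
```

The small offset 1e-7 at the left edge of the grid avoids the integrable singularity of S_W^f at φ = 0; its contribution to the integrals is negligible.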

Minimal Distortion under Sampling and Coding — Proof

Steps in the proof:

I. Show that D(f_s, R) = lim_{T→∞} inf (1/T) ∫₀^T E(W_t − Ŵ_t)² dt, where the infimum is over reconstructions with I(W̄_T; Ŵ_T) ≤ RT.
II. Compute the solution to this optimization problem.

Step I: define the distortion measure d : R^⌊T f_s⌋ × L²[0, T] → [0, ∞),

    d(w̄_T, ŵ_T) ≝ (1/T) ∫₀^T E[ (W_t − ŵ_t)² | W̄_T = w̄_T ] dt,

so that

    E d(W̄_T, Ŵ_T) = (1/T) ∫₀^T E(W_t − Ŵ_t)² dt.

Now use standard random coding [Shannon] with respect to the samples W̄_n under the distortion measure d (rather than squared error).

Step II:

    D(f_s, R) = lim_{T→∞} [ mmse(W_T | W̄_T) + inf (1/T) ∫₀^T E(W̃_t − Ŵ_t)² dt ],

where W̃_t = E[W_t | W̄_T] and the infimum is again over I(W̄_T; Ŵ_T) ≤ RT. Use the Karhunen–Loève transform of W̃_t to evaluate the last minimization:

    λ_k f_k(t) = ∫₀^T f_k(s) E[W̃_t W̃_s] ds,   k = 1, 2, ...

The covariance kernel of W̃_T has rank ⌊T f_s⌋, so only λ_1, ..., λ_⌊T f_s⌋ are nonzero, and one can ``guess'' the eigenvalues: their asymptotic distribution is

    S_W^f(φ) = (2 sin(πφ/2))⁻² − 1/6.

Analysis

(Figure, R = 1: MSE versus f_s [smp/sec]; D(f_s, R) decreases with f_s from mmse(f_s) toward the unconstrained bound D_W(R) = 2/(π² ln 2) · R⁻¹.)

(Figure, f_s = 1: MSE versus R [bits/sec]; D(f_s, R) decreases with R toward the sampling floor mmse(f_s) = (6 f_s)⁻¹, while D_W(R) → 0.)

Low sampling rate: for R/f_s ≥ (1 + log₂(√3 + 2))/2 ≈ 1.45,

    D(f_s, R) = (1/f_s) [ 1/6 + ((2 + √3)/6) · 2^(−2R/f_s) ].

For example, D(f_s = 1, R = 2) = (18 + √3)/96.
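In this regime the water level θ lies below the minimum of S_W^f, so the parametric solution collapses to the closed form above. A two-line check of the threshold and of the quoted example value:

```python
from math import log2, sqrt

def D_lowrate(fs, R):
    # closed form, valid for R/fs >= (1 + log2(sqrt(3) + 2)) / 2 ≈ 1.45 bits/sample
    return (1.0 / 6.0 + (2.0 + sqrt(3.0)) / 6.0 * 2.0 ** (-2.0 * R / fs)) / fs

print((1 + log2(sqrt(3) + 2)) / 2)               # ≈ 1.45
print(D_lowrate(1.0, 2.0), (18 + sqrt(3)) / 96)  # both ≈ 0.2055
```

At R/f_s = 2 the formula gives 1/6 + (2 + √3)/96 = (18 + √3)/96, exactly the example on the slide.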

Excess Distortion due to Sampling

Excess distortion ratio:

    ρ(R̄) = D(f_s, R) / D_W(R),   R̄ = R/f_s  (bits per sample).

The excess distortion due to sampling is a function of bits per sample only. For example, ρ(R̄ = 1) ≈ 1.12: with 1 bit per sample one can attain 1.12 times the optimal distortion at the same bitrate. To get ρ → 1, we must have R̄ → 0.
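Because ρ depends on R and f_s only through R̄ = R/f_s, it can be evaluated at f_s = 1. A self-contained numerical sketch, assuming the water-filling theorem with S_W^f(φ) = (2 sin(πφ/2))⁻² − 1/6 and Berger's D_W(R) = 2/(π² ln 2) · R⁻¹, reproducing ρ(1) ≈ 1.12:

```python
import numpy as np

PHI = np.linspace(1e-7, 1.0, 200_001)
S = (2.0 * np.sin(np.pi * PHI / 2.0)) ** (-2.0) - 1.0 / 6.0

def trap(y):
    return float(np.sum((y[1:] + y[:-1]) * 0.5) * (PHI[1] - PHI[0]))

def rho(Rbar):
    # excess distortion ratio at fs = 1 and R = Rbar bits per sample
    lo, hi = 1e-12, 1e12                 # bracket for the water level theta
    for _ in range(120):
        mid = np.sqrt(lo * hi)
        if trap(np.maximum(0.5 * np.log2(S / mid), 0.0)) > Rbar:
            lo = mid
        else:
            hi = mid
    D = 1.0 / 6.0 + trap(np.minimum(S, lo))
    DW = 2.0 / (np.pi ** 2 * np.log(2.0) * Rbar)   # Berger's D_W(R)
    return D / DW

print(rho(1.0))   # ≈ 1.12
```

The ratio is monotone in R̄ here: smaller R̄ (more samples per bit) pushes ρ toward 1, consistent with the R̄ → 0 statement above.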

Real Stationary Gaussian Processes

    X_t → sampler → X̄_n → encoder (bitrate R) → decoder → X̂_t,   with PSD  S_X(f) = F{ E X_t X_0 }(f)

Theorem [K., Goldsmith, Eldar, '14]:

    D_X(f_s, R) = mmse(f_s) + ∫_{−f_s/2}^{f_s/2} min{ S_X(f), θ } df

where

    R = (1/2) ∫_{−f_s/2}^{f_s/2} log⁺[ S_X(f) / θ ] df.

(Figure: water-filling at level θ over S_X(f) on [−f_s/2, f_s/2]; distortion versus f_s shows mmse(f_s) vanishing at the Nyquist rate f_Nyq and D_X(f_s, R) decreasing to D_X(R).)
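For a spectrum simple enough to water-fill by hand, the theorem can be verified directly. Take a hypothetical flat bandlimited PSD S_X(f) = 1 for |f| ≤ B (an illustration, not a process from the talk): for f_s above the Nyquist rate 2B we have mmse(f_s) = 0, the water level is θ = 2^(−R/B), and D_X = 2B · 2^(−R/B). A numerical sketch:

```python
import numpy as np

B = 1.0                               # one-sided bandwidth of the flat PSD
F = np.linspace(-B, B, 200_001)
SX = np.ones_like(F)                  # hypothetical S_X(f) = 1 on [-B, B]

def trap(y):
    return float(np.sum((y[1:] + y[:-1]) * 0.5) * (F[1] - F[0]))

def DX(R):
    # supra-Nyquist sampling: mmse = 0, classic reverse water-filling over S_X
    lo, hi = 1e-12, 1.0               # bracket for the water level theta
    for _ in range(120):
        mid = np.sqrt(lo * hi)
        r = trap(np.maximum(0.5 * np.log2(SX / mid), 0.0))
        lo, hi = (mid, hi) if r > R else (lo, mid)
    return trap(np.minimum(SX, lo))

R = 3.0
print(DX(R), 2 * B * 2.0 ** (-R / B))   # both ≈ 0.25
```

This recovers the textbook reverse-water-filling answer for a flat spectrum, the special case in which the theorem's mmse term vanishes.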

Classification of Gaussian Processes

For zero distortion we must have R → ∞ (since D_W(R) → 0 only then). Question: how should f_s(R) be set so that D(f_s, R)/D_W(R) → 1 as R → ∞?

- Wiener process: R/f_s → 0.
- Bandlimited Gaussian processes: R/f_s → ∞.
- Gauss–Markov (Ornstein–Uhlenbeck) process: R/f_s → 1/ln 2.

Classification of Gaussian Processes (cont.)

How to set f_s(R) so that D(f_s, R)/D_W(R) → 1?

Class 1 — processes with rapidly decreasing spectrum: R/f_s → ∞; the challenge in encoding is high-resolution quantization. Criterion:

    lim_{f_s→∞} (1/f_s) ∫_{−f_s}^{f_s} log⁺[ S_X(f) / S_X(f_s) ] df = ∞.

Class 2 — processes with slowly decreasing spectrum: R/f_s < ∞; the challenge in encoding is adapting to the high innovation rate. Criterion:

    lim_{f_s→∞} (1/f_s) ∫_{−f_s}^{f_s} log⁺[ S_X(f) / S_X(f_s) ] df < ∞.
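The criterion separates the two classes numerically on hypothetical spectra (illustrative choices, not from the talk): an Ornstein–Uhlenbeck-type PSD S_X(f) = 2λ/(λ² + (2πf)²) decreases slowly, so the quantity stays bounded as f_s grows, while a Gaussian-shaped PSD S_X(f) = e^(−f²) decreases rapidly, so it diverges. Working with log S_X avoids underflow of e^(−f²) at large f:

```python
import numpy as np

def criterion(logS, fs, n=200_001):
    # (1/fs) * integral over [-fs, fs] of log2+ [ S_X(f) / S_X(fs) ]
    f = np.linspace(-fs, fs, n)
    y = np.maximum((logS(f) - logS(fs)) / np.log(2.0), 0.0)
    return float(np.sum((y[1:] + y[:-1]) * 0.5) * (f[1] - f[0])) / fs

lam = 1.0
log_ou = lambda f: np.log(2 * lam) - np.log(lam ** 2 + (2 * np.pi * f) ** 2)
log_gauss = lambda f: -f ** 2

for fs in (10.0, 100.0, 1000.0):
    print(fs, criterion(log_ou, fs), criterion(log_gauss, fs))
# the OU column stays near a constant; the Gaussian column grows like fs^2
```

The bounded column marks a Class 2 (slowly decreasing) spectrum, the diverging column a Class 1 (rapidly decreasing) one.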

Summary

- Encoding a realization of the Wiener process involves sampling and quantization (encoding).
- There is a closed-form expression for the distortion at any sampling rate and bitrate; e.g., 1 bit per sample attains 1.12 times the optimal distortion at the same bitrate.
- The sampling rate must increase faster than the bitrate in order to get D(f_s, R)/D_W(R) → 1.
- This gives a new way to classify the spectra of continuous-time signals, via the behavior of R/f_s required for D_X(f_s(R), R)/D_X(R) → 1:
  Class 1: R/f_s → ∞ (bandlimited, rapidly decreasing PSD);
  Class 2: R/f_s < ∞ (Wiener, Gauss–Markov, ...).

The End!

A. Kipnis, A. J. Goldsmith and Y. C. Eldar, "Rate-distortion function of sampled Wiener processes," on arXiv.
A. Kipnis, A. J. Goldsmith, Y. C. Eldar and T. Weissman, "Distortion-rate function of sub-Nyquist sampled Gaussian sources," IEEE Trans. Inf. Theory.

Backup: Classification of Gaussian Processes

For zero distortion we must have R → ∞ (since D_W(R) → 0 only then). How to set f_s(R) so that D(f_s, R)/D_W(R) → 1?

- Wiener process: R/f_s → 0.
- Bandlimited Gaussian processes [K., Goldsmith, Eldar, Weissman '13]: R/f_s → ∞.
