

  1. Revisiting Zero-Rate Bounds on the Reliability Function of Discrete Memoryless Channels. Marco Bondaschi & Marco Dalai, Department of Information Engineering, University of Brescia, Italy. ISIT 2020.


  2. Setting
     Code: a set of $M$ codewords $\mathcal{C} = \{x_1, x_2, \ldots, x_M\} \subset \mathcal{X}^n$.
     Rate: $R = \frac{\log M}{n}$.
     Decoding regions: $Y_m = \{\, y \in \mathcal{Y}^n : m \in \mathcal{L}(y) \,\}$, where $\mathcal{L}(y)$ is the decoder's list of at most $L$ messages.
     Probability of error: $P_{e,m} = \sum_{y \notin Y_m} P_m(y)$, $\qquad P_{e,\max} = \max_{m \in \mathcal{M}} P_{e,m}$.
     $L$-list reliability function: $E_L(R) = \lim_{n \to \infty} \frac{-\log P_{e,\max}}{n}$, so that $P_{e,\max} \approx e^{-n E_L(R)}$.
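To make the setting concrete, here is a minimal numerical sketch (not from the talk; the channel, code, and list size are made-up toy choices). It computes the rate and $P_{e,\max}$ for a 3-codeword code over a binary symmetric channel under maximum-likelihood list decoding.

```python
# Toy sketch of the setting (assumed example, not from the talk):
# a small code over a binary symmetric channel, a size-L list decoder,
# and the resulting rate R and maximal error probability P_e,max.
import itertools
import numpy as np

P = np.array([[0.9, 0.1],    # P(y|x): BSC with crossover probability 0.1
              [0.1, 0.9]])   # rows indexed by input x, columns by output y

code = [(0, 0, 0), (1, 1, 1), (0, 1, 1)]   # M = 3 codewords, n = 3
M, n, L = len(code), len(code[0]), 2

R = np.log(M) / n                          # rate R = log(M) / n

def likelihood(y, x):
    """P_m(y) = prod_t P(y_t | x_t) for codeword x."""
    return np.prod([P[xt, yt] for xt, yt in zip(x, y)])

# List decoder: L(y) = the L most likely messages; Y_m = {y : m in L(y)}.
P_e = np.zeros(M)
for y in itertools.product(range(2), repeat=n):
    probs = [likelihood(y, x) for x in code]
    decoded_list = set(np.argsort(probs)[-L:])
    for m in range(M):
        if m not in decoded_list:          # y falls outside Y_m: error for m
            P_e[m] += probs[m]

print(f"R = {R:.3f} nats, P_e,max = {P_e.max():.4f}")
```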

  3. Reliability Function
     [Figure: the reliability function $E_L(R)$; image not recoverable from the transcript.]

  4. Outline of the Proof
     Step 1: Lower-bound $P_{e,\max}$ for codes with $L + 1$ codewords.
     - Berlekamp and Blinovsky's approach: study of a gradient on the boundary of the probability simplex → cumbersome for $L > 1$.
     - Our approach: method of types + a trick by Shayevitz → straightforward for $L > 1$.
     Step 2: For $M \geq L + 1$ codewords, $P_{e,\max}$ is lower-bounded by the largest bound over all subsets of $L + 1$ codewords.

  5. Outline of the Proof (continued)
     Step 3: Upper-bound the smallest error exponent by the average over all $(L + 1)$-subcodes.
     Step 4: Bound the average over a carefully selected subcode.
     - Berlekamp and Blinovsky's approach: selection of an ordered subcode + a complex iterative concatenation of codewords → cumbersome for $L > 1$.
     - Our approach: selection of a subcode using Ramsey theory + a theorem by Komlós (Blinovsky's idea for $L = 1$) → straightforward for $L > 1$.
     Step 5: Show that for the selected subcode $E_L(0) = E_{L,\mathrm{ex}}(0)$.

  6. Step 1: Probability of Error for $L + 1$ Codewords
     For any vector $x = (x_1, \ldots, x_{L+1}) \in \mathcal{X}^{L+1}$, let $q(x)$ be the fraction of positions in which the code has $x$ as a column.
     Fundamental concave function: for any probability vector $\alpha$,
     $$\mu(\alpha) = \sum_{x \in \mathcal{X}^{L+1}} q(x)\, \mu_x(\alpha), \qquad \mu_x(\alpha) = -\log \sum_{y \in \mathcal{Y}} P(y \mid x_1)^{\alpha_1} \cdots P(y \mid x_{L+1})^{\alpha_{L+1}}.$$
     Objective: prove that $P_{e,\max} \geq e^{-n D_M}$, where $D_M = \max_{\alpha} \mu(\alpha)$.
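A minimal sketch of these definitions for $L = 1$ (toy channel and codewords assumed, not from the talk): it builds the column distribution $q$, evaluates $\mu(\alpha)$, and grid-searches $D_M = \max_\alpha \mu(\alpha)$.

```python
# Sketch (assumed toy instance): mu(alpha) and D_M = max_alpha mu(alpha)
# for L = 1, where alpha = (a, 1 - a) lives on a 1-dimensional simplex.
import numpy as np
from collections import Counter

P = np.array([[0.9, 0.1],    # P(y|x), a BSC
              [0.1, 0.9]])

x1, x2 = (0, 0, 1), (1, 1, 0)              # the L + 1 = 2 codewords
n = len(x1)
q = Counter(zip(x1, x2))                   # q(x): counts of each column x

def mu(a):
    """mu(alpha) = sum_x q(x) mu_x(alpha), with alpha = (a, 1 - a)."""
    total = 0.0
    for (u, v), count in q.items():
        s = np.sum(P[u] ** a * P[v] ** (1 - a))   # inner sum over outputs y
        total += (count / n) * (-np.log(s))       # mu_x(alpha)
    return total

grid = np.linspace(0.0, 1.0, 1001)
D_M = max(mu(a) for a in grid)
print(f"D_M = max_alpha mu(alpha) ≈ {D_M:.4f}")
```

For $L = 1$ and equal weights $\alpha = (1/2, 1/2)$, $\mu_x(\alpha)$ is the Bhattacharyya distance between the two column distributions; the maximum over $\alpha$ is their Chernoff distance.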

  7. Step 1: Probability of Error for $L + 1$ Codewords (continued)
     Berlekamp & Blinovsky's approach: an auxiliary distribution on $\mathcal{Y}^n$ that depends on $\nabla \mu(\alpha)$. It requires a careful analysis of the behavior of $\nabla \mu(\alpha)$ when $\mu(\alpha)$ is maximized on the border of the probability simplex of $\alpha$'s → easy for $L = 1$, complicated for $L > 1$.
     Our approach: method of types + a result by Shayevitz → much more straightforward to generalize to $L > 1$.

  8. Step 1: The Case $L = 1$
     Two messages, $\mathcal{M} = \{1, 2\}$.
     Method of types: output sequences $y$ with the same conditional type $V$ given $(x_1, x_2)$ have the same conditional probabilities $P_1(y)$ and $P_2(y)$.
     Decoding regions in terms of conditional types:
     $$Y_1 = \{\, y : P_1(y) > P_2(y) \,\} \quad\Longleftrightarrow\quad T_1 = \{\, V : D(V \| P_1) < D(V \| P_2) \,\},$$
     where
     $$D(V \| P_1) = \sum_{a \in \mathcal{X}} \sum_{b \in \mathcal{X}} q(a, b)\, D\big( V(\cdot \mid a, b) \,\|\, P(\cdot \mid a) \big).$$
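The equivalence follows from a standard method-of-types identity (a step the transcript leaves implicit): for any $y$ of conditional type $V$ given $(x_1, x_2)$,

```latex
% Grouping the n positions by which column (a, b) of (x_1, x_2) they hold:
\[
P_1(y) \;=\; \prod_{t=1}^{n} P(y_t \mid x_{1,t})
       \;=\; \exp\!\Big\{ -n \sum_{a,b} q(a,b)\,
       \big[ D\big(V(\cdot \mid a,b) \,\|\, P(\cdot \mid a)\big)
           + H\big(V(\cdot \mid a,b)\big) \big] \Big\}.
\]
% The entropy term depends on V alone, so it is identical in P_1(y) and
% P_2(y); comparing the two probabilities therefore reduces to comparing
% D(V || P_1) with D(V || P_2), which is exactly the region T_1 above.
```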

  9. Step 1: Binary Hypothesis Testing (Cover & Thomas)
     [Slide content (figure/derivation) not recoverable from the transcript.]
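The slide's body is lost, but the classical result it points to (Cover & Thomas, Ch. 11) is presumably the Chernoff bound for binary hypothesis testing: the best achievable error exponent for testing $P_1$ against $P_2$ is the Chernoff information,

```latex
\[
C(P_1, P_2) \;=\; \max_{0 \le \lambda \le 1}
   \Big( -\log \sum_{y} P_1(y)^{\lambda}\, P_2(y)^{1-\lambda} \Big),
\]
% attained by the tilted distribution
% Q_{\lambda^*} \propto P_1^{\lambda^*} P_2^{1-\lambda^*}, for which
% D(Q_{\lambda^*} || P_1) = D(Q_{\lambda^*} || P_2) = C(P_1, P_2).
```

Note that the objective being maximized is exactly the single-column function $\mu_x(\alpha)$ with $\alpha = (\lambda, 1-\lambda)$, which is how the binary case feeds into $D_M = \max_\alpha \mu(\alpha)$.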

  10. Step 1: The Case $L > 1$
      The same approach with $L + 1$ messages $\mathcal{M} = \{1, 2, \ldots, L + 1\}$; one message is left out from each list.
      Decoding regions:
      $$Y_m = \{\, y : P_m(y) > P_i(y) \text{ for some } i \,\}, \qquad T_m = \{\, V : D(V \| P_m) < D(V \| P_i) \text{ for some } i \,\},$$
      where
      $$D(V \| P_m) = \sum_{x \in \mathcal{X}^{L+1}} q(x)\, D\big( V(\cdot \mid x) \,\|\, P(\cdot \mid x_m) \big).$$
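A small sketch of these quantities (hypothetical channel, codewords, and conditional type $V$; none of it from the talk): it computes the joint column type $q(x)$ of $L + 1$ codewords and evaluates $D(V \| P_m)$ for each $m$.

```python
# Sketch (hypothetical example): joint column type q(x) of L + 1 codewords
# and D(V || P_m) = sum_x q(x) D(V(.|x) || P(.|x_m)) for a conditional type V.
import numpy as np
from collections import Counter

P = np.array([[0.9, 0.1],
              [0.1, 0.9]])

codewords = [(0, 0, 1), (0, 1, 1), (1, 1, 0)]   # L + 1 = 3 codewords, n = 3
n = len(codewords[0])
q = {x: c / n for x, c in Counter(zip(*codewords)).items()}

def kl(v, p):
    """D(v || p), with the convention 0 log 0 = 0."""
    v, p = np.asarray(v, float), np.asarray(p, float)
    mask = v > 0
    return float(np.sum(v[mask] * np.log(v[mask] / p[mask])))

def D(V, m):
    """D(V || P_m): q-weighted average over columns x of D(V(.|x) || P(.|x_m))."""
    return sum(qx * kl(V[x], P[x[m]]) for x, qx in q.items())

# A made-up conditional type V: one output distribution per column x.
V = {x: np.array([0.5, 0.5]) for x in q}
print([round(D(V, m), 4) for m in range(len(codewords))])
```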

  11. Step 1: The Exponent $D_M$
      As $n \to \infty$, the same dominant exponent governs all regions:
      $$D_M = \min_{Q \notin T_1} D(Q \| P_1) = \cdots = \min_{Q \notin T_{L+1}} D(Q \| P_{L+1}), \qquad P_{e,\max} \geq e^{-n (D_M + o(1))}.$$
      An alternative expression for $\mu(\alpha)$ due to Shayevitz (2010), combined with the minimax theorem, then yields
      $$D_M = \max_{\alpha} \mu(\alpha).$$

  12. Step 2: Probability of Error for $M \geq L + 1$ Codewords
      For a code $\mathcal{C}$ with $M \geq L + 1$ messages $\mathcal{M} = \{1, \ldots, M\}$, pick the $(L+1)$-subcode $\hat{\mathcal{C}} \subset \mathcal{C}$ with the smallest error exponent:
      $$D_{\min}(\mathcal{C}) = \min_{\hat{\mathcal{C}} \subset \mathcal{C}} \max_{\alpha} \mu_{\hat{\mathcal{C}}}(\alpha).$$
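A sketch of this selection step for $L = 1$ (toy code assumed; for $L = 1$ the maximization over $\alpha$ is one-dimensional, which keeps the example short): it scans all $(L+1)$-subcodes with itertools.combinations and returns $D_{\min}(\mathcal{C})$.

```python
# Sketch (assumed L = 1): scan all (L+1)-subcodes of C and pick the one
# with the smallest exponent max_alpha mu(alpha), i.e. D_min(C).
import itertools
import numpy as np
from collections import Counter

P = np.array([[0.9, 0.1],
              [0.1, 0.9]])

code = [(0, 0, 0), (1, 1, 1), (0, 1, 1), (1, 0, 1)]
n, L = len(code[0]), 1

def exponent(subcode):
    """max_alpha mu(alpha) for a 2-codeword subcode (its Chernoff distance)."""
    q = Counter(zip(*subcode))             # column counts of the subcode
    def mu(a):
        return sum((c / n) * -np.log(np.sum(P[u]**a * P[v]**(1 - a)))
                   for (u, v), c in q.items())
    return max(mu(a) for a in np.linspace(0.0, 1.0, 401))

D_min = min(exponent(sub) for sub in itertools.combinations(code, L + 1))
print(f"D_min(C) ≈ {D_min:.4f}")
```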
