  1. More on the Reliability Function of the BSC
     Alexander Barg (DIMACS, Rutgers University) and Andrew McGregor (University of Pennsylvania)
     ISIT 2003, Yokohama

  2. Some Definitions
     • Communicating over a binary symmetric channel with crossover probability p.
     • We use a length-n binary code C = {x_1, x_2, …, x_|C|} with rate ≥ R, i.e. |C| ≥ 2^(nR).
     • No matter what code we use there is the possibility of making errors: for a given rate of transmission, some degree of error is inherent to the channel itself.
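     A hedged illustration (not from the talk): a minimal Python sketch of this setup, with a BSC of crossover probability p and a toy code whose size is checked against the rate condition |C| ≥ 2^(nR). The specific code C below is an arbitrary example chosen only for illustration.

         import math
         import random

         def bsc(x, p):
             # Each bit of x is flipped independently with probability p.
             return [bit ^ (random.random() < p) for bit in x]

         n, p, R = 7, 0.01, 0.25                                # block length, crossover prob., target rate
         C = [[0]*7, [1,1,1,1,0,0,0], [0,0,0,1,1,1,1], [1]*7]   # toy code with |C| = 4
         print("actual rate :", math.log2(len(C)) / n)          # log2|C| / n, about 0.286
         print("|C| >= 2^(nR)?", len(C) >= 2 ** (n * R))        # 4 >= 2^1.75 ~ 3.36, so rate >= R
         print("sent    :", C[1])
         print("received:", bsc(C[1], p))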

  3. Making Decoding Errors
     • Maximum likelihood decoding: when we receive a word y, we guess that the sent codeword is the codeword that lies closest to it.
     • For each codeword x we define the Voronoi region:
         D(x) = { y ∈ {0,1}^n : d(x, y) < d(x_j, y) ∀ x_j ∈ C \ x }
     • Let P_e(x) be the probability that, when codeword x is transmitted, this decoding procedure leads to an error. Writing P_x for the distribution of the received word when x is sent, we therefore have
         P_e(x) = P_x({0,1}^n \ D(x))
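     A hedged, toy-sized sketch (not from the talk): for small n, P_e(x) can be computed exactly by enumerating every received word y and checking whether it falls outside D(x); ties are counted as errors, matching the strict inequality in the definition of D(x). The code C is again an arbitrary example.

         from itertools import product

         def hamming(a, b):
             return sum(u != v for u, v in zip(a, b))

         def P_e(x, C, n, p):
             # Sum P_x(y) = p^d(y,x) (1-p)^(n-d(y,x)) over every y outside D(x).
             total = 0.0
             for y in product((0, 1), repeat=n):
                 if any(hamming(y, xj) <= hamming(y, x) for xj in C if xj != x):
                     k = hamming(y, x)
                     total += p ** k * (1 - p) ** (n - k)
             return total

         n, p = 5, 0.01
         C = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 1, 1, 1)]
         for x in C:
             print(x, "P_e(x) =", P_e(x, C, n, p))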

  4. The Reliability Function
     • The average error probability of decoding is
         P_e(C) = (1/|C|) ∑_{x ∈ C} P_e(x)
     • We're interested in
         P_e(R) = min_{C : Rate(C) ≥ R} P_e(C)
     • We present a new lower bound for this quantity, or equivalently, an upper bound on the reliability function (error exponent) of the channel:
         E(R, p) = − lim_{n→∞} (1/n) log [ min_{C : Rate(C) ≥ R} P_e(C) ]
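     To make the normalisation in E(R, p) concrete, here is a hedged numeric sketch (a toy example of my own, not from the talk): for the two-word repetition code {0^n, 1^n} the average error probability has a closed form, and −(1/n) log2 P_e(C) is printed alongside it. The repetition code has rate 1/n, tending to 0; it is chosen only because its error probability is easy to compute exactly.

         from math import comb, log2

         def repetition_error(n, p):
             # ML decoding of {0^n, 1^n} is majority vote; by symmetry P_e(C) = P_e(0^n).
             # An error occurs when at least half of the n bits are flipped (ties count as errors).
             return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
                        for k in range(n + 1) if 2 * k >= n)

         p = 0.01
         for n in (11, 51, 101, 201):
             pe = repetition_error(n, p)
             print(n, pe, -log2(pe) / n)   # last column: the normalised exponent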

  5. Bounds on the Error Exponent E(R, p)
     • Combination of best lower bounds: [Gallager '63] and [Elias '56]
     • Combination of best upper bounds prior to 1999: [Elias '56], [Shannon et al. '67] and [McEliece et al. '77]
     • Litsyn's bound: [Litsyn '99]
     • Our new bound
     [Figure: the four bounds plotted as E(R, p) versus R, for p = 0.01.]
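     As a hedged point of reference (not the curves from the talk, which combine several bounds), the sketch below evaluates one classical ingredient of the lower-bound curve, Gallager's random-coding exponent for the BSC: E_r(R) = max_{0 ≤ ρ ≤ 1} [E_0(ρ) − ρR], with E_0(ρ) = ρ − (1 + ρ) log2( p^(1/(1+ρ)) + (1 − p)^(1/(1+ρ)) ). Rates and exponents are in bits per channel use.

         from math import log2

         def E0(rho, p):
             return rho - (1 + rho) * log2(p ** (1 / (1 + rho)) + (1 - p) ** (1 / (1 + rho)))

         def random_coding_exponent(R, p, steps=1000):
             # Coarse grid search for max over rho in [0, 1] of E_0(rho) - rho * R.
             return max(E0(i / steps, p) - (i / steps) * R for i in range(steps + 1))

         p = 0.01
         for R in (0.1, 0.3, 0.5, 0.7):
             print(R, random_coding_exponent(R, p))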

  6. Litsyn's Distance Distribution Bound
     • Define B_w(x) = |{ x_j ∈ C : d(x, x_j) = w }|.
     • Litsyn's distance distribution bound: for any code C of rate R there exists a w such that
         B_w(x) ≥ µ(R, w)
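     A small hedged sketch (toy code, not from the talk) of what B_w(x) counts; the function µ(R, w) itself is not reproduced here.

         from collections import Counter

         def hamming(a, b):
             return sum(u != v for u, v in zip(a, b))

         C = [(0, 0, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 1)]   # toy code, n = 4
         x = C[0]
         B = Counter(hamming(x, xj) for xj in C if xj != x)              # B[w] = B_w(x)
         print(dict(B))                                                  # {2: 2, 4: 1} for this code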

  7. Estimating P_e(x)
     • Starting from the definition:
         P_e(x) = P_x({0,1}^n \ D(x))

  8. Estimating P_e(x): the Voronoi region
     • Written out explicitly,
         P_e(x) = ∑ p^(d(y,x)) (1 − p)^(n − d(y,x)),
       the sum over all y ∈ {0,1}^n with d(y, x_j) ≤ d(y, x) for some x_j ∈ C \ x.

  9. Estimating P_e(x): using the distance distribution result
     • The distance distribution bound supplies a w at which x has many neighbours (B_w(x) ≥ µ(R, w)); from now on we focus on the neighbours x_j with d(x, x_j) = w.

  10. Estimating P_e(x): approximating the Voronoi region
     • Keeping only the received words that are at least as close to some distance-w neighbour of x gives a lower bound:
         P_e(x) ≥ ∑ p^(d(y,x)) (1 − p)^(n − d(y,x)),
       the sum over all y ∈ {0,1}^n with d(y, x_j) ≤ d(y, x) for some x_j ∈ C with d(x, x_j) = w.

  11. Estimating P_e(x): introducing the X_j
     • For each neighbour x_j define a set X_j such that y ∈ X_j ⇒ d(y, x_j) ≤ d(y, x).
     • Then
         P_e(x) ≥ P_x( ∪_{j : d(x, x_j) = w} X_j )

  12. Estimating P_e(x): "pruning" the X_j
     • For each neighbour x_j assign a priority n_j at random, and let
         Y_j = X_j \ ∪_{k : n_k > n_j} X_k
     • The Y_j are disjoint subsets of ∪_j X_j, so
         P_e(x) ≥ ∑_{j : d(x, x_j) = w} P_x(Y_j)
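     A hedged toy verification (my own construction, not from the talk) of the last two steps: take X_j = {y : d(y, x_j) ≤ d(y, x)} for each distance-w neighbour x_j, prune with random priorities, and check numerically that P_e(x) ≥ P_x(∪_j X_j) ≥ ∑_j P_x(Y_j).

         import random
         from itertools import product

         def hamming(a, b):
             return sum(u != v for u, v in zip(a, b))

         def prob(event, x, p, n):
             # P_x(event): probability that the received word lies in 'event' given x was sent.
             return sum(p ** hamming(y, x) * (1 - p) ** (n - hamming(y, x)) for y in event)

         n, p, w = 5, 0.05, 3
         words = list(product((0, 1), repeat=n))
         C = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 1, 1, 1)]
         x = C[0]

         neighbours = [xj for xj in C if hamming(x, xj) == w]        # neighbours at distance w
         X = {xj: {y for y in words if hamming(y, xj) <= hamming(y, x)} for xj in neighbours}
         priority = {xj: random.random() for xj in neighbours}       # the random priorities n_j
         Y = {xj: X[xj] - set().union(*(X[xk] for xk in neighbours
                                        if priority[xk] > priority[xj]))
              for xj in neighbours}

         error_region = {y for y in words
                         if any(hamming(y, xj) <= hamming(y, x) for xj in C if xj != x)}
         print(prob(error_region, x, p, n),                          # P_e(x)
               ">=", prob(set().union(*X.values()), x, p, n),        # P_x(union of the X_j)
               ">=", sum(prob(Y[xj], x, p, n) for xj in neighbours)) # sum of P_x(Y_j)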

  13. Estimating P_e(x): applying the reverse union bound
     • The reverse union bound:
         P_x(Y_j) = P_x( X_j \ ∪_{k : n_k > n_j} X_k ) ≥ P_x(X_j) ( 1 − ∑_{k : n_k > n_j} P_x(X_k | X_j) )
     • Giving us the final shape of our bound:
         P_e(x) ≥ ∑_{j : d(x, x_j) = w} P_x(X_j) ( 1 − ∑_{k : n_k > n_j} P_x(X_k | X_j) )
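     The reverse union bound is the standard union-bound manipulation; a one-line justification (not spelled out on the slide), in LaTeX notation:

         P_x\Big(X_j \setminus \bigcup_{k: n_k > n_j} X_k\Big)
           = P_x(X_j) - P_x\Big(X_j \cap \bigcup_{k: n_k > n_j} X_k\Big)
           \ge P_x(X_j) - \sum_{k: n_k > n_j} P_x(X_j \cap X_k)
           = P_x(X_j)\Big(1 - \sum_{k: n_k > n_j} P_x(X_k \mid X_j)\Big)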

  14. Now look across the entire code. Let X_ij and Y_ij be the corresponding sets for the neighbourhood of codeword x_i, and write P_i for P_{x_i}.
     • Therefore we have
         P_e(x_i) ≥ ∑_{j : d(x_i, x_j) = w} P_i(Y_ij)
       and
         P_i(Y_ij) ≥ P_i(X_ij) (1 − K_ij),
       where the amount of "pruning" is
         K_ij = ∑_{k : n_ik > n_ij} P_i(X_ik | X_ij)
     • What we do now depends on the values of the K_ij…

  15. Consider the set of codewords
         S = { x_j : K_ij > 1/2 for some i }
     • Either this is a "substantially" sized subcode or it isn't.
     • That is, either we had to do a lot of pruning or we didn't.

  16. If S was not substantially sized…
     • Just remove the codewords in S from the code!
     • Then in the remaining code we have, for all Y_ij,
         P_i(Y_ij) ≥ P_i(X_ij) / 2
     • Hence, modulo constant factors, the average error probability satisfies
         P_e(C, p) ≥ A(w) µ(w),
       where A(w) = P_i(X_ij).
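     For completeness, a hedged spelling-out (my reading of the slides, in LaTeX notation) of how the pieces combine for a codeword x_i of the pruned code, using the slide's identification A(w) = P_i(X_ij) and the distance distribution bound B_w(x_i) ≥ µ(w); the factor 1/2 is one of the constant factors the slide absorbs:

         P_e(x_i) \ge \sum_{j:\, d(x_i, x_j) = w} P_i(Y_{ij})
                  \ge \tfrac{1}{2} \sum_{j:\, d(x_i, x_j) = w} P_i(X_{ij})
                  =   \tfrac{1}{2}\, B_w(x_i)\, A(w)
                  \ge \tfrac{1}{2}\, \mu(w)\, A(w)

     Averaging over the codewords x_i of the pruned code then gives P_e(C, p) ≥ A(w) µ(w) / 2.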
