  1. 15-853: Algorithms in the Real World • Reed Solomon Codes (Cont.) • Concatenation of codes • Start with LDPC codes Announcements: 1. No class this Thursday, Sept. 19. Rescheduled to Friday, Sept. 27. 2. Homework 1 on ECC will be released on Tuesday, Sept. 24. Submission deadline: Oct. 4, noon. 15-853 Page 1

  2. Recap: Block Codes Each message and codeword is of fixed size. Pipeline: message (m) → coder → codeword (c) → noisy channel → codeword' (c') → decoder → message (or error). Σ = codeword alphabet, k = |m|, n = |c|, q = |Σ|. C = "code" = set of codewords, C ⊆ Σ^n. Δ(x, y) = number of positions s.t. x_i ≠ y_i. d = min{ Δ(x, y) : x, y ∈ C, x ≠ y }. Code described as: (n, k, d)_q

  3. Recap: Linear Codes If Σ is a field, then Σ^n is a vector space. Definition: C is a linear code if it is a linear subspace of Σ^n of dimension k. This means there is a set of k independent vectors v_i ∈ Σ^n (1 ≤ i ≤ k) that span the subspace, i.e., every codeword can be written as c = a_1 v_1 + a_2 v_2 + … + a_k v_k, where a_i ∈ Σ. "Linear": a linear combination of two codewords is a codeword. Minimum distance = weight of the least-weight nonzero codeword.

  4. Recap: Generator and Parity Check Matrices Generator Matrix: a k x n matrix G such that C = { xG | x ∈ Σ^k }, made by stacking the spanning vectors as rows. Parity Check Matrix: an (n – k) x n matrix H such that C = { y ∈ Σ^n | Hy^T = 0 }. (Codewords are the null space of H.) These always exist for linear codes.
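As a small worked sketch (my own example, not from the slides): for the (3, 2) single-parity-check code over GF(2), the two descriptions above, row space of G and null space of H, pick out the same set of codewords.

```python
from itertools import product

# Illustrative example: the (3, 2) single-parity-check code over GF(2).
# Generator matrix G (k x n): rows are the spanning vectors.
G = [[1, 0, 1],
     [0, 1, 1]]
# Parity-check matrix H ((n - k) x n): codewords are its null space mod 2.
H = [[1, 1, 1]]

def encode(x, G):
    """Compute xG over GF(2)."""
    n = len(G[0])
    return tuple(sum(x[j] * G[j][i] for j in range(len(x))) % 2
                 for i in range(n))

# Enumerate C = { xG : x in GF(2)^k } and check H c^T = 0 for each codeword.
codewords = sorted({encode(x, G) for x in product([0, 1], repeat=2)})
for c in codewords:
    assert all(sum(h * ci for h, ci in zip(row, c)) % 2 == 0 for row in H)
```

Every codeword has even parity, exactly what the single row of H enforces.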

  5. Recap: Singleton Bound and MDS Codes Theorem: for every (n, k, d)_q code, n ≥ k + d – 1. Codes that meet the Singleton bound with equality are called Maximum Distance Separable (MDS). Only two binary MDS codes! 1. Repetition codes 2. Single-parity-check codes Need to go beyond the binary alphabet! (We will need some number theory for this.)

  6. Recap: Finite Fields • Size (or order): prime or power of prime • Power-of-prime finite fields: • Constructed using polynomials • Mod out by an irreducible polynomial • Correspondence between polynomials and vector representation

  7. Recap: GF(2^n) GF(2^n) = set of polynomials in GF(2)[x] modulo an irreducible polynomial p(x) ∈ GF(2)[x] of degree n. Elements are all polynomials in GF(2)[x] of degree ≤ n − 1. Has 2^n elements. Natural correspondence with bit strings in {0, 1}^n. Elements of GF(2^8) can be represented as a byte, one bit for each term. E.g., x^6 + x^4 + x + 1 = 01010011
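The byte representation makes GF(2^8) arithmetic easy to sketch in code. The snippet below is an illustration of the idea, not course code; it assumes one particular irreducible polynomial, x^8 + x^4 + x^3 + x + 1 (0x11B, the one AES uses), since the slides do not fix a choice of p(x).

```python
# A minimal sketch of GF(2^8) arithmetic on the byte representation.
# Bit i of a byte is the coefficient of x^i. Addition is XOR.
# Assumption: reduce modulo p(x) = x^8 + x^4 + x^3 + x + 1 (0x11B).

def gf256_mul(a: int, b: int) -> int:
    """Multiply two GF(2^8) elements given as bytes."""
    result = 0
    while b:
        if b & 1:                 # lowest term of b present: add (XOR) a
            result ^= a
        a <<= 1                   # multiply a by x
        if a & 0x100:             # degree reached 8: reduce mod p(x)
            a ^= 0x11B
        b >>= 1
    return result

# x * x^7 = x^8 = x^4 + x^3 + x + 1 (mod p)
assert gf256_mul(0b10, 0b10000000) == 0b00011011
# Addition is coefficient-wise XOR: (x^6 + x^4 + x + 1) + (x + 1) = x^6 + x^4
assert 0b01010011 ^ 0b00000011 == 0b01010000
```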

  8. RS Code: Polynomials Viewpoint Message: [a_{k-1}, …, a_1, a_0] where a_i ∈ GF(q^r). Consider the polynomial of degree k − 1: P(x) = a_{k-1} x^{k-1} + … + a_1 x + a_0. RS codeword: [P(1), P(2), …, P(n)]. To make the i in P(i) distinct, need field size q^r ≥ n. That is, we need a sufficiently large field for the desired codeword length.
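The encoding just described (evaluate P at the points 1..n) can be sketched directly. This is an illustrative sketch, not the course's reference implementation: it assumes the prime field GF(929) (an arbitrary choice, so that field arithmetic is plain modular arithmetic) in place of the GF(q^r) of the slide.

```python
Q = 929  # an illustrative prime, so GF(Q) is just the integers mod Q

def rs_encode(message, n, q=Q):
    """Encode [a_{k-1}, ..., a_1, a_0] as [P(1), P(2), ..., P(n)] over GF(q)."""
    assert n < q  # the evaluation points 1..n must be distinct field elements
    def P(x):
        acc = 0
        for coeff in message:     # Horner's rule, highest coefficient first
            acc = (acc * x + coeff) % q
        return acc
    return [P(i) for i in range(1, n + 1)]

# P(x) = 3x^2 + 2, evaluated at x = 1..7
codeword = rs_encode([3, 0, 2], n=7)   # [5, 14, 29, 50, 77, 110, 149]
```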

  9. Recap: Minimum Distance of RS Code Theorem: RS codes have minimum distance d = n − k + 1. Proof: 1. RS is a linear code: if we add two codewords corresponding to P(x) and Q(x), we get the codeword corresponding to the polynomial P(x) + Q(x); similarly for any linear combination. 2. So look at the least-weight nonzero codeword. It is the evaluation of a nonzero polynomial of degree at most k − 1 at n points, so it can be zero on at most k − 1 points, hence nonzero on at least n − (k − 1) points. This means distance at least n − k + 1. 3. Apply the Singleton bound to get d ≤ n − k + 1. Meets the Singleton bound: RS codes are MDS.

  10. Recap: Generator Matrix of RS Code Q: What is the generator matrix? <board> A "Vandermonde matrix": row j holds the evaluations of x^j at the points 1, …, n. Special property of Vandermonde matrices: any k of the columns are linearly independent (every k x k submatrix is full rank). Vandermonde matrices are very useful in constructing codes.
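A sketch of this generator matrix, under the same illustrative assumptions as before (prime field GF(929), evaluation points 1..n); note that here the message coefficients are listed lowest degree first, the reverse of the slide-8 convention.

```python
Q = 929  # illustrative prime modulus

def vandermonde(n, k, q=Q):
    """k x n Vandermonde matrix: row j holds the evaluations of x^j at 1..n."""
    return [[pow(i, j, q) for i in range(1, n + 1)] for j in range(k)]

G = vandermonde(n=5, k=3)

def encode(x, G, q=Q):
    """Encoding is the vector-matrix product xG, matching C = { xG }."""
    n = len(G[0])
    return [sum(x[j] * G[j][i] for j in range(len(x))) % q for i in range(n)]

# Message [a_0, a_1, a_2] = [2, 0, 3], i.e. P(x) = 2 + 3x^2, at x = 1..5.
codeword = encode([2, 0, 3], G)        # [5, 14, 29, 50, 77]
```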

  11. Next we move on to RS decoding.

  12. Polynomials and Their Degrees Fundamental theorem (over any field): any non-zero polynomial of degree k has at most k roots. Corollary 1: If two degree-k polynomials P, Q agree on k + 1 locations (i.e., P(x_i) = Q(x_i) for distinct x_0, x_1, …, x_k), then P = Q. (P − Q has degree at most k but k + 1 roots, so it is the zero polynomial.) Corollary 2: Given any k + 1 points (x_i, y_i) with distinct x_i, there is at most one degree-k polynomial P that has P(x_i) = y_i for all these i.

  13. Polynomials and Their Degrees Corollary 2: Given any k + 1 points (x_i, y_i) with distinct x_i, there is at most one degree-k polynomial P that has P(x_i) = y_i for all these i. Theorem: Given any k + 1 such points, there is exactly one degree-k polynomial P with P(x_i) = y_i for all these i. Proof: existence by, e.g., Lagrange interpolation; uniqueness by Corollary 2.
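Lagrange interpolation is short enough to sketch in full. This is an illustrative implementation under the same assumption as the earlier sketches (prime field GF(929), with inverses computed via Fermat's little theorem); it returns the coefficients of the unique interpolating polynomial, lowest degree first.

```python
Q = 929  # illustrative prime; inverses exist because Q is prime

def polymul_linear(poly, c, q):
    """Multiply poly (coefficients low-to-high) by (x - c) over GF(q)."""
    out = [0] * (len(poly) + 1)
    for d, coeff in enumerate(poly):
        out[d] = (out[d] - c * coeff) % q
        out[d + 1] = (out[d + 1] + coeff) % q
    return out

def interpolate(points, q=Q):
    """Return coefficients (low-to-high) of the unique polynomial of degree
    < len(points) passing through the given (x_i, y_i) pairs."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        # Basis polynomial L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j),
        # which is 1 at x_i and 0 at every other x_j.
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = polymul_linear(basis, xj, q)
                denom = denom * (xi - xj) % q
        scale = yi * pow(denom, q - 2, q) % q   # Fermat inverse of denom
        for d in range(len(basis)):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % q
    return coeffs
```

For example, the three points (1, 5), (2, 14), (3, 29) recover P(x) = 3x^2 + 2, i.e. coefficients [2, 0, 3].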

  14. Decoding: Recovering Erasures Recovering from at most (d − 1) erasures. Received codeword: [P(1), *, …, *, P(n)], with at most (d − 1) symbols erased. Ideas? 1. At most n − k symbols are erased (for RS, d − 1 = n − k). 2. So we have P(i) for at least k evaluation points. 3. Interpolate to recover the polynomial. Matrix viewpoint: ideas?
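Steps 1–3 can be sketched as follows. This is an illustrative sketch under the same assumptions as before (prime field GF(929), codeword symbol i is P(i)); erased symbols are marked with None, and the polynomial is re-evaluated at the erased points using Lagrange's formula directly.

```python
Q = 929  # illustrative prime field

def recover_erasures(received, k, q=Q):
    """received[i] is P(i+1), or None if erased. With >= k survivors, the
    degree-(k-1) polynomial P is determined; re-evaluate it at the gaps."""
    known = [(i + 1, v) for i, v in enumerate(received) if v is not None][:k]
    assert len(known) == k, "need at least k unerased symbols"
    def P(x):
        # Lagrange evaluation of the unique degree < k poly through `known`
        total = 0
        for xi, yi in known:
            num, den = 1, 1
            for xj, _ in known:
                if xj != xi:
                    num = num * (x - xj) % q
                    den = den * (xi - xj) % q
            total = (total + yi * num * pow(den, q - 2, q)) % q
        return total
    return [v if v is not None else P(i + 1) for i, v in enumerate(received)]

# Codeword of P(x) = 3x^2 + 2 at x = 1..7, with three symbols erased (k = 3):
codeword = [5, 14, 29, 50, 77, 110, 149]
assert recover_erasures([5, None, 29, None, 77, 110, None], 3) == codeword
```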

  15. RS Code An (n, k, 2s + 1) code, laid out as k message symbols plus 2s check symbols (n = k + 2s): Can detect 2s errors. Can correct s errors. More generally, can correct a erasures and b errors if a + 2b ≤ 2s.

  16. Decoding: Correcting Errors Correcting s errors (d = 2s + 1). Naïve algorithm: – Find k + s symbols that agree on a degree-(k − 1) polynomial P(x). • There must exist such a set: originally k + 2s symbols agreed and at most s are in error (i.e., "guess" the n − s uncorrupted locations). – Can we go wrong? Are there k + s symbols that agree on a wrong degree-(k − 1) polynomial P'(x)? No. • Any k of the symbols define P'(x). • Since at most s of the k + s symbols are in error, at least k of them are uncorrupted; these k agree with both P'(x) and P(x), so P'(x) = P(x).

  17. Decoding: Correcting Errors Correcting s errors (d = 2s + 1). Naïve algorithm: – Find k + s symbols that agree on a degree-(k − 1) polynomial P(x). • There must exist such a set: originally k + 2s symbols agreed and at most s are in error (i.e., "guess" the n − s uncorrupted locations). But this suggests a brute-force approach, which is very inefficient: "guess" = "enumerate", so the time is (n choose s) ~ n^s subsets. More efficient algorithms exist.
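The "guess the error locations" idea can be sketched directly, and the code makes the (n choose s) blow-up visible. This is an illustrative toy under the same assumptions as the earlier sketches (prime field GF(929), symbol i of the codeword is P(i)); eval_through is a helper of my own, not from the slides.

```python
from itertools import combinations

Q = 929  # illustrative prime field

def eval_through(points, x, q=Q):
    """Evaluate, at x, the unique poly of degree < len(points) through points."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % q
                den = den * (xi - xj) % q
        total = (total + yi * num * pow(den, q - 2, q)) % q
    return total

def brute_force_decode(received, k, s, q=Q):
    """Enumerate all (n choose s) candidate error sets; for each, check that
    the surviving n - s symbols agree on one degree-(k-1) polynomial."""
    n = len(received)
    for errs in combinations(range(n), s):
        clean = [(i + 1, received[i]) for i in range(n) if i not in errs]
        base = clean[:k]                    # k points pin down a candidate P
        if all(eval_through(base, x, q) == y for x, y in clean):
            return [eval_through(base, i + 1, q) for i in range(n)]
```

On a length-7 codeword of P(x) = 3x^2 + 2 with two corrupted symbols (k = 3, s = 2), the loop finds the true error set and returns the clean codeword.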

  18. The Berlekamp-Welch Algorithm Say we sent c_i = P(i) for i = 1..n, and received c'_i, where c'_i = c_i for all but s locations. Let S be the set of these s error locations. Suppose we magically knew an "error-locator" polynomial E(x) such that E(x) = 0 for all x in S, and E(x) has degree s. Does such a thing exist? Sure: E(x) = ∏_{b ∈ S} (x − b).

  19. The Berlekamp-Welch Algorithm Say we sent c_i = P(i) for i = 1..n, and received c'_i, where c'_i = c_i for all but s locations. Let S be the set of these s error locations. Suppose we magically knew the "error-locator" polynomial E(x), with E(x) = 0 for all x in S and degree s. Then we know that P(i) · E(i) = c'_i · E(i) for all i in 1..n. (At the error locations both sides are 0; everywhere else c'_i = c_i = P(i).)

  20. The Berlekamp-Welch Algorithm Know that P(i) · E(i) = c'_i · E(i) for all i in 1..n. Want to solve for the polynomials P(x) (of degree k − 1) and E(x) (of degree s). How? First, rewrite as: R(i) = c'_i · E(i) for all i in 1..n, for polynomials R of degree k + s − 1 and E of degree s. R has k + s "degrees of freedom"; E has s + 1. We have n equalities. So perhaps we can get a solution if (k + s) + (s + 1) ≥ n. Then return R(x) / E(x).

  21. The Current Situation We know that R(i) = c'_i · E(i) for all i in 1..n. Suppose R(x) = Σ_{j=0..k+s−1} r_j x^j: k + s unknowns (the r_j values). And E(x) = Σ_{j=0..s} e_j x^j: s + 1 unknowns (the e_j values). How to solve for R(x), E(x)?

  22. The Linear System Linear equalities, one per evaluation point:
  r_0 + r_1 · 1 + r_2 · 1^2 + … + r_{k+s−1} · 1^{k+s−1} = c'_1 · (e_0 + e_1 · 1 + … + e_s · 1^s)
  r_0 + r_1 · 2 + r_2 · 2^2 + … + r_{k+s−1} · 2^{k+s−1} = c'_2 · (e_0 + e_1 · 2 + … + e_s · 2^s)
  …
  r_0 + r_1 · i + r_2 · i^2 + … + r_{k+s−1} · i^{k+s−1} = c'_i · (e_0 + e_1 · i + … + e_s · i^s)
  …
  r_0 + r_1 · n + r_2 · n^2 + … + r_{k+s−1} · n^{k+s−1} = c'_n · (e_0 + e_1 · n + … + e_s · n^s)
  • Linearly independent equalities. Why? (Vandermonde structure.) • Under-constrained. Why? n equations, (k + s) + (s + 1) = n + 1 variables. • Can have multiple solutions. Problem? • <board>
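The whole Berlekamp-Welch pipeline (set up this linear system, find any nontrivial solution, then divide R by E to recover P) can be sketched end-to-end. This is an illustrative sketch, not a production decoder: it assumes the prime field GF(929) as in the earlier sketches, uses Gauss-Jordan elimination to find a nullspace vector, and glosses over degenerate inputs with more than s errors.

```python
Q = 929  # illustrative prime field

def berlekamp_welch(received, k, s, q=Q):
    """Solve R(i) = c'_i * E(i) for R (deg <= k+s-1) and E (deg <= s),
    then return P = R / E as coefficients, lowest degree first.
    received[i] = c'_{i+1}; unknowns r_0..r_{k+s-1}, e_0..e_s."""
    n = len(received)
    m = k + 2 * s + 1                      # number of unknowns (= n + 1)
    # Row for point x = i+1 encodes: sum_d r_d x^d - c'_x * sum_d e_d x^d = 0.
    rows = [[pow(i + 1, d, q) for d in range(k + s)] +
            [(-c * pow(i + 1, d, q)) % q for d in range(s + 1)]
            for i, c in enumerate(received)]
    # Gauss-Jordan elimination over GF(q). The system is homogeneous with
    # one more unknown than equations, so a nontrivial solution exists.
    pivots, r = [], 0
    for col in range(m):
        piv = next((i for i in range(r, n) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][col], q - 2, q)
        rows[r] = [v * inv % q for v in rows[r]]
        for i in range(n):
            if i != r and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % q for a, b in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
    # Set one free variable to 1, the rest to 0, and read off the pivots.
    free = next(c for c in range(m) if c not in pivots)
    sol = [0] * m
    sol[free] = 1
    for row, col in zip(rows, pivots):
        sol[col] = -row[free] % q
    R, E = sol[:k + s], sol[k + s:]
    # Long division R / E: exact whenever at most s symbols are in error.
    def degree(p):
        return max((d for d, c in enumerate(p) if c), default=-1)
    P = [0] * k
    dE = degree(E)
    inv_lead = pow(E[dE], q - 2, q)
    dR = degree(R)
    while dR >= dE:
        coef = R[dR] * inv_lead % q
        P[dR - dE] = coef
        for d in range(dE + 1):
            R[dR - dE + d] = (R[dR - dE + d] - coef * E[d]) % q
        dR = degree(R)
    return P
```

Even though the system is under-constrained, any nontrivial solution works: if (R1, E1) and (R2, E2) both satisfy the n equalities, then R1·E2 = R2·E1 at n points and hence as polynomials, so R/E is the same for all solutions.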

  23. RS and "Burst" Bit Errors Let's compare to Hamming codes.
                                            code bits   check bits
  RS (255, 253, 3)_256                      2040        16
  Hamming (2^11 − 1, 2^11 − 11 − 1, 3)_2    2047        11
  They can both correct 1 error, but not 2 random errors. – The Hamming code does this with fewer check bits. However, RS can fix 8 contiguous bit errors in one byte. – Much better than the lower bound for 8 arbitrary bit errors: log(1 + (n choose 1) + … + (n choose 8)) ≥ 8 log((n − 7)/8) ≈ 88 check bits.

  24. CONCATENATION OF CODES
