  1. Physical layer — Error detection, correction (Martin Heusse)

  2. Error detection
  • Idea: add redundant information to the transmitted data to check for errors, or even correct them
  • Also used for data storage and memory (am I reading exactly what I stored?)
  • Parity bit: how many errors can you detect with a single parity bit?
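The parity scheme can be sketched in a few lines of Python (illustrative helpers, not from the slides):

```python
def add_parity(bits):
    """Append a bit so that the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """A received word passes the check iff its number of 1s is even."""
    return sum(word) % 2 == 0

w = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
parity_ok(w)                   # True
w[0] ^= 1                      # flip one bit on the channel
parity_ok(w)                   # False: the error is detected
```

Any odd number of flipped bits is caught; any even number slips through unnoticed, which answers the question on the slide.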

  3. Hamming distance
  • Data is read in blocks of m bits; we add r bits to form code words n = m + r bits long
  • Given two code words, the number of differing bits is their Hamming distance
  • h = min Dist_h(x1, x2), x1, x2 ∈ M, where M is the set of 2^m valid code words
  • To detect x errors, we need h ≥ x + 1
  • Parity bit: h = 2, so we can detect 1 error (and sometimes more)
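These definitions can be checked directly; the code M below is a hypothetical example (all 3-bit messages with an even-parity bit appended), not taken from the slides:

```python
from itertools import combinations

def hamming_distance(x1, x2):
    """Number of bit positions in which two equal-length code words differ."""
    return sum(a != b for a, b in zip(x1, x2))

def code_distance(M):
    """h = minimum pairwise Hamming distance over the code M."""
    return min(hamming_distance(a, b) for a, b in combinations(M, 2))

# The 2^m valid code words for m = 3, with an even-parity bit appended:
M = []
for v in range(8):
    bits = [(v >> i) & 1 for i in range(3)]
    M.append(bits + [sum(bits) % 2])

code_distance(M)   # 2, so with h >= x + 1 we can detect x = 1 error
```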

  4. Checksum
  • h = 2, but it can detect k errors in a row (here k = 8, summing n = 3 data words):

        0 0 0 1 1 1 0 1
      + 0 0 0 1 0 0 0 1
      + 0 0 1 1 0 0 0 0
      = 0 1 0 1 1 1 1 0
      1's complement:
        1 0 1 0 0 0 0 1   (the checksum)
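The column addition above is a 1's-complement sum with end-around carry, as used by the Internet checksum; a sketch reproducing the slide's numbers:

```python
def ones_complement_sum(words, width=8):
    """Add words, folding any carry out of `width` bits back in (end-around carry)."""
    mask = (1 << width) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> width)  # end-around carry
    return total

def checksum(words, width=8):
    """1's complement of the 1's-complement sum."""
    return ones_complement_sum(words, width) ^ ((1 << width) - 1)

data = [0b00011101, 0b00010001, 0b00110000]
bin(checksum(data))   # '0b10100001', matching the slide
```

At the receiver, summing the data words together with the checksum yields all ones when no error is detected.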

  5. CRC
  • A block of bits is the representation of a polynomial with coefficients in {0, 1}. Example: 1 0 1 0 1 is x^4 + x^2 + 1
  • No carry, so subtraction ⇔ addition ⇔ XOR
  • T = Quotient · G + Remainder, so (T + Remainder) mod G = 0
  • [Figure: T is built from the data followed by n zero bits; dividing by G yields a quotient (discarded) and the remainder]

  6. CRC (cont.)
  • [Figure: worked mod-2 long division of the padded data by G; the remainder here is 0 1 1]
  • With n = 16, 1 or 2 errors are detected, as well as all odd numbers of errors and all error bursts shorter than 16 bits
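The mod-2 long division itself is short to write out. The data and generator below are an illustrative textbook pair, not the values from the slide's figure:

```python
def crc_remainder(data, generator):
    """Mod-2 (XOR) long division of data·x^r by the generator G."""
    r = len(generator) - 1
    bits = list(data) + [0] * r            # append r zero bits
    for i in range(len(data)):
        if bits[i]:                        # leading 1: subtract (= XOR) G
            for j, g in enumerate(generator):
                bits[i + j] ^= g
    return bits[-r:]

# Example: data 1101011011, G = x^4 + x + 1 = 10011
crc_remainder([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], [1, 0, 0, 1, 1])   # [1, 1, 1, 0]
```

The sender transmits the data followed by this remainder; the receiver divides the whole frame by G and accepts it if the remainder is zero.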

  7. Error correction
  • h must be large enough that a word received with one error is still closer to the original code word than to any other code word
  • → (n + 1) · 2^m ≤ 2^n, or: (m + r + 1) ≤ 2^r (for each of the 2^m code words, there are n possible words with 1 error, plus the code word itself)
  • Example: for m = 4, we need r ≥ 3 (or 4 check bits for messages of 8 bits)
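The bound (m + r + 1) ≤ 2^r is easy to evaluate numerically (a small helper of my own naming):

```python
def min_redundancy(m):
    """Smallest r satisfying m + r + 1 <= 2**r (single-error correction)."""
    r = 1
    while m + r + 1 > 2 ** r:
        r += 1
    return r

min_redundancy(4)   # 3, as on the slide
min_redundancy(8)   # 4 check bits for 8-bit messages
```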

  8. Hamming error-correcting code
  • Redundancy bits are inserted at all power-of-2 positions (1, 2, 4, …)
  • Upon reception, the sum of the positions of the failing check bits gives the position of the error
  • Example:
    bit 1 is chosen so that positions (1, 3, 5, 7, 9, 11, …) have even parity
    bit 2 is chosen so that positions (2, 3, 6, 7, 10, 11, …) have even parity
    bit 4 is chosen so that positions (4, 5, 6, 7, 12, 13, …) have even parity
    If checks 1, 2 and 4 all fail, the error is at bit 7 (7 = 1 + 2 + 4)
  • [Figure: a 12-bit word with check bits r at positions 1, 2, 4 and 8]
  • This is the minimum number of redundancy bits for error correction
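A sketch of this encoding and correction rule, assuming even parity and 8 data bits (so the check bits sit at positions 1, 2, 4 and 8 of a 12-bit word):

```python
def hamming_encode(data):
    """Place data bits at non-power-of-2 positions; even-parity check
    bits go at positions 1, 2, 4, 8, ... (1-indexed)."""
    word, placed, pos, it = {}, 0, 1, iter(data)
    while placed < len(data):
        if pos & (pos - 1):        # not a power of two: data position
            word[pos] = next(it)
            placed += 1
        else:
            word[pos] = 0          # check bit, filled in below
        pos += 1
    n = pos - 1
    for p in (1 << i for i in range(n.bit_length()) if (1 << i) <= n):
        # parity over every position whose index has bit p set
        word[p] = sum(word[q] for q in range(1, n + 1) if q & p) % 2
    return [word[q] for q in range(1, n + 1)]

def hamming_correct(word):
    """The sum of the failing check positions is the error position
    (0 means no error); flipping that bit corrects the word."""
    n, syndrome, p = len(word), 0, 1
    while p <= n:
        if sum(word[q - 1] for q in range(1, n + 1) if q & p) % 2:
            syndrome += p
        p <<= 1
    fixed = list(word)
    if syndrome:
        fixed[syndrome - 1] ^= 1
    return fixed, syndrome

cw = hamming_encode([1, 0, 1, 1, 0, 1, 0, 1])
rcv = list(cw)
rcv[6] ^= 1                          # corrupt bit 7
fixed, err = hamming_correct(rcv)    # err == 7, fixed == cw
```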

  9. Convolutional codes
  • [Figure: the input enters k bits at a time into a shift register of k × L bits; n mod-2 adders each tap a subset of the register and produce one output bit, so n bits are output at a time]
  • (k − 1) × L bits of memory
  • Viterbi (trellis) decoding
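A minimal encoder in the spirit of the figure, with k = 1 input bit per step and n = 2 adders; the generator taps 111 and 101 are a common textbook choice, not taken from the slide:

```python
def conv_encode(bits, polys=(0b111, 0b101), K=3):
    """Rate-1/2 convolutional encoder with constraint length K.
    Each input bit is shifted into a register; each output bit is the
    mod-2 sum (the '+' adders on the slide) of one set of register taps."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # shift the bit in
        for g in polys:
            out.append(bin(state & g).count("1") % 2)  # XOR of tapped bits
    return out

conv_encode([1, 0, 1, 1])   # [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit thus produces n = 2 output bits, and the decoder recovers the input by tracking the register states through the trellis of the next slide.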

  10. Trellis decoding
  • [Figure: trellis diagram. Columns are time steps; nodes are the encoder memory states 00, 01, 10, 11. From each state, one branch corresponds to input 0 and one to input 1, labelled with the outputs output(i), output(i+1) emitted on that transition]

  11. Interleaving
  • 4 code words (each including its redundancy bits):
    a1 a2 a3 a4
    b1 b2 b3 b4
    c1 c2 c3 c4
    d1 d2 d3 d4
  • What is sent on the channel: a1 b1 c1 d1 a2 b2 c2 d2 a3 b3 c3 d3 a4 b4 c4 d4
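Interleaving is just writing the code words as rows and sending the columns; a sketch with the slide's symbols:

```python
def interleave(codewords):
    """Rows are code words; the channel sees them column after column."""
    return [w[i] for i in range(len(codewords[0])) for w in codewords]

def deinterleave(stream, n_words):
    """Undo it at the receiver: every n_words-th symbol belongs to one word."""
    return [stream[i::n_words] for i in range(n_words)]

words = [["a1", "a2", "a3", "a4"], ["b1", "b2", "b3", "b4"],
         ["c1", "c2", "c3", "c4"], ["d1", "d2", "d3", "d4"]]
interleave(words)[:8]   # ['a1', 'b1', 'c1', 'd1', 'a2', 'b2', 'c2', 'd2']
```

The payoff: a burst of 4 consecutive channel errors hits each of the 4 code words in only one position, which the per-word redundancy can then correct.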
