
Communicating with Errors

Someone sends you a message:

"As mmbrof teGreek commniand art of n oft oranzins thsis hihly offesive."

As you can see, parts of the message have been lost.


Implementation of Secret Sharing

How large must the prime p be?
◮ Larger than the number of people involved.
◮ Larger than the secret.

If the secret s has n bits, then the secret is O(2^n). So we need p > 2^n. The arithmetic is done with log p = O(n)-bit numbers. The runtime is polynomial in the number of bits of the secret and the number of people, i.e., the scheme is efficient.
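
A minimal sketch (my own, not from the slides) of how these parameters play out in code: p is chosen just above both the secret and the number of people, and every operation is done mod p.

    # Sketch of Shamir secret sharing over Z/pZ (illustrative toy parameters).
    import random

    def next_prime(m):
        """Smallest prime > m (trial division; fine for a toy example)."""
        def is_prime(x):
            if x < 2:
                return False
            d = 2
            while d * d <= x:
                if x % d == 0:
                    return False
                d += 1
            return True
        m += 1
        while not is_prime(m):
            m += 1
        return m

    def make_shares(secret, k, people):
        """Split `secret` so that any k of `people` shares recover it."""
        # p must exceed both the secret and the number of people.
        p = next_prime(max(secret, people))
        # Random polynomial of degree k-1 with constant term = secret.
        coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
        shares = [(x, sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p)
                  for x in range(1, people + 1)]
        return p, shares

    def recover(p, shares):
        """Lagrange interpolation at x = 0 recovers the secret from k shares."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % p
                    den = den * (xi - xj) % p
            secret = (secret + yi * num * pow(den, p - 2, p)) % p
        return secret

    p, shares = make_shares(secret=1234, k=3, people=5)
    assert recover(p, shares[:3]) == 1234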

Sending Packets

You want to send a long message.
◮ In Internet communication, the message is divided up into smaller chunks called packets.
◮ Say you want to send n packets, m_0, m_1, ..., m_{n-1}.
◮ In information theory, we say that you send the packets across a channel.
◮ What happens if the channel is imperfect?
◮ First model: when you use the channel, it can drop any k of your packets. Can we still communicate our message?

Reed-Solomon Codes

Encode the packets m_0, m_1, ..., m_{n-1} as values of a polynomial: P(0), P(1), ..., P(n-1).

What is deg P? At most n - 1. Remember: n points determine a polynomial of degree ≤ n - 1.

Then, send (0, P(0)), (1, P(1)), ..., (n+k-1, P(n+k-1)) across the channel.
◮ Note: if the channel drops packets, the receiver knows which packets are dropped.

Property of polynomials: if we receive any n packets, then we can interpolate to recover the message. So if the channel drops at most k packets, we are safe.
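
A small illustrative sketch of this erasure scheme over Z/pZ (toy values of my choosing): the sender fits P through (i, m_i) for i < n, transmits n + k evaluations, and the receiver rebuilds P from any n surviving points.

    # Sketch: Reed-Solomon for erasures over Z/pZ (illustrative only).
    p = 101  # a prime larger than every packet value and than n + k

    def eval_interp(points, x, p):
        """Evaluate at x the unique degree <= len(points)-1 polynomial through `points` (mod p)."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, p - 2, p)) % p
        return total

    message = [5, 17, 42, 99]            # n = 4 packets, each in Z/101Z
    n, k = len(message), 2               # the channel may drop up to k = 2 packets
    base = list(enumerate(message))      # (i, m_i) for i = 0..n-1, i.e. P(i) = m_i

    # Sender: transmit (x, P(x)) for x = 0..n+k-1.
    sent = [(x, eval_interp(base, x, p)) for x in range(n + k)]

    # Channel drops any k packets; the receiver knows which are missing.
    received = [sent[0], sent[2], sent[3], sent[5]]   # any n = 4 of the 6 points

    # Receiver: interpolate through the n received points and read off P(0), ..., P(n-1).
    recovered = [eval_interp(received, i, p) for i in range(n)]
    assert recovered == message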

Alternative Encoding

The message has packets m_0, m_1, ..., m_{n-1}. Instead of encoding the message as the values of the polynomial, we can encode it as the coefficients of the polynomial:

P(x) = m_{n-1} x^{n-1} + ... + m_1 x + m_0.

Then, send (0, P(0)), (1, P(1)), ..., (n+k-1, P(n+k-1)) as before.
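
A quick sketch of the coefficient encoding (continuing the same toy parameters, which are my own): Horner's rule evaluates P at the n + k transmission points.

    # Sketch: the same packets, now used as coefficients of P.
    p = 101
    message = [5, 17, 42, 99]            # m_0, m_1, m_2, m_3
    n, k = len(message), 2

    def poly_eval(coeffs, x, p):
        """Horner's rule for P(x) = coeffs[0] + coeffs[1]*x + ... (mod p)."""
        total = 0
        for c in reversed(coeffs):
            total = (total * x + c) % p
        return total

    # Send (x, P(x)) for x = 0..n+k-1, exactly as in the value encoding.
    sent = [(x, poly_eval(message, x, p)) for x in range(n + k)]

Any n of these points again determine P uniquely, and the receiver reads the packets off as its coefficients (e.g., by solving the corresponding linear system).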

Corruptions

Now you receive the following message:

"As d memkIrOcf tee GVwek tommcnity and X pZrt cf lneTof KVesZ oAcwWizytzoOs this ir higLly offensOvz."

Instead of letters being erased, letters are now corrupted. These are called general errors. Can we still recover the original message? In fact, Reed-Solomon codes still do the job!

A Broader Look at Coding

Suppose we want to send a length-n message m_0, m_1, ..., m_{n-1}. Each packet is in Z/pZ, so the message (m_0, m_1, ..., m_{n-1}) is in (Z/pZ)^n. We want to encode the message into (Z/pZ)^{n+k}. The encoded message is longer because the redundancy is what lets us recover from errors.

Let Encode : (Z/pZ)^n → (Z/pZ)^{n+k} be the encoding function, and let C := range(Encode) be the set of codewords. A codeword is a possible encoded message. We want the codewords to be far apart: well-separated codewords mean we can tolerate errors.
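
As a concrete instance of this framework (a sketch reusing the coefficient encoding above, with tiny parameters of my own so the whole codeword set can be enumerated):

    # Sketch: the RS coefficient encoding as an Encode : (Z/pZ)^n -> (Z/pZ)^(n+k) map.
    from itertools import product

    p, n, k = 5, 2, 1   # tiny parameters so we can enumerate all of C

    def encode(msg):
        """Map (m_0, ..., m_{n-1}) to (P(0), ..., P(n+k-1)) with P(x) = sum m_j x^j (mod p)."""
        return tuple(sum(m * pow(x, j, p) for j, m in enumerate(msg)) % p
                     for x in range(n + k))

    # The codeword set C = range(Encode): p^n codewords inside (Z/pZ)^(n+k).
    C = {encode(msg) for msg in product(range(p), repeat=n)}
    assert len(C) == p ** n   # Encode is injective: 25 codewords out of 125 possible strings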

Hamming Distance

Given two strings s_1 and s_2 of the same length, the Hamming distance d(s_1, s_2) is the number of positions where they differ.

Properties:
◮ d(s_1, s_2) ≥ 0, with equality if and only if s_1 = s_2.
◮ Symmetry: d(s_1, s_2) = d(s_2, s_1).
◮ Triangle inequality: d(s_1, s_3) ≤ d(s_1, s_2) + d(s_2, s_3).

Proof of the triangle inequality:
◮ Start with s_1.
◮ Change d(s_1, s_2) symbols to get s_2.
◮ Change d(s_2, s_3) symbols to get s_3.
◮ So s_1 and s_3 differ in at most d(s_1, s_2) + d(s_2, s_3) symbols.
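
A one-line sketch of the definition, plus a spot check of the triangle inequality on a few strings (my own toy example):

    # Sketch: Hamming distance between equal-length strings, and a triangle-inequality spot check.
    def hamming(s1, s2):
        assert len(s1) == len(s2)
        return sum(a != b for a, b in zip(s1, s2))

    s1, s2, s3 = "karolin", "kathrin", "kerstin"
    assert hamming(s1, s2) == 3
    assert hamming(s1, s3) <= hamming(s1, s2) + hamming(s2, s3)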

Hamming Distance & Error Correction

Theorem: A code can recover from k general errors if the minimum Hamming distance between any two distinct codewords is at least 2k + 1.

Proof.
◮ Suppose we send the codeword c_original.
◮ It gets corrupted to a string s with d(c_original, s) ≤ k.
◮ Consider any other codeword c_other.
◮ By the triangle inequality, d(c_original, c_other) ≤ d(c_original, s) + d(s, c_other).
◮ So 2k + 1 ≤ k + d(s, c_other).
◮ So d(s, c_other) ≥ k + 1.
◮ Since d(c_original, s) ≤ k, the string s is closer to c_original than to any other codeword, so decoding to the nearest codeword recovers c_original.
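
A brute-force sketch of the decoding rule this proof justifies (illustrative only; practical decoders are far more efficient): return the codeword nearest to the received string.

    # Sketch: minimum-distance (nearest-codeword) decoding for a toy code.
    def hamming(s1, s2):
        return sum(a != b for a, b in zip(s1, s2))

    def decode(received, codewords):
        """Return the codeword closest in Hamming distance to `received`."""
        return min(codewords, key=lambda c: hamming(received, c))

    # Toy code with minimum distance 3 = 2*1 + 1, so it corrects k = 1 error.
    codewords = [(0, 0, 0), (1, 1, 1)]
    assert decode((0, 1, 0), codewords) == (0, 0, 0)   # one corrupted symbol is fixed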

Reed-Solomon Codes Revisited

Given a message m = (m_0, m_1, ..., m_{n-1}):
◮ Define P_m(x) = m_{n-1} x^{n-1} + ... + m_1 x + m_0.
◮ Send the codeword (0, P_m(0)), (1, P_m(1)), ..., (n+2k-1, P_m(n+2k-1)).

What are all the possible codewords? All tuples of n + 2k points that come from a polynomial of degree ≤ n - 1.

Hamming Distance of Reed-Solomon Codes

Codewords: all tuples of n + 2k points that come from a polynomial of degree ≤ n - 1.

What is the minimum Hamming distance between distinct codewords? Consider two codewords:

c_1: (0, P_1(0)), (1, P_1(1)), ..., (n+2k-1, P_1(n+2k-1))
c_2: (0, P_2(0)), (1, P_2(1)), ..., (n+2k-1, P_2(n+2k-1))

If d(c_1, c_2) ≤ 2k, then P_1 and P_2 agree on at least n points. But n points uniquely determine a polynomial of degree ≤ n - 1, so P_1 = P_2 and the codewords are identical. Hence distinct codewords are at distance at least 2k + 1, and by the theorem above Reed-Solomon codes can recover from k general errors.
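
A small numerical sanity check (toy parameters of my choosing) that distinct Reed-Solomon codewords really do differ in at least 2k + 1 positions:

    # Sketch: exhaustively check the minimum distance of a tiny RS code over Z/pZ.
    from itertools import product

    p, n, k = 7, 2, 1   # messages of length n = 2; codewords have n + 2k = 4 symbols

    def encode(msg):
        """Codeword of P(x) = m_0 + m_1*x + ... evaluated at x = 0..n+2k-1 (mod p)."""
        return tuple(sum(m * pow(x, j, p) for j, m in enumerate(msg)) % p
                     for x in range(n + 2 * k))

    def hamming(c1, c2):
        return sum(a != b for a, b in zip(c1, c2))

    codewords = [encode(msg) for msg in product(range(p), repeat=n)]
    min_dist = min(hamming(c1, c2)
                   for i, c1 in enumerate(codewords)
                   for c2 in codewords[i + 1:])
    assert min_dist >= 2 * k + 1   # here min_dist is 3: degree <= 1 polynomials agree in at most 1 point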
