
  1. Compression: Prefix Codes
     Greg Plaxton
     Theory in Programming Practice, Spring 2004
     Department of Computer Science, University of Texas at Austin

  2. (Binary, Static) Code
     • Maps each symbol a of a given finite alphabet A to a codeword w(a) in {0, 1}* (i.e., a binary codeword)
       – The mapping is static, i.e., a is always encoded as w(a), regardless of the surrounding context
       – So the mapping determines the encoder
       – But decoding can be problematic (why?)
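The decoding pitfall in the last bullet can be made concrete. A minimal sketch, with a toy alphabet and codewords assumed for illustration (not from the slides):

```python
# A static code: each symbol always maps to the same codeword,
# regardless of context. Alphabet and codewords are assumed toy values.
w = {"a": "0", "b": "01", "c": "10"}

def encode(s):
    # Static encoding: concatenate w(a) for each symbol a.
    return "".join(w[ch] for ch in s)

# Two distinct inputs encode to the same bit string, so a decoder
# cannot recover the original unambiguously:
print(encode("ac"))  # 010
print(encode("ba"))  # 010
```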

  3. Uniquely Decodable Code
     • A code is uniquely decodable if the associated encoder maps distinct input strings to distinct encoded strings
       – Necessary and sufficient for lossless decoding
       – Example of a code that is uniquely decodable? One that is not?
     • Let ℓ(a) denote the length of w(a)
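Unique decodability can be refuted (though not proved) by brute force over short inputs. A sketch, with assumed toy codes:

```python
from itertools import product

def encode(code, s):
    return "".join(code[ch] for ch in s)

def looks_uniquely_decodable(code, max_len=6):
    """Brute-force check: distinct inputs up to max_len symbols must map
    to distinct encodings. A collision refutes unique decodability; the
    absence of one up to max_len does not prove it."""
    seen = {}
    for n in range(1, max_len + 1):
        for tup in product(code, repeat=n):
            enc = encode(code, tup)
            if enc in seen and seen[enc] != tup:
                return False
            seen[enc] = tup
    return True

print(looks_uniquely_decodable({"a": "0", "b": "10", "c": "110"}))  # True (a prefix code)
print(looks_uniquely_decodable({"a": "0", "b": "01", "c": "10"}))   # False ("ac" and "ba" collide)
```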

  4. Optimal Code
     • Suppose we are given a frequency f(a) for each symbol a in A
       – Let p(a) denote f(a) / Σ_{b∈A} f(b)
       – Note that p(a) may be viewed as a probability
     • We define the weight of a code as Σ_{a∈A} p(a) · ℓ(a)
     • A code is optimal (for a given alphabet and associated probability distribution) if it has minimum weight over all uniquely decodable codes
       – Remark: Keep in mind that we are only talking about optimality with respect to the set of binary static codes; we will revisit this issue later
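The weight computation is a one-liner once the frequencies are normalized. A sketch, with frequencies and a code assumed for illustration:

```python
# Weight (expected codeword length) of a code, computed from raw
# frequencies. Frequencies and codewords below are assumed toy values.
freq = {"a": 5, "b": 2, "c": 1}
code = {"a": "0", "b": "10", "c": "11"}

total = sum(freq.values())
p = {a: f / total for a, f in freq.items()}

# weight = Σ_{a∈A} p(a) · ℓ(a), with ℓ(a) = len(code[a])
weight = sum(p[a] * len(code[a]) for a in code)
print(weight)  # (5·1 + 2·2 + 1·2) / 8 = 1.375
```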

  5. An Entropy-Based Lower Bound on Code Weight
     • Let H denote the entropy of the probability distribution associated with alphabet A, i.e.,
           H = − Σ_{a∈A} p(a) log p(a)
     • Theorem: The weight of any uniquely decodable code for A is at least H
     • Hint: Use the two inequalities given on the next slide and the fact that the logarithm function is concave over the positive reals
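The bound can be checked numerically; base-2 logarithms are the natural choice since codewords are binary. A sketch, reusing the assumed distribution p(a) = 5/8, 2/8, 1/8 from the weight example:

```python
from math import log2

# Entropy H = −Σ p(a)·log2 p(a) for an assumed distribution.
p = {"a": 5/8, "b": 2/8, "c": 1/8}

H = -sum(q * log2(q) for q in p.values())
print(round(H, 4))  # 1.2988, below the code weight 1.375 computed earlier
```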

  6. Two Inequalities
     • McMillan: Any uniquely decodable code satisfies
           Σ_{a∈A} 2^(−ℓ(a)) ≤ 1
     • Jensen: If λ_1, …, λ_n are nonnegative reals summing to 1 and f is a concave function over an interval containing the reals x_1, …, x_n, then
           Σ_i λ_i · f(x_i) ≤ f(Σ_i λ_i · x_i)
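McMillan's sum is easy to evaluate for a given multiset of codeword lengths; a minimal sketch:

```python
def kraft_sum(lengths):
    """Σ 2^(−ℓ) over the codeword lengths. By McMillan's inequality,
    this is at most 1 for every uniquely decodable code."""
    return sum(2 ** -l for l in lengths)

print(kraft_sum([1, 2, 2]))  # 0.5 + 0.25 + 0.25 = 1.0
print(kraft_sum([1, 1, 2]))  # 1.25 > 1: no uniquely decodable code has these lengths
```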

  7. Prefix Code
     • A prefix code is a code in which no codeword is a prefix of another
       – Uniquely decodable
       – Easy to decode
     • Exercise: Give an example of a code that is uniquely decodable but is not a prefix code
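"Easy to decode" means a greedy left-to-right scan suffices: since no codeword is a prefix of another, the first codeword that matches is the only one that can. A sketch, assuming a toy prefix code:

```python
# Greedy decoding of a prefix code: grow a chunk bit by bit and emit a
# symbol as soon as the chunk matches a codeword.
code = {"a": "0", "b": "10", "c": "11"}  # an assumed prefix code
inv = {w: a for a, w in code.items()}

def decode(bits):
    out, chunk = [], ""
    for bit in bits:
        chunk += bit
        if chunk in inv:          # first match is the only possible one
            out.append(inv[chunk])
            chunk = ""
    if chunk:
        raise ValueError("input ends mid-codeword: " + chunk)
    return "".join(out)

print(decode("010110"))  # abca
```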

  8. Kraft-McMillan Inequality
     • Kraft: For any sequence of integers ℓ_1, …, ℓ_{|A|} such that Σ_{1≤i≤|A|} 2^(−ℓ_i) ≤ 1, there is a prefix code for A with codeword lengths ℓ_1, …, ℓ_{|A|}
     • Since every uniquely decodable code satisfies McMillan's inequality, we can restrict our attention to prefix codes in searching for an optimal code
     • McMillan's inequality and the above result are often stated together (in two parts) and referred to as the Kraft-McMillan inequality
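Kraft's half of the inequality is constructive, and one standard proof translates directly to code: process the lengths in increasing order, and take codeword i to be the ℓ_i-bit binary expansion of the partial sum Σ_{j<i} 2^(−ℓ_j). This executable sketch is an assumption of that standard construction, not part of the original slides:

```python
def kraft_code(lengths):
    """Build a prefix code with the given codeword lengths, assuming
    Σ 2^(−ℓ_i) ≤ 1. Codeword i is the ℓ_i-bit binary expansion of the
    running sum of 2^(−ℓ_j) over the earlier (shorter) codewords."""
    assert sum(2 ** -l for l in lengths) <= 1
    words, acc = [], 0.0
    for l in sorted(lengths):
        v = int(acc * (1 << l))          # first l bits of acc's expansion
        words.append(format(v, f"0{l}b"))
        acc += 2 ** -l
    return words

print(kraft_code([1, 2, 3, 3]))  # ['0', '10', '110', '111']
```

The powers 2^(−ℓ) are exact in binary floating point for realistic lengths, which is why the running sum stays exact here.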

  9. An Entropy-Based Upper Bound on the Weight of an Optimal Code
     • Theorem: There is an optimal (prefix) code for A with weight less than H + 1
     • Hint: First use the Kraft-McMillan inequality to establish the existence of a prefix code for A where ℓ(a) = ⌈log(1/p(a))⌉ for all a in A
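The hinted lengths can be checked numerically for an assumed distribution: they satisfy Kraft's inequality (so a prefix code with those lengths exists), and the resulting weight lands in [H, H + 1):

```python
from math import ceil, log2

# Lengths ℓ(a) = ⌈log2(1/p(a))⌉ for an assumed distribution.
p = {"a": 0.5, "b": 0.3, "c": 0.2}

lengths = {a: ceil(log2(1 / q)) for a, q in p.items()}
kraft = sum(2 ** -l for l in lengths.values())
weight = sum(p[a] * lengths[a] for a in p)
H = -sum(q * log2(q) for q in p.values())

print(lengths, kraft)        # {'a': 1, 'b': 2, 'c': 3} 0.875
print(H <= weight < H + 1)   # True
```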

  10. Summary and Discussion of Entropy-Based Bounds
     • The weight of an optimal prefix code lies in the interval [H, H + 1)
     • If H is high, then an optimal prefix code is guaranteed to achieve close to the best possible compression ratio achievable with any coding technique (static or not)
       – Here we are appealing to Shannon's entropy bound
     • If H is close to zero, then an optimal prefix code might achieve a compression ratio that is dramatically worse than the best possible
       – Example?
       – Other compression techniques may be applied in such situations in order to achieve near-optimal performance (e.g., arithmetic coding or run-length coding)

  11. Computing an Optimal Prefix Code
     • Huffman's algorithm will be presented in the next lecture
