

  1. Lecture 5: Lossless Coding (II), May 20, 2009
     Shujun LI (李树钧), INF-10845-20091 Multimedia Coding

  2. Outline
     • Review
     • Arithmetic Coding
     • Dictionary Coding
     • Run-Length Coding
     • Lossless Image Coding
     • Data Compression: Hall of Fame
     • References

  3. Review

  4. Image and video encoding: A big picture
     [Block diagram: Input Image/Video → Pre-Processing (A/D Conversion, Color Space Conversion, Pre-Filtering, Partitioning, …) → Predictive Coding (Differential Coding, Motion Estimation and Compensation, Context-Based Coding, …) → Lossy Coding (Quantization, Transform Coding, Model-Based Coding, …) → Lossless Coding (Entropy Coding, Dictionary-Based Coding, Run-Length Coding, …) → Post-Processing (Post-Filtering) → Encoded Image/Video]

  5. The ingredients of entropy coding
     • A random source (X, P)
     • A statistical model (X, P') as an estimation of the random source
     • An algorithm to optimize the coding performance (i.e., to minimize the average codeword length)
     • At least one designer …

  6. FLC, VLC and V2FLC
     • FLC = Fixed-length coding/code(s)/codeword(s)
       • Each symbol x_i emitted from a random source (X, P) is encoded as an n-bit codeword, where |X| ≤ 2^n.
     • VLC = Variable-length coding/code(s)/codeword(s)
       • Each symbol x_i emitted from a random source (X, P) is encoded as an n_i-bit codeword.
       • FLC can be considered as a special case of VLC, where n_1 = … = n_|X|.
     • V2FLC = Variable-to-fixed-length coding/code(s)/codeword(s)
       • A symbol or a string of symbols is encoded as an n-bit codeword.
       • V2FLC can also be considered as a special case of VLC.

  7. Static coding vs. Dynamic/Adaptive coding
     • Static coding = The statistical model P' is static, i.e., it does not change over time.
     • Dynamic/Adaptive coding = The statistical model P' is dynamically updated, i.e., it adapts itself to the context (changes over time).
       • Dynamic/Adaptive coding ⊂ Context-based coding
     • Hybrid coding = Static + Dynamic coding
       • A codebook is maintained at the encoder side; the encoder dynamically chooses a code for a number of symbols and informs the decoder about the choice.

  8. A list of codes
     • Shannon coding
     • Shannon-Fano coding
     • Huffman coding
     • Arithmetic coding (Range coding)
       • Shannon-Fano-Elias coding
     • Universal coding
       • Exp-Golomb coding (H.264/MPEG-4 AVC, Dirac)
       • Elias coding family, Levenshtein coding, …
     • Non-universal coding
       • Truncated binary coding, unary coding, …
       • Golomb coding ⊃ Rice coding
     • Tunstall coding ⊂ V2FLC
     • …
     See: David Salomon, Variable-length Codes for Data Compression, Springer, 2007

  9. Shannon-Fano Code: An example
     • X = {A, B, C, D, E}, P = {0.35, 0.2, 0.19, 0.13, 0.13}, Y = {0, 1}
     • Splitting: {A, B} (0.35 + 0.2 = 0.55) vs. {C, D, E} (0.19 + 0.13 + 0.13 = 0.45); then {A} vs. {B}, and {C} (0.19) vs. {D, E} (0.13 + 0.13 = 0.26); finally {D} vs. {E}.
     • A possible code: A → 00, B → 01, C → 10, D → 110, E → 111
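The recursive splitting above can be sketched in Python (a minimal illustration, not code from the lecture; the function name and the tie-breaking rule for the split point are my own choices):

```python
def shannon_fano(symbols):
    """Shannon-Fano coding. symbols: list of (symbol, prob), sorted by
    descending probability. Returns {symbol: codeword}."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    # Choose the split that makes the two groups' total probabilities closest.
    total = sum(p for _, p in symbols)
    best_i, best_diff, run = 1, float("inf"), 0.0
    for i in range(1, len(symbols)):
        run += symbols[i - 1][1]
        diff = abs(run - (total - run))
        if diff < best_diff:
            best_i, best_diff = i, diff
    # Left group gets prefix 0, right group gets prefix 1; recurse.
    code = {s: "0" + w for s, w in shannon_fano(symbols[:best_i]).items()}
    code.update({s: "1" + w for s, w in shannon_fano(symbols[best_i:]).items()})
    return code

code = shannon_fano([("A", 0.35), ("B", 0.2), ("C", 0.19), ("D", 0.13), ("E", 0.13)])
```

For the slide's source this reproduces the code shown: A → 00, B → 01, C → 10, D → 110, E → 111.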

  10. Universal coding (code)
     • A code is called universal if L ≤ C_1(H + C_2) for all possible values of H, where C_1, C_2 ≥ 1.
       • You may see a different definition somewhere, but the basic idea remains the same: a universal code works like an optimal code, except there is a bound defined by a constant C_1.
     • A universal code is called asymptotically optimal if C_1 → 1 when H → ∞.

  11. Coding positive/non-negative integers
     • Naive binary coding
       • |X| = 2^k: the k-bit binary representation of an integer.
     • Truncated binary coding
       • |X| = 2^k + b: X = {0, 1, …, 2^k + b − 1}
       • The first 2^k − b symbols (0 through 2^k − b − 1) get k-bit codewords 0…0 through b_0…b_{k−1}; the remaining 2b symbols (2^k − b through 2^k + b − 1) get (k+1)-bit codewords b_0…b_k through 1…1.
     • Unary code (Stone-age binary coding)
       • |X| = ∞: X = Z^+ = {1, 2, …}
       • f(x) = 0…0 1 (x − 1 zeros followed by a 1) or 1…1 0 (x − 1 ones followed by a 0).
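The two codes above can be sketched as follows (a minimal illustration with my own function names; `truncated_binary` follows the convention that the first 2^k − b symbols get the short codewords):

```python
def unary(x):
    """Unary code of x in {1, 2, ...}: (x - 1) zeros followed by a 1."""
    return "0" * (x - 1) + "1"

def truncated_binary(x, n):
    """Truncated binary code of x in {0, ..., n - 1}, where n = 2^k + b."""
    k = n.bit_length() - 1        # floor(log2(n))
    u = (1 << (k + 1)) - n        # u = 2^k - b symbols get the short k-bit codes
    if x < u:
        return format(x, f"0{k}b") if k else ""
    return format(x + u, f"0{k + 1}b")   # long (k+1)-bit codewords

# The |X| = 5 = 2^2 + 1 case: three 2-bit codewords, then two 3-bit ones.
codes = [truncated_binary(x, 5) for x in range(5)]
```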

  12. Golomb coding and Rice coding
     • Golomb coding = Unary coding + Truncated binary coding
       • An integer x is divided into two parts (quotient and remainder) according to a parameter M: q = ⌊x/M⌋, r = mod(x, M) = x − q·M.
       • Golomb code = unary code of q + truncated binary code of r.
       • When M = 1, Golomb coding = unary coding.
       • When M = 2^k, Golomb coding = Rice coding.
       • Golomb code is the optimal code for the geometric distribution: Prob(x = i) = (1 − p)^{i−1} p, where 0 < p < 1.
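A sketch of the encoder (my own helper structure; the truncated-binary step is inlined so the block is self-contained):

```python
def golomb(x, M):
    """Golomb code of x >= 0 with parameter M:
    unary code of the quotient q, then truncated binary code of the remainder r."""
    q, r = divmod(x, M)
    prefix = "0" * q + "1"            # unary part: q zeros, then a terminating 1
    k = M.bit_length() - 1            # truncated binary part for r in {0, ..., M-1}
    u = (1 << (k + 1)) - M            # number of short (k-bit) remainder codewords
    if r < u:
        suffix = format(r, f"0{k}b") if k else ""
    else:
        suffix = format(r + u, f"0{k + 1}b")
    return prefix + suffix
```

With M = 1 the suffix is empty (pure unary coding); with M = 2^k the suffix is always exactly k bits (Rice coding).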

  13. Exp-Golomb coding (Universal)
     • Exp-Golomb coding ≠ Golomb coding
     • Exp-Golomb coding of order k = 0 is used in some video coding standards such as H.264.
     • The encoding process
       • |X| = ∞: X = {0, 1, 2, …}
       • Calculate q = ⌊x/2^k⌋ + 1 and n_q = ⌊log_2 q⌋.
       • Exp-Golomb code = unary code of n_q (n_q zeros followed by a 1) + the n_q LSBs of q + the k-bit representation of r = mod(x, 2^k) = x − (q − 1)·2^k.
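The steps above can be sketched as follows. Note that the 1 terminating the unary part is also the MSB of q, so the first two pieces together are simply the (n_q + 1)-bit binary form of q preceded by n_q zeros (a minimal illustration; the function name is my own):

```python
def exp_golomb(x, k=0):
    """Order-k Exp-Golomb code of x >= 0; order 0 is H.264's ue(v) code."""
    q = (x >> k) + 1                 # q = floor(x / 2^k) + 1
    n_q = q.bit_length() - 1         # n_q = floor(log2(q))
    code = "0" * n_q + format(q, "b")            # n_q zeros + binary form of q
    if k:
        code += format(x & ((1 << k) - 1), f"0{k}b")  # k LSBs of x (= r)
    return code
```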

  14. Huffman code: An example
     • X = {1, 2, 3, 4, 5}, P = [0.4, 0.2, 0.2, 0.1, 0.1].
     • Merging: p_{4+5} = 0.2; p_{3+4+5} = 0.4; p_{1+2} = 0.6; p_{1+2+3+4+5} = 1.
     • A possible code: 1 → 00, 2 → 01, 3 → 10, 4 → 110, 5 → 111
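The bottom-up merging can be sketched with a heap (a generic illustration, not code from the lecture; because ties between equal probabilities are broken by insertion order here, the individual codewords may differ from the slide's tree, but the codeword lengths and the average length agree):

```python
import heapq
from itertools import count

def huffman(probs):
    """Huffman coding: {symbol: prob} -> {symbol: codeword}."""
    tick = count()                  # tie-breaker so the heap never compares dicts
    heap = [(p, next(tick), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)   # pop the two least probable nodes
        p2, _, b = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in a.items()}   # prepend branch bits
        merged.update({s: "1" + w for s, w in b.items()})
        heapq.heappush(heap, (p1 + p2, next(tick), merged))
    return heap[0][2]

probs = {1: 0.4, 2: 0.2, 3: 0.2, 4: 0.1, 5: 0.1}
code = huffman(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)   # 2.2 bits/symbol
```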

  15. Huffman code: An optimal code
     • Relation between Huffman code and Shannon code: H ≤ L_Huffman ≤ L_Shannon-Fano ≤ L_Shannon < H + 1
     • A stronger result (Gallager, 1978)
       • When p_max ≥ 0.5, L_Huffman ≤ H + p_max < H + 1.
       • When p_max < 0.5, L_Huffman ≤ H + p_max + log_2(2(log_2 e)/e) ≈ H + p_max + 0.086 < H + 0.586.
     • Huffman's rules of optimal codes imply that Huffman code is optimal.
       • When each p_i is a negative power of 2, Huffman code reaches the entropy.

  16. Huffman code: Small X problem
     • Problem
       • When |X| is small, the coding gain is limited.
       • As a special case, when |X| = 2, Huffman coding cannot compress the data at all: no matter what the probabilities are, each symbol has to be encoded as a single bit.
     • Solutions
       • Solution 1: Work on X^n rather than X.
       • Solution 2: Dual tree coding = Huffman coding + Tunstall coding
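Solution 1 can be checked numerically. For a binary source with P = {0.9, 0.1} (my example, not from the slides), Huffman coding on X needs exactly 1 bit/symbol, but the same coder applied to pairs from X^2 already does better:

```python
import heapq
from itertools import count, product

def huffman(probs):
    """Minimal Huffman coder: {symbol: prob} -> {symbol: codeword}."""
    tick = count()                  # tie-breaker so the heap never compares dicts
    heap = [(p, next(tick), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        node = {s: "0" + w for s, w in a.items()}
        node.update({s: "1" + w for s, w in b.items()})
        heapq.heappush(heap, (p1 + p2, next(tick), node))
    return heap[0][2]

p = {"a": 0.9, "b": 0.1}
# Build the extended source X^2: four pair-symbols with product probabilities.
pairs = {x + y: p[x] * p[y] for x, y in product(p, repeat=2)}
code = huffman(pairs)
bits_per_symbol = sum(pairs[s] * len(code[s]) for s in pairs) / 2
```

Here `bits_per_symbol` comes out at 0.645, already well below 1 bit/symbol; larger n gets closer to the entropy (about 0.469 bits/symbol for this source).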

  17. Huffman code: Variance problem
     • Problem
       • There are multiple choices of the two smallest probabilities if more than two nodes have the same probability at some step of the coding process.
       • Huffman codes with a larger variance may cause trouble for data transmission over a CBR (constant bit rate) channel: a larger buffer is needed.
     • Solution
       • Merge shorter subtrees first. (A single node's height is 0.)

  18. Modified Huffman code
     • Problem
       • If |X| is too large, the construction of the Huffman tree takes too long and the memory used for the tree is too demanding.
     • Solution
       • Divide X into two sets X_1 = {s_i | p(s_i) > 2^−v} and X_2 = {s_i | p(s_i) ≤ 2^−v}.
       • Perform Huffman coding on the new set X_3 = X_1 ∪ {X_2}.
       • Prepend f(X_2) as the prefix of the naive binary representation of every symbol in X_2.

  19. Huffman's rules of making optimal codes
     • Source statistics: P = P_0 = [p_1, …, p_m], where p_1 ≥ … ≥ p_{m−1} ≥ p_m.
     • Rule 1: L_1 ≤ … ≤ L_{m−1} = L_m.
     • Rule 2: If L_1 ≤ … ≤ L_{m−2} < L_{m−1} = L_m, the codewords of x_{m−1} and x_m differ only in the last bit, i.e., f(x_{m−1}) = b0 and f(x_m) = b1, where b is a sequence of L_m − 1 bits.
     • Rule 3: Each possible bit sequence of length L_m − 1 must be either a codeword or the prefix of some codeword.
     • Answers: read Section 5.2.1 (pp. 122-123) of the following book: Yun Q. Shi and Huifang Sun, Image and Video Compression for Multimedia Engineering, 2nd Edition, CRC Press, 2008

  20. Justify Huffman's rules
     • Rule 1
       • If L_i > L_{i+1}, we can swap them to get a smaller average codeword length (when p_i = p_{i+1} it makes no difference, though).
       • If L_{m−1} < L_m, then L_1, …, L_{m−1} < L_m and the last bit of the longest codeword is redundant.
     • Rule 2
       • If the two codewords do not share the same parent node, the last bits of both codewords are redundant.
     • Rule 3
       • If there is an unused bit sequence of length L_m − 1, we can use it as the codeword of x_m.

  21. Arithmetic Coding
