21. Dynamic Programming III FPTAS [Ottmann/Widmayer, Kap. 7.2, 7.3, Cormen et al, Kap. 15, 35.5] 575
Approximation Let ε ∈ (0, 1) be given. Let I_opt be an optimal selection. Now try to find a valid selection I with
$$\sum_{i \in I} v_i \ge (1 - \varepsilon) \sum_{i \in I_{\mathrm{opt}}} v_i.$$
The sum of weights must not violate the weight limit. 576
Different formulation of the algorithm Before: weight limit w → maximal value v. Reversed: value v → minimal weight w. ⇒ The alternative table g[i, v] provides the minimum weight of a selection of the first i items (0 ≤ i ≤ n) that achieves a value of exactly v ($0 \le v \le \sum_{i=1}^{n} v_i$). 577
Computation Initially g[0, 0] ← 0 and g[0, v] ← ∞ for v > 0 (value v cannot be achieved with 0 items). Computation:
$$g[i, v] \leftarrow \begin{cases} g[i-1, v] & \text{if } v < v_i,\\ \min\{g[i-1, v],\ g[i-1, v - v_i] + w_i\} & \text{otherwise,}\end{cases}$$
incrementally in i and, for fixed i, with increasing v. The solution can be read off at the largest index v with g[n, v] ≤ w. 578
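As an illustration, here is a minimal Python sketch of this table computation; the function and variable names (min_weight_table, best_value) are my own and not part of the lecture material.

```python
import math

def min_weight_table(items):
    """items: list of (w_i, v_i) pairs.
    Returns g with g[i][v] = minimal weight of a selection of the first i
    items achieving value exactly v (math.inf if v is not achievable)."""
    n = len(items)
    V = sum(v for _, v in items)              # largest achievable total value
    g = [[math.inf] * (V + 1) for _ in range(n + 1)]
    g[0][0] = 0                               # value 0 needs no items, weight 0
    for i in range(1, n + 1):
        w_i, v_i = items[i - 1]
        for v in range(V + 1):
            if v < v_i:                       # item i's value alone exceeds v, it cannot be used
                g[i][v] = g[i - 1][v]
            else:
                g[i][v] = min(g[i - 1][v], g[i - 1][v - v_i] + w_i)
    return g

def best_value(g, W):
    """Largest value v with g[n][v] <= W."""
    return max(v for v, weight in enumerate(g[-1]) if weight <= W)
```

For example, with the items of the following slide, E = [(2, 3), (4, 5), (1, 1)] and weight limit W = 3, best_value(min_weight_table(E), 3) yields 4.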
Example E = {(2, 3), (4, 5), (1, 1)} (items given as pairs (w_i, v_i))

  g[i, v]       v = 0    1    2    3    4    5    6    7    8    9
  i = 0  ∅          0    ∞    ∞    ∞    ∞    ∞    ∞    ∞    ∞    ∞
  i = 1  (2, 3)     0    ∞    ∞    2    ∞    ∞    ∞    ∞    ∞    ∞
  i = 2  (4, 5)     0    ∞    ∞    2    ∞    4    ∞    ∞    6    ∞
  i = 3  (1, 1)     0    1    ∞    2    3    4    5    ∞    6    7

Read out the solution: if g[i, v] = g[i − 1, v], then item i is unused and we continue with g[i − 1, v]; otherwise item i is used and we continue with g[i − 1, v − v_i]. 579
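The readout rule can be turned into a small backtracking routine; this is a sketch in the same hypothetical setting as above (read_solution is an invented name).

```python
def read_solution(g, items, W):
    """Backtrack from the largest value v with g[n][v] <= W and return
    the (1-based) indices of the selected items."""
    n = len(items)
    v = max(u for u, weight in enumerate(g[n]) if weight <= W)
    selection = []
    for i in range(n, 0, -1):
        if g[i][v] == g[i - 1][v]:   # item i unused, continue with g[i-1][v]
            continue
        selection.append(i)          # item i used, continue with g[i-1][v - v_i]
        v -= items[i - 1][1]
    return selection[::-1]
```

On the example table with W = 3 this returns [1, 3]: items (2, 3) and (1, 1), total value 4 and weight 3 ≤ W.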
The approximation trick Pseudopolynomial run time becomes polynomial if the number of occurring values can be bounded by a polynomial in the input length. Let K > 0 be chosen appropriately. Replace the values v_i by “rounded values” $\tilde{v}_i = \lfloor v_i / K \rfloor$, yielding a new input $E' = (w_i, \tilde{v}_i)_{i=1,\dots,n}$. Apply the algorithm to the input E′ with the same weight limit W. 580
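A sketch of the rounding step for a given K (the helper name rounded_instance is illustrative):

```python
def rounded_instance(items, K):
    """Replace each value v_i by floor(v_i / K); the weights stay unchanged."""
    return [(w, int(v // K)) for (w, v) in items]
```

The rounded instance is then solved with the min-weight table from above; the concrete choice of K is discussed on the following slides.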
Idea Example K = 5: values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, . . . , 98, 99, 100 → 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, . . . , 19, 19, 20. Obviously far fewer distinct values. 581
Properties of the new algorithm A selection of items for E′ is also admissible for E. The weight remains unchanged! The run time of the algorithm is bounded by $O(n^2 \cdot v_{\max}/K)$, where $v_{\max} := \max\{v_i \mid 1 \le i \le n\}$. 582
How good is the approximation? It holds that
$$v_i - K \le K \cdot \left\lfloor \frac{v_i}{K} \right\rfloor = K \cdot \tilde{v}_i \le v_i.$$
Let I′_opt be an optimal solution of E′. Then, using |I_opt| ≤ n for the first inequality and the optimality of I′_opt for E′ for the inequality $\sum_{i \in I_{\mathrm{opt}}} \tilde{v}_i \le \sum_{i \in I'_{\mathrm{opt}}} \tilde{v}_i$,
$$\sum_{i \in I_{\mathrm{opt}}} v_i - n \cdot K \;\le\; \sum_{i \in I_{\mathrm{opt}}} (v_i - K) \;\le\; \sum_{i \in I_{\mathrm{opt}}} K \cdot \tilde{v}_i \;=\; K \sum_{i \in I_{\mathrm{opt}}} \tilde{v}_i \;\le\; K \sum_{i \in I'_{\mathrm{opt}}} \tilde{v}_i \;=\; \sum_{i \in I'_{\mathrm{opt}}} K \cdot \tilde{v}_i \;\le\; \sum_{i \in I'_{\mathrm{opt}}} v_i.$$
583
Choice of K Requirement:
$$\sum_{i \in I'_{\mathrm{opt}}} v_i \ge (1 - \varepsilon) \sum_{i \in I_{\mathrm{opt}}} v_i.$$
Inequality from above:
$$\sum_{i \in I'_{\mathrm{opt}}} v_i \ge \sum_{i \in I_{\mathrm{opt}}} v_i - n \cdot K.$$
So the requirement is met as soon as $n \cdot K \le \varepsilon \sum_{i \in I_{\mathrm{opt}}} v_i$, thus
$$K = \varepsilon \cdot \frac{\sum_{i \in I_{\mathrm{opt}}} v_i}{n}.$$
584
Choice of K Choose $K = \varepsilon \cdot \frac{\sum_{i \in I_{\mathrm{opt}}} v_i}{n}$. The optimal sum is unknown. Therefore we choose $K' = \varepsilon \cdot \frac{v_{\max}}{n}$.³⁴ It holds that $v_{\max} \le \sum_{i \in I_{\mathrm{opt}}} v_i$ and thus K′ ≤ K and the approximation is even slightly better. The run time of the algorithm is bounded by
$$O(n^2 \cdot v_{\max}/K') = O(n^2 \cdot v_{\max}/(\varepsilon \cdot v_{\max}/n)) = O(n^3/\varepsilon).$$
³⁴ We can assume that items i with w_i > W have been removed in the first place. 585
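Putting the pieces together, a possible sketch of the whole scheme with K′ = ε · v_max / n; it reuses the hypothetical helpers min_weight_table, read_solution and rounded_instance from above and is an illustration, not a reference implementation.

```python
def knapsack_fptas(items, W, eps):
    """items: list of (w_i, v_i). Returns a selection (1-based indices into the
    filtered item list) whose total value is >= (1 - eps) times the optimum."""
    items = [(w, v) for (w, v) in items if w <= W]   # footnote 34: drop items that never fit
    if not items:
        return []
    v_max = max(v for _, v in items)
    K = eps * v_max / len(items)                     # K' = eps * v_max / n
    if K > 1:                                        # rounding only helps for K > 1
        scaled = rounded_instance(items, K)
    else:
        scaled = items
    g = min_weight_table(scaled)
    return read_solution(g, scaled, W)
```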
FPTAS Such a family of algorithms is called an approximation scheme: the choice of ε controls both running time and approximation quality. The run time $O(n^3/\varepsilon)$ is a polynomial in n and in 1/ε. The scheme is therefore also called an FPTAS - Fully Polynomial Time Approximation Scheme. 586
22. Greedy Algorithms Fractional Knapsack Problem, Huffman Coding [Cormen et al, Kap. 16.1, 16.3] 587
The Fractional Knapsack Problem Set of n ∈ ℕ items {1, . . . , n}. Each item i has value v_i ∈ ℕ and weight w_i ∈ ℕ. The maximum weight is given as W ∈ ℕ. The input is denoted as $E = (v_i, w_i)_{i=1,\dots,n}$. Wanted: fractions 0 ≤ q_i ≤ 1 (1 ≤ i ≤ n) that maximise the sum $\sum_{i=1}^{n} q_i \cdot v_i$ under $\sum_{i=1}^{n} q_i \cdot w_i \le W$. 588
Greedy heuristics Sort the items in decreasing order of value per weight $v_i/w_i$. Assumption: $v_i/w_i \ge v_{i+1}/w_{i+1}$. Let $j = \max\{0 \le k \le n : \sum_{i=1}^{k} w_i \le W\}$. Set $q_i = 1$ for all $1 \le i \le j$, $q_{j+1} = \frac{W - \sum_{i=1}^{j} w_i}{w_{j+1}}$, and $q_i = 0$ for all $i > j + 1$. That is fast: Θ(n log n) for sorting and Θ(n) for the computation of the $q_i$. 589
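A minimal Python sketch of this greedy heuristic; the representation of items as (v_i, w_i) pairs follows the problem statement, the function and variable names are my own.

```python
def fractional_knapsack(items, W):
    """items: list of (v_i, w_i). Returns fractions q_i (in the original item
    order) maximizing sum(q_i * v_i) subject to sum(q_i * w_i) <= W."""
    # sort item indices decreasingly by value per weight v_i / w_i
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    q = [0.0] * len(items)
    remaining = W
    for i in order:
        v, w = items[i]
        if w <= remaining:            # the whole item fits: q_i = 1
            q[i] = 1.0
            remaining -= w
        else:                         # take only the fitting fraction, then stop
            q[i] = remaining / w
            break
    return q
```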
Correctness Assumption: $(r_i)$ ($1 \le i \le n$) is an optimal solution. The knapsack is full: $\sum_i r_i \cdot w_i = \sum_i q_i \cdot w_i = W$. Consider the smallest k with $r_k \ne q_k$. By definition of the greedy algorithm, $q_k > r_k$. Let $x = q_k - r_k > 0$. Construct a new solution $(r'_i)$: $r'_i = r_i$ for all $i < k$ and $r'_k = q_k$. Remove weight $\sum_{i=k+1}^{n} \delta_i = x \cdot w_k$ from the items $k + 1$ to $n$. This works because $\sum_{i=k}^{n} r_i \cdot w_i = \sum_{i=k}^{n} q_i \cdot w_i$. 590
Correctness
$$\begin{aligned} \sum_{i=k}^{n} r'_i v_i &= r_k v_k + x w_k \frac{v_k}{w_k} + \sum_{i=k+1}^{n} (r_i w_i - \delta_i) \frac{v_i}{w_i} \\ &\ge r_k v_k + x w_k \frac{v_k}{w_k} + \sum_{i=k+1}^{n} \left( r_i w_i \frac{v_i}{w_i} - \delta_i \frac{v_k}{w_k} \right) \\ &= r_k v_k + x w_k \frac{v_k}{w_k} - x w_k \frac{v_k}{w_k} + \sum_{i=k+1}^{n} r_i w_i \frac{v_i}{w_i} = \sum_{i=k}^{n} r_i v_i. \end{aligned}$$
(The inequality uses $v_i/w_i \le v_k/w_k$ for $i > k$.) Thus $(r'_i)$ is also optimal. Iterative application of this idea generates the solution $(q_i)$. 591
Huffman Codes Goal: memory-efficient storage of a sequence of characters using a binary code with code words.
Example File consisting of 100,000 characters from the alphabet {a, . . . , f}.

                                a     b     c     d     e     f
  Frequency (thousands)         45    13    12    16    9     5
  Code word (fixed length)      000   001   010   011   100   101
  Code word (variable length)   0     101   100   111   1101  1100

File size (code with fixed length): 300,000 bits. File size (code with variable length): 224,000 bits. 592
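The two file sizes can be recomputed directly from the table; a small sketch (frequencies are in thousands of characters):

```python
freq = {'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}      # thousands of characters
fixed_len = {c: 3 for c in freq}                                 # 3 bits per character
variable_len = {'a': 1, 'b': 3, 'c': 3, 'd': 3, 'e': 4, 'f': 4}  # code word lengths

print(sum(freq[c] * fixed_len[c] for c in freq) * 1000)     # 300000 bits
print(sum(freq[c] * variable_len[c] for c in freq) * 1000)  # 224000 bits
```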
Huffman Codes Consider prefix codes: no code word is a prefix of another code word. Compared with other codes, prefix codes can achieve optimal data compression (without proof here). Encoding: concatenation of the code words without stop characters (in contrast to Morse code). affe → 0 · 1100 · 1100 · 1101 → 0110011001101. Decoding is simple because the code is a prefix code: 0110011001101 → 0 · 1100 · 1100 · 1101 → affe. 593
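Because no code word is a prefix of another, a decoder can emit a character as soon as the bits read so far form a code word; a sketch using the variable-length code from the example:

```python
code = {'a': '0', 'b': '101', 'c': '100', 'd': '111', 'e': '1101', 'f': '1100'}
inverse = {w: c for c, w in code.items()}

def encode(text):
    # concatenate code words without stop characters
    return ''.join(code[c] for c in text)

def decode(bits):
    out, word = [], ''
    for bit in bits:
        word += bit
        if word in inverse:          # a complete code word has been read
            out.append(inverse[word])
            word = ''
    return ''.join(out)

assert encode('affe') == '0110011001101'
assert decode('0110011001101') == 'affe'
```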
Code trees [Figure: two code trees over the frequencies a:45, b:13, c:12, d:16, e:9, f:5; left: the tree of the code words with fixed length, right: the tree of the code words with variable length.] 594
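The variable-length tree in the figure has exactly the code word lengths produced by Huffman's greedy construction, which repeatedly merges the two least frequent subtrees. A minimal sketch using Python's heapq; the names are illustrative, and since tie breaking is arbitrary the concrete 0/1 labels may differ from the figure while the lengths do not.

```python
import heapq

def huffman_code(freq):
    """freq: dict character -> frequency. Returns dict character -> code word."""
    # heap entries: (total frequency, tie-breaker, code table of the subtree)
    heap = [(f, i, {c: ''}) for i, (c, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # the two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {c: '0' + w for c, w in left.items()}
        merged.update({c: '1' + w for c, w in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

lengths = {c: len(w) for c, w in huffman_code(
    {'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}).items()}
# lengths == {'a': 1, 'b': 3, 'c': 3, 'd': 3, 'e': 4, 'f': 4}
```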
Properties of the Code Trees An optimal coding of a file is always represented by a full binary tree: every inner node has two children. 595