

  1. Chapter 3 Dynamic programming

  2. • Dynamic programming also solves a problem by combining the solutions to subproblems. • But dynamic programming addresses the situation in which some subproblems are called repeatedly, and thus it needs to avoid repeating that work.

  3. A typical application of dynamic programming is to optimization problems. Developing such an algorithm follows four steps:
     1. Characterize the structure of an optimal solution.
     2. Recursively define the value of an optimal solution.
     3. Compute the value of an optimal solution, typically in a bottom-up fashion.
     4. Construct an optimal solution from computed information.

  4. Rod cutting problem. The rod cutting problem is the following. Given a rod of length n inches and a table of prices p_i for i = 1, . . . , n, determine the maximum revenue r_n obtainable by cutting up the rod and selling the pieces. The following is an example of a price table.
     length i :  1  2  3  4  5   6   7   8   9   10
     price p_i:  1  5  8  9  10  17  17  20  24  30

  5. • For n = 4, we may cut as: (1, 1, 1, 1), (1, 1, 2), (2, 2), (1, 3), (4); the corresponding revenues are 4, 7, 10, 9, 9, respectively. • So the optimal revenue is 10, obtained by cutting the 4-inch rod into two 2-inch pieces.

  6. By inspection, we can obtain the optimal decompositions as follows.
     r_1  = 1   from solution 1 = 1 (no cuts)
     r_2  = 5   from solution 2 = 2 (no cuts)
     r_3  = 8   from solution 3 = 3 (no cuts)
     r_4  = 10  from solution 4 = 2 + 2
     r_5  = 13  from solution 5 = 2 + 3
     r_6  = 17  from solution 6 = 6 (no cuts)
     r_7  = 18  from solution 7 = 1 + 6 or 7 = 2 + 2 + 3
     r_8  = 22  from solution 8 = 2 + 6
     r_9  = 25  from solution 9 = 3 + 6
     r_10 = 30  from solution 10 = 10 (no cuts)

  7. In general, for a rod of length n, there are 2^(n−1) different ways to cut it, since we have an independent option of cutting or not cutting at each of the n − 1 positions lying i inches from one end, for i = 1, . . . , n − 1. Suppose an optimal solution cuts the rod into k pieces with lengths i_1, i_2, . . . , i_k. Then n = i_1 + i_2 + · · · + i_k, and the corresponding optimal revenue is r_n = p_{i_1} + p_{i_2} + · · · + p_{i_k}.
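
As a quick sanity check of both the count 2^(n−1) and the optimal revenue, the following Python sketch (not part of the original slides; the price dictionary and function name are only illustrative) enumerates every cutting of a rod of length 4 by brute force:

    from itertools import product

    # Example price table from the slides: p[i] is the price of a piece of length i.
    p = {1: 1, 2: 5, 3: 8, 4: 9, 5: 10, 6: 17, 7: 17, 8: 20, 9: 24, 10: 30}

    def all_cuttings(n):
        """Yield every cutting of a rod of length n as a tuple of piece lengths.

        Each of the n - 1 internal positions is independently cut or not cut,
        which gives 2**(n - 1) cuttings in total.
        """
        for pattern in product([False, True], repeat=n - 1):
            pieces, start = [], 0
            for pos, cut in enumerate(pattern, start=1):
                if cut:
                    pieces.append(pos - start)
                    start = pos
            pieces.append(n - start)
            yield tuple(pieces)

    cuttings = list(all_cuttings(4))
    print(len(cuttings))                                # 8 == 2**(4 - 1)
    print(max(sum(p[i] for i in c) for c in cuttings))  # 10, from cutting into 2 + 2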

  8. Our purpose is to compute r_n for a given n and prices p_i, i = 1, . . . , n. When we consider dividing the problem, we can use the following recurrence:
     r_n = max(p_n, r_1 + r_{n−1}, r_2 + r_{n−2}, · · · , r_{n−1} + r_1).
     The first case corresponds to making no cut at all. For example, r_4 = max(p_4, r_1 + r_3, r_2 + r_2, r_3 + r_1) = max(9, 1 + 8, 5 + 5, 8 + 1) = 10. The other cases exhibit optimal substructure: optimal solutions to a problem incorporate optimal solutions to related subproblems, which we may solve independently.

  9. Simplifying the above method a little, we can instead consider the case in which the first piece cut off has length i. Then
     r_n = max_{1 ≤ i ≤ n} (p_i + r_{n−i}).
     In this formulation, a solution embodies the solution to only one related subproblem.

  10. The following procedure implements the method. The inputs of the procedure are the length n and the prices p[1 . . n].
      1: procedure Cut-Rod(p, n)
      2:   if n == 0 then
      3:     return 0
      4:   end if
      5:   q = −∞
      6:   for i = 1 to n do
      7:     q = max(q, p[i] + Cut-Rod(p, n − i))
      8:   end for
      9:   return q
      10: end procedure
      A simple induction on n proves that the value returned by the procedure equals r_n.
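
A direct Python transcription of Cut-Rod might look like the sketch below (not from the slides; it assumes the prices are given as a list in which p[0] is unused, so that p[i] is the price of a piece of length i):

    def cut_rod(p, n):
        """Return the maximum revenue r_n for a rod of length n (naive recursion)."""
        if n == 0:
            return 0
        q = float("-inf")
        for i in range(1, n + 1):
            # Best revenue if the first piece has length i.
            q = max(q, p[i] + cut_rod(p, n - i))
        return q

    prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # p[1..10] from the example table
    print(cut_rod(prices, 4))   # 10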

  11. This procedure is very inefficient, because Cut-Rod calls itself recursively again and again. Suppose the running time of the procedure is T(n). Then we have the recurrence
      T(n) = 1 + Σ_{j=0}^{n−1} T(j).
      It is easy to prove by mathematical induction that T(n) = 2^n: T(0) = 1 = 2^0, and if T(j) = 2^j for all j < n, then T(n) = 1 + Σ_{j=0}^{n−1} 2^j = 1 + (2^n − 1) = 2^n. So the running time of Cut-Rod is exponential in n.

  12. To see why the procedure is inefficient, we draw the recursion tree of Cut-Rod for n = 4 in Figure 1. In the tree, each vertex corresponds to one procedure call, and the number in the vertex is the value of the parameter n for that call. Figure 1: Recursion tree for Cut-Rod(p, 4)

  13. • From the recursion tree, we see that the same subproblems are computed again and again. • In this example, Cut-Rod(p, 1) is computed 4 times, Cut-Rod(p, 0) is computed 8 times, etc. • To improve the method, we will use dynamic programming. • The main idea of dynamic programming is to arrange for each subproblem to be solved only once. Each time a subproblem is solved, its result is stored for later calls, so the next time we need to solve that subproblem we just look it up. Dynamic programming uses additional memory to save computation time.
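
The repeated work can be made visible by counting calls. The sketch below (illustrative only; the Counter instrumentation is not in the slides) runs the naive recursion for n = 4 and tallies how often each subproblem size is solved:

    from collections import Counter

    calls = Counter()

    def cut_rod_counted(p, n):
        calls[n] += 1                    # record one call for subproblem size n
        if n == 0:
            return 0
        q = float("-inf")
        for i in range(1, n + 1):
            q = max(q, p[i] + cut_rod_counted(p, n - i))
        return q

    prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
    cut_rod_counted(prices, 4)
    print(calls)                 # size 1 is solved 4 times, size 0 is solved 8 times
    print(sum(calls.values()))   # 16 == 2**4 total calls, matching T(n) = 2^n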

  14. There are two ways to implement a dynamic-programming approach. • The first approach is top-down with memoization. In this approach, the procedure still runs recursively in a natural manner, but it is modified to save the result of each subproblem (in an array or hash table). The procedure first checks whether the subproblem has previously been solved. If so, it just returns the saved result; if not, it computes the result in the usual manner, saves it, and returns it.

  15. • The second approach is the bottom-up method. This approach typically depends on some natural notion of the “size” of a subproblem, such that solving any particular subproblem depends only on solving “smaller” subproblems. We sort the subproblems by size and solve them in order, smallest first, so each subproblem is solved once. When we solve a subproblem, the prerequisite subproblems have already been solved.

  16. The top-down approach for the rod cutting problem is as follows.
      1: procedure Memoized-Cut-Rod(p, n)
      2:   let r[0 . . n] be a new array
      3:   for i = 0 to n do
      4:     r[i] = −∞
      5:   end for
      6:   return Memoized-Cut-Rod-Aux(p, n, r)
      7: end procedure

  17. 1: procedure Memoized-Cut-Rod-Aux(p, n, r)
      2:   if r[n] ≥ 0 then
      3:     return r[n]
      4:   end if
      5:   if n == 0 then
      6:     q = 0
      7:   else
      8:     q = −∞
      9:     for i = 1 to n do
      10:      q = max(q, p[i] + Memoized-Cut-Rod-Aux(p, n − i, r))
      11:    end for
      12:  end if
      13:  r[n] = q
      14:  return q
      15: end procedure

  18. • The main procedure Memoized-Cut-Rod just initializes an auxiliary array r and then calls Memoized-Cut-Rod-Aux. • The latter is the memoized version of Cut-Rod. It returns the result from the auxiliary array if that result already exists; otherwise it computes the result.
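
In Python the two procedures can be folded into a single memoized function; the sketch below (not from the slides) uses a dictionary as the memo table instead of an array initialized to −∞:

    def memoized_cut_rod(p, n, memo=None):
        """Top-down cut-rod; memo maps a rod length to its best revenue."""
        if memo is None:
            memo = {}
        if n in memo:
            return memo[n]          # subproblem already solved: just look it up
        if n == 0:
            q = 0
        else:
            q = float("-inf")
            for i in range(1, n + 1):
                q = max(q, p[i] + memoized_cut_rod(p, n - i, memo))
        memo[n] = q                 # save the result for later calls
        return q

    prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
    print(memoized_cut_rod(prices, 10))   # 30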

  19. The bottom-up version is as follows.
      1: procedure Bottom-Up-Cut-Rod(p, n)
      2:   let r[0 . . n] be a new array
      3:   r[0] = 0
      4:   for j = 1 to n do
      5:     q = −∞
      6:     for i = 1 to j do
      7:       q = max(q, p[i] + r[j − i])
      8:     end for
      9:     r[j] = q
      10:  end for
      11:  return r[n]
      12: end procedure

  20. • The above procedure first creates a new array r, then calculates the values of r from the smallest index to the largest. • When computing r[j], all the values r[j − i] have already been computed. Therefore line 7 just uses these stored values instead of making recursive calls.
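
A Python sketch of the bottom-up procedure (again assuming a price list with p[0] unused) is:

    def bottom_up_cut_rod(p, n):
        """Bottom-up cut-rod: r[j] holds the best revenue for a rod of length j."""
        r = [0] * (n + 1)
        for j in range(1, n + 1):             # solve subproblems smallest first
            q = float("-inf")
            for i in range(1, j + 1):
                q = max(q, p[i] + r[j - i])   # r[j - i] is already available
            r[j] = q
        return r[n]

    prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
    print(bottom_up_cut_rod(prices, 7))   # 18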

  21. • The running time of Bottom-Up-Cut-Rod is Θ(n^2), because of its doubly nested for loop. • The running time of the top-down approach is also Θ(n^2). • Although line 10 of Memoized-Cut-Rod-Aux makes a recursive call, each value r[i] is computed just once. Therefore the total number of iterations of its for loop forms an arithmetic series, which gives Θ(n^2) iterations in total.

  22. When we think about a dynamic-programming problem, it is important to understand the set of subproblems involved and how they depend on one another. We can use a subproblem graph to capture this information. Figure 2 is the subproblem graph for the rod cutting problem with n = 4.

  23. Figure 2: Subproblem graph for the cut-rod problem

  24. • The subproblem graph is a digraph in which each vertex represents a distinct subproblem, and each arc indicates that an optimal solution of one subproblem requires a solution of another subproblem. • For the top-down approach, Figure 2 shows that vertex 4 needs a solution of vertex 3, vertex 3 needs a solution of vertex 2, etc. • The bottom-up approach first solves vertex 1 from vertex 0, then solves vertex 2 from vertices 0 and 1, etc.

  25. • The size of the subproblem graph can help us determine the running time of the dynamic-programming algorithm. • Since each subproblem is solved only once, the running time is the sum of the times needed to solve each subproblem. • Typically, the time to compute the solution of a subproblem is proportional to the degree (number of outgoing arcs) of the corresponding vertex in the subproblem graph, and the number of subproblems is equal to the number of vertices. In this common case, the running time of dynamic programming is linear in the number of vertices and arcs.
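
For the cut-rod problem, the subproblem graph is easy to construct explicitly. The sketch below (illustrative only; not part of the slides) builds it as an adjacency list and counts its vertices and arcs for n = 4:

    def cut_rod_subproblem_graph(n):
        """Adjacency list of the cut-rod subproblem graph.

        Vertex j is the subproblem "best revenue for a rod of length j";
        an arc j -> j - i means that solving j uses the solution of j - i.
        """
        return {j: [j - i for i in range(1, j + 1)] for j in range(n + 1)}

    graph = cut_rod_subproblem_graph(4)
    print(graph)   # {0: [], 1: [0], 2: [1, 0], 3: [2, 1, 0], 4: [3, 2, 1, 0]}
    print(len(graph), sum(len(out) for out in graph.values()))   # 5 vertices, 10 arcs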

  26. The above dynamic-programming solutions of the rod cutting problem give only the value of the optimal revenue, not an actual solution (how to cut the rod). The following extended version of Bottom-Up-Cut-Rod not only returns the optimal value but also records a choice that led to that value.

  27. 1: procedure Extended-Bottom-Up-Cut-Rod(p, n)
      2:   let r[0 . . n] and s[0 . . n] be new arrays
      3:   r[0] = 0
      4:   for j = 1 to n do
      5:     q = −∞
      6:     for i = 1 to j do
      7:       if q < p[i] + r[j − i] then
      8:         q = p[i] + r[j − i]
      9:         s[j] = i    ▷ s[j] records the size of the first piece cut from a rod of size j
      10:      end if
      11:    end for
      12:    r[j] = q
      13:  end for
      14:  return r and s
      15: end procedure
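
A Python sketch of the extended procedure, together with a small helper (not in the slides) that uses the array s to print one optimal cutting:

    def extended_bottom_up_cut_rod(p, n):
        """Bottom-up cut-rod that also records the size of the first piece.

        Returns (r, s): r[j] is the best revenue for a rod of length j, and
        s[j] is the size of the first piece in an optimal cutting of length j.
        """
        r = [0] * (n + 1)
        s = [0] * (n + 1)
        for j in range(1, n + 1):
            q = float("-inf")
            for i in range(1, j + 1):
                if q < p[i] + r[j - i]:
                    q = p[i] + r[j - i]
                    s[j] = i
            r[j] = q
        return r, s

    def print_cut_rod_solution(p, n):
        """Print the piece sizes of one optimal cutting by peeling off s[n] repeatedly."""
        r, s = extended_bottom_up_cut_rod(p, n)
        while n > 0:
            print(s[n], end=" ")
            n -= s[n]
        print()

    prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
    print_cut_rod_solution(prices, 7)   # prints: 1 6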
