Dynamic Programming: Introduction, Weighted Interval Scheduling
Tyler Moore
CSE 3353, SMU, Dallas, TX
Lecture 15

Some slides created by or adapted from Dr. Kevin Wayne. For more information see
http://www.cs.princeton.edu/~wayne/kleinberg-tardos. Some code reused from
Python Algorithms by Magnus Lie Hetland.

Algorithm design paradigms

Greedy. Build up a solution incrementally, myopically optimizing some local
criterion.

Divide-and-conquer. Break up a problem into independent subproblems, solve
each subproblem, and combine solutions to subproblems to form a solution to
the original problem.

Dynamic programming. Break up a problem into a series of overlapping
subproblems, and build up solutions to larger and larger subproblems. A fancy
name for caching away intermediate results in a table for later reuse.

Dynamic programming history

Bellman. Pioneered the systematic study of dynamic programming in the 1950s.

Etymology.
- Dynamic programming = planning over time.
- The Secretary of Defense was hostile to mathematical research.
- Bellman sought an impressive name to avoid confrontation.

Dynamic programming applications

Areas.
- Bioinformatics
- Control theory
- Information theory
- Operations research
- Computer science: theory, graphics, AI, compilers, systems, ...

Some famous dynamic programming algorithms.
- Unix diff for comparing two files
- Viterbi for hidden Markov models
- De Boor for evaluating spline curves
- Smith-Waterman for genetic sequence alignment
- Bellman-Ford for shortest-path routing in networks
- Cocke-Kasami-Younger for parsing context-free grammars
Recurrence relations

Recall that recurrence relations are equations defined in terms of themselves.
They are useful because many natural functions and recursive functions can
easily be expressed as recurrences.

    Recurrence            Solution     Example application
    T(n) = T(n/2) + 1     Θ(lg n)      Binary search
    T(n) = T(n/2) + n     Θ(n)         Randomized quickselect (avg. case)
    T(n) = 2T(n/2) + 1    Θ(n)         Tree traversal
    T(n) = 2T(n/2) + n    Θ(n lg n)    Mergesort
    T(n) = T(n-1) + 1     Θ(n)         Processing a sequence
    T(n) = T(n-1) + n     Θ(n^2)       Handshake problem
    T(n) = 2T(n-1) + 1    Θ(2^n)       Towers of Hanoi
    T(n) = 2T(n-1) + n    Θ(2^n)
    T(n) = nT(n-1)        Θ(n!)

Computing Fibonacci numbers

The Fibonacci sequence can be defined using the following recurrence:

    F_n = F_{n-1} + F_{n-2},  F_0 = 0,  F_1 = 1

For example:

    F_2 = F_1 + F_0 = 1 + 0 = 1
    F_3 = F_2 + F_1 = 1 + 1 = 2
    F_4 = F_3 + F_2 = 2 + 1 = 3
    F_5 = F_4 + F_3 = 3 + 2 = 5
    F_6 = F_5 + F_4 = 5 + 3 = 8

Computing Fibonacci numbers with recursion

    def fib(i):
        if i < 2:
            return i
        return fib(i - 1) + fib(i - 2)

[Figure: recursion tree for F(6), repeatedly expanding F(5), F(4), ..., F(0);
the same subtrees appear many times.]

We know that T(n) = 2T(n-1) + 1 = Θ(2^n). It turns out the tighter recurrence
is T(n) = T(n-1) + T(n-2) ≈ Θ(1.6^n). Since our recursion tree has 0 and 1 as
leaves, computing F_n requires ≈ 1.6^n recursive function calls!
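As a quick sanity check on the ≈ 1.6^n claim, here is a minimal sketch (not
from the slides) that counts the calls the naive recursion makes; the call
count itself satisfies C(n) = C(n-1) + C(n-2) + 1, which grows like φ^n with
φ ≈ 1.618.

    # Sketch (not from the slides): count calls made by the naive recursion.
    def fib_count(i):
        """Return (F_i, number of recursive calls used to compute it)."""
        if i < 2:
            return i, 1
        a, ca = fib_count(i - 1)
        b, cb = fib_count(i - 2)
        return a + b, ca + cb + 1

    for n in (10, 20, 30):
        value, calls = fib_count(n)
        print(n, value, calls)   # calls grow by a factor of ~1.6 per step in n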
Computing Fibonacci numbers with memoization (manual)

    def fib_memo(i):
        mem = {}                  # dict of cached values
        def fib(x):
            if x < 2:
                return x
            # check if already computed
            if x in mem:
                return mem[x]
            # only if not already computed
            mem[x] = fib(x - 1) + fib(x - 2)
            return mem[x]
        return fib(i)

Recursion tree for Fibonacci function with memoization

[Figure: recursion tree for F(6) with memoization; black nodes mark subtrees
that are no longer computed because their values are already cached.]

Caching reduced the number of operations from exponential to linear time!

Code for the memo wrapper (automatic)

    from functools import wraps

    def memo(func):
        cache = {}                         # Stored subproblem solutions
        @wraps(func)                       # Make wrap look like func
        def wrap(*args):                   # The memoized wrapper
            if args not in cache:          # Not already computed?
                cache[args] = func(*args)  # Compute & cache the solution
            return cache[args]             # Return the cached solution
        return wrap                        # Return the wrapper

    >>> @memo
    ... def fib(i):
    ...     if i < 2:
    ...         return i
    ...     return fib(i - 1) + fib(i - 2)
    >>> fib(100)
    354224848179261915075L

- An example of Python's capability as a functional language
- Provides cache functionality for recursive functions in general
- What sort of magic is going on here? The memo function takes a function as
  input, then "wraps" the function with the added caching functionality.
  Placing @memo above a function definition is what applies memo as a
  decorator; the @wraps line simply makes the wrapper look like the original
  function (preserving its name and docstring).
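A side note not on the original slides: modern Python ships an equivalent
decorator, functools.lru_cache, so in practice you rarely need to write memo
yourself. A minimal sketch:

    # Sketch (not from the slides): the standard library's built-in memoizer.
    from functools import lru_cache

    @lru_cache(maxsize=None)   # unbounded cache, like the memo dict above
    def fib(i):
        if i < 2:
            return i
        return fib(i - 1) + fib(i - 2)

    print(fib(100))   # 354224848179261915075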
Discussion of memoization

- Even if the memo code is a bit of a mystery, don't worry: you can still use
  it by including the code on the last slide with yours, then putting @memo on
  the line immediately before your function definition.
- If your programming language does not support this kind of automatic
  memoization, you can either memoize manually or turn to dynamic programming.
- Dynamic programming converts recursive code to an iterative version that
  executes efficiently.

Computing Fibonacci numbers with dynamic programming

    def fib_iter(i):
        if i < 2:
            return i
        # store the sequence in a list
        mem = [0, 1]
        for j in range(2, i + 1):
            # incrementally build the sequence
            mem.append(mem[j - 1] + mem[j - 2])
        return mem[-1]

Avoiding recomputation by storing partial results

The trick to dynamic programming is to see that the naive recursive algorithm
repeatedly computes the same subproblems over and over again. If so, storing
the answers to them in a table instead of recomputing them can lead to an
efficient algorithm. Thus we must first hunt for a correct recursive
algorithm; later we can worry about speeding it up by using a results matrix.

6. Dynamic Programming I

- Weighted interval scheduling
- Segmented least squares
- Knapsack problem
- RNA secondary structure
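One refinement worth noting (not on the slides): the bottom-up version only
ever consults the two most recent entries, so the O(n) list can shrink to two
variables. A sketch:

    # Sketch (not from the slides): reduce the fib table to O(1) space.
    def fib_iter_o1(i):
        a, b = 0, 1            # invariant: a = F(k), b = F(k+1)
        for _ in range(i):
            a, b = b, a + b    # slide the window forward one step
        return a

    assert [fib_iter_o1(n) for n in range(7)] == [0, 1, 1, 2, 3, 5, 8]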
Weighted interval scheduling

Weighted interval scheduling problem.
- Job j starts at s_j, finishes at f_j, and has weight or value v_j.
- Two jobs are compatible if they don't overlap.
- Goal: find a maximum-weight subset of mutually compatible jobs.

[Figure: eight jobs a-h laid out on a timeline from 0 to 11.]

Earliest-finish-time-first greedy algorithm

- Consider jobs in ascending order of finish time.
- Add a job to the subset if it is compatible with previously chosen jobs.

Recall. The greedy algorithm is correct if all weights are 1.

Observation. The greedy algorithm fails spectacularly for the weighted
version.

[Figure: job b with weight 999 spans most of the timeline 0 to 11, while job a
with weight 1 finishes first; greedy takes a and thereby blocks b.]

Weighted interval scheduling: notation

Notation. Label jobs by finishing time: f_1 ≤ f_2 ≤ ... ≤ f_n.

Def. p(j) = largest index i < j such that job i is compatible with j.

Ex. p(8) = 5, p(7) = 3, p(2) = 0.

[Figure: jobs 1-8 on a timeline from 0 to 11 illustrating p(j).]

Dynamic programming: binary choice

Notation. OPT(j) = value of an optimal solution to the problem consisting of
job requests 1, 2, ..., j.

Case 1. OPT selects job j.
- Collect profit v_j.
- Can't use incompatible jobs { p(j) + 1, p(j) + 2, ..., j - 1 }.
- Must include an optimal solution to the problem consisting of the remaining
  compatible jobs 1, 2, ..., p(j). (This is the optimal substructure property;
  proof via exchange argument.)

Case 2. OPT does not select job j.
- Must include an optimal solution to the problem consisting of the remaining
  compatible jobs 1, 2, ..., j - 1.

    OPT(j) = 0                                      if j = 0
             max { v_j + OPT(p(j)), OPT(j - 1) }    otherwise
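To make the recurrence concrete, here is a short bottom-up sketch (not from
the slides); the (start, finish, value) tuple representation and the
bisect-based computation of p(j) are illustrative choices, not part of the
original slides.

    # Sketch (not from the slides): weighted interval scheduling via the OPT
    # recurrence above. p(j) is found by binary search over sorted finish times.
    from bisect import bisect_right

    def max_weight_schedule(jobs):
        jobs = sorted(jobs, key=lambda job: job[1])      # sort by finish time
        finish = [f for _, f, _ in jobs]
        # p[j-1] = largest 1-indexed i < j with finish_i <= start_j, i.e. the
        # count of earlier-finishing jobs compatible with job j.
        p = [bisect_right(finish, jobs[j][0]) for j in range(len(jobs))]

        opt = [0] * (len(jobs) + 1)                      # opt[j] = OPT(j)
        for j in range(1, len(jobs) + 1):
            s, f, v = jobs[j - 1]
            opt[j] = max(v + opt[p[j - 1]],              # Case 1: take job j
                         opt[j - 1])                     # Case 2: skip job j
        return opt[-1]

    # Tiny check: greedy's bad case, a short weight-1 job vs. a long weight-999 job.
    print(max_weight_schedule([(0, 3, 1), (1, 10, 999)]))   # 999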