W. I. S.: Iterative Table Fill-In

FindScheduleLength(I, p)
    ℓ[0] = 0
    for j = 1 to n
        do ℓ[j] = max(ℓ[j – 1], ℓ[p[j]] + |I[j]|)
    return ℓ[n]

Running time: O(n)

Advantage over memoization:
• No need for recursion.
• The algorithm is often simpler.

Disadvantage over memoization:
• We need to worry about the order in which the table entries are computed: all entries needed to compute the current entry must be computed first.
• Memoization computes table entries as needed.
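The table fill-in above can be sketched in Python. Assumptions (not from the slide): intervals are 1-indexed via a dummy entry at position 0, w[j] plays the role of |I[j]| (the weight of interval j, with intervals already sorted by ending time), and p[j] is the precomputed predecessor array.

```python
# A minimal sketch of FindScheduleLength under the assumptions above.
def find_schedule_length(w, p):
    n = len(w) - 1          # w[0] is a dummy entry
    ell = [0] * (n + 1)     # ell[j] = max total weight using intervals 1..j
    for j in range(1, n + 1):
        ell[j] = max(ell[j - 1], ell[p[j]] + w[j])
    return ell[n]
```

For example, four intervals with weights 2, 4, 4, 7 where only interval 4 is compatible with interval 1 (p = [0, 0, 0, 0, 1]) give an optimal schedule of weight 9.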
W. I. S.: Computing the Set of Intervals

FindSchedule(I, p)
    ℓ[0] = 0
    S[0] = [ ]
    for j = 1 to n
        do if ℓ[j – 1] > ℓ[p[j]] + |I[j]|
            then ℓ[j] = ℓ[j – 1]
                 S[j] = S[j – 1]
            else ℓ[j] = ℓ[p[j]] + |I[j]|
                 S[j] = [I[j]] ++ S[p[j]]
    return S[n]

Running time: O(n)

This computes the sequence of intervals ordered from last to first. This list is, of course, easy to reverse in linear time.
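A Python sketch of FindSchedule, under the same assumptions as before: w[j] stands in for |I[j]| and p[j] is the predecessor array, both 1-indexed with dummy entries at 0. Plain Python lists of chosen indices stand in for the slide's lists S[j]; copying them keeps the sketch simple, though the slide's ++ on shared list tails is what makes the pseudocode run in O(n) time.

```python
# Returns the indices of the chosen intervals, ordered from last to first.
def find_schedule(w, p):
    n = len(w) - 1
    ell = [0] * (n + 1)
    S = [[] for _ in range(n + 1)]
    for j in range(1, n + 1):
        if ell[j - 1] > ell[p[j]] + w[j]:
            ell[j], S[j] = ell[j - 1], S[j - 1]
        else:
            ell[j] = ell[p[j]] + w[j]
            S[j] = [j] + S[p[j]]    # prepend j, as [I[j]] ++ S[p[j]] does
    return S[n]
```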
W. I. S.: The Missing Details

What's missing?
• Sort the intervals by their ending times.
• Compute the predecessor array p.

Solution:
• Sorting is easily done in O(n lg n) time.
• To compute p[j], perform binary search with I[j]'s starting time on the sorted array of ending times.

Theorem: The weighted interval scheduling problem can be solved in O(n lg n) time.
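The missing preprocessing can be sketched as follows. Assumptions (not from the slide): intervals are (start, end, weight) tuples, and the returned predecessor array is 1-indexed to match the pseudocode, with p[j] = 0 when no earlier interval is compatible.

```python
# Sort by ending time, then binary-search each start time in the
# sorted array of ending times -- O(n lg n) overall.
from bisect import bisect_right

def preprocess(intervals):
    intervals = sorted(intervals, key=lambda iv: iv[1])  # by ending time
    ends = [iv[1] for iv in intervals]
    # p[j] = largest index i with end(I_i) <= start(I_j), or 0 if none
    p = [0] * (len(intervals) + 1)
    for j, (start, _, _) in enumerate(intervals, start=1):
        p[j] = bisect_right(ends, start)
    return intervals, p
```

Here bisect_right counts how many ending times are at most I[j]'s starting time, which is exactly the index of the latest compatible interval.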
The Dynamic Programming Technique

The technique:
• Develop a recurrence expressing the optimal solution for a given problem instance in terms of optimal solutions for smaller problem instances.
• Evaluate this recurrence
  • recursively using memoization or
  • using iterative table fill-in.

For this to work, the problem must exhibit the optimal substructure property: the optimal solution to a problem instance must be composed of optimal solutions to smaller problem instances.

A speed-up over the naïve recursive algorithm is achieved if the problem exhibits overlapping subproblems: the same subproblem occurs over and over again in the recursive evaluation of the recurrence.
Developing a Dynamic Programming Algorithm

Step 1: Think top-down:
• Consider an optimal solution (without worrying about how to compute it).
• Identify how the optimal solution of any problem instance decomposes into optimal solutions to smaller problem instances.
• Write down a recurrence based on this analysis.

Step 2: Formulate the algorithm, which computes the solution bottom-up:
• Since an optimal solution depends on optimal solutions to smaller problem instances, we need to compute those first.
Sequence Alignment

Given the search term "Dalhusy Computer Science", Google suggests the correction "Dalhousie Computer Science".

Can Google read your mind? No! They use a clever algorithm to match your mistyped query against the phrases they have in their database. "Dalhousie" is the closest match to "Dalhusy" they find.

What's a good similarity criterion?
Sequence Alignment

Problem: Given two strings X = x_1 x_2 · · · x_m and Y = y_1 y_2 · · · y_n, extend them to two strings X′ = x′_1 x′_2 · · · x′_t and Y′ = y′_1 y′_2 · · · y′_t of the same length by inserting gaps so that the following dissimilarity measure D(X′, Y′) is minimized:

    D(X′, Y′) = Σ_{i=1}^{t} d(x′_i, y′_i)

    d(x, y) = δ        if x = ␣ or y = ␣  (gap penalty)
              µ_{x,y}  otherwise          (mismatch penalty)

Example:
    Dalh␣usy␣
    Dalhousie
    D(X′, Y′) = 2δ + µ_{y,i}

Another (more important?) application: DNA sequence alignment to measure the similarity between different DNA samples.
Sequence Alignment: Problem Analysis

Assume (x′_1 x′_2 · · · x′_t, y′_1 y′_2 · · · y′_t) is an optimal alignment for (x_1 x_2 · · · x_m, y_1 y_2 · · · y_n). What choices do we have for the final pair (x′_t, y′_t)?

• x′_t = x_m and y′_t = y_n

  (x′_1 x′_2 · · · x′_{t–1}, y′_1 y′_2 · · · y′_{t–1}) must be an optimal alignment for (x_1 x_2 · · · x_{m–1}, y_1 y_2 · · · y_{n–1}).

  Assume there's a better alignment (x″_1 x″_2 · · · x″_s, y″_1 y″_2 · · · y″_s) with dissimilarity

      Σ_{i=1}^{s} d(x″_i, y″_i) < Σ_{i=1}^{t–1} d(x′_i, y′_i).

  Then (x″_1 x″_2 · · · x″_s x′_t, y″_1 y″_2 · · · y″_s y′_t) is an alignment for (x_1 x_2 · · · x_m, y_1 y_2 · · · y_n) with dissimilarity

      Σ_{i=1}^{s} d(x″_i, y″_i) + d(x′_t, y′_t) < Σ_{i=1}^{t–1} d(x′_i, y′_i) + d(x′_t, y′_t) = Σ_{i=1}^{t} d(x′_i, y′_i),

  a contradiction.
Sequence Alignment: Problem Analysis

Assume (x′_1 x′_2 · · · x′_t, y′_1 y′_2 · · · y′_t) is an optimal alignment for (x_1 x_2 · · · x_m, y_1 y_2 · · · y_n). What choices do we have for the final pair (x′_t, y′_t)?

• x′_t = x_m and y′_t = y_n
  (x′_1 x′_2 · · · x′_{t–1}, y′_1 y′_2 · · · y′_{t–1}) must be an optimal alignment for (x_1 x_2 · · · x_{m–1}, y_1 y_2 · · · y_{n–1}).

• x′_t = x_m and y′_t = ␣
  (x′_1 x′_2 · · · x′_{t–1}, y′_1 y′_2 · · · y′_{t–1}) must be an optimal alignment for (x_1 x_2 · · · x_{m–1}, y_1 y_2 · · · y_n).

• x′_t = ␣ and y′_t = y_n
  (x′_1 x′_2 · · · x′_{t–1}, y′_1 y′_2 · · · y′_{t–1}) must be an optimal alignment for (x_1 x_2 · · · x_m, y_1 y_2 · · · y_{n–1}).
Sequence Alignment: The Recurrence

Let D(i, j) be the dissimilarity of the strings x_1 x_2 · · · x_i and y_1 y_2 · · · y_j. We are interested in D(m, n).

Recurrence:

    D(i, j) = δ · j    if i = 0
              δ · i    if j = 0
              min(D(i – 1, j – 1) + µ_{x_i, y_j}, D(i, j – 1) + δ, D(i – 1, j) + δ)    otherwise
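Before the table-fill-in version, the recurrence can be evaluated directly by memoization, as discussed earlier in the deck. A sketch, under the assumption (not from the slides) that the caller supplies the mismatch penalty as a function mu(a, b) and the gap penalty as a number delta:

```python
# Memoized evaluation of D(i, j); strings are 0-indexed in Python,
# so x_i corresponds to X[i - 1].
from functools import lru_cache

def dissimilarity(X, Y, mu, delta):
    @lru_cache(maxsize=None)
    def D(i, j):
        if i == 0:
            return delta * j
        if j == 0:
            return delta * i
        return min(D(i - 1, j - 1) + mu(X[i - 1], Y[j - 1]),
                   D(i, j - 1) + delta,
                   D(i - 1, j) + delta)
    return D(len(X), len(Y))
```

With unit penalties (µ = 0 for equal characters, 1 otherwise, δ = 1) this is exactly edit distance, so D("Dalhusy", "Dalhousie") = 3: two gaps plus one mismatch, matching the earlier example.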
Sequence Alignment: The Algorithm

SequenceAlignment(X, Y, µ, δ)
    D[0, 0] = 0
    A[0, 0] = [ ]
    for i = 1 to m
        do D[i, 0] = D[i – 1, 0] + δ
           A[i, 0] = [(X[i], ␣)] ++ A[i – 1, 0]
    for j = 1 to n
        do D[0, j] = D[0, j – 1] + δ
           A[0, j] = [(␣, Y[j])] ++ A[0, j – 1]
    for i = 1 to m
        do for j = 1 to n
            do D[i, j] = D[i – 1, j – 1] + µ[X[i], Y[j]]
               A[i, j] = [(X[i], Y[j])] ++ A[i – 1, j – 1]
               if D[i, j] > D[i – 1, j] + δ
                   then D[i, j] = D[i – 1, j] + δ
                        A[i, j] = [(X[i], ␣)] ++ A[i – 1, j]
               if D[i, j] > D[i, j – 1] + δ
                   then D[i, j] = D[i, j – 1] + δ
                        A[i, j] = [(␣, Y[j])] ++ A[i, j – 1]
    return A[m, n]

Running time: O(mn)

Again, the sequence alignment is reported back-to-front and can be reversed in O(m + n) time.
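A Python sketch of SequenceAlignment. Assumptions as before: mu(a, b) is the mismatch penalty and delta the gap penalty, and GAP is a sentinel standing in for the slides' ␣. Unlike the pseudocode, pairs are appended front-to-back, so no final reversal is needed; storing a full table of alignment lists keeps the sketch short at the price of extra memory and copying.

```python
GAP = "-"

def sequence_alignment(X, Y, mu, delta):
    m, n = len(X), len(Y)
    D = [[0] * (n + 1) for _ in range(m + 1)]          # D[i][j] = cost
    A = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(1, m + 1):                           # align X[1..i] to gaps
        D[i][0] = D[i - 1][0] + delta
        A[i][0] = A[i - 1][0] + [(X[i - 1], GAP)]
    for j in range(1, n + 1):                           # align gaps to Y[1..j]
        D[0][j] = D[0][j - 1] + delta
        A[0][j] = A[0][j - 1] + [(GAP, Y[j - 1])]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = D[i - 1][j - 1] + mu(X[i - 1], Y[j - 1])
            A[i][j] = A[i - 1][j - 1] + [(X[i - 1], Y[j - 1])]
            if D[i][j] > D[i - 1][j] + delta:           # gap in Y is cheaper
                D[i][j] = D[i - 1][j] + delta
                A[i][j] = A[i - 1][j] + [(X[i - 1], GAP)]
            if D[i][j] > D[i][j - 1] + delta:           # gap in X is cheaper
                D[i][j] = D[i][j - 1] + delta
                A[i][j] = A[i][j - 1] + [(GAP, Y[j - 1])]
    return D[m][n], A[m][n]
```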
Optimal Binary Search Trees

Balanced binary search trees (red-black trees, AVL trees, . . . ) guarantee O(lg n) time to find an element.

Can we do better? Not in the worst case.

Let x_1 < x_2 < · · · < x_n be the elements to be stored in the tree. Let P = {p_1, p_2, . . . , p_n} be the probabilities of searching for these elements.

For a binary search tree T, let d_T(x_i) denote the depth of element x_i in T. The cost of searching for element x_i is in O(d_T(x_i)). The expected cost of a random query is in O(C_P(T)), where

    C_P(T) = Σ_{i=1}^{n} p_i d_T(x_i).

[Figure: an example binary search tree on x_1, . . . , x_13 rooted at x_6.]

An optimal binary search tree is a binary search tree T that minimizes C_P(T).
Balancing Is Not Necessarily Optimal

Assume n = 2^k – 1 and p_i = 2^{–i} for all 1 ≤ i ≤ n – 1, and p_n = 2^{–n+1}.

Balanced tree: x_1 is at depth lg n.
⇒ Expected cost ≥ (lg n)/2.

Long path: the depth of x_i is i.
⇒ Expected cost = Σ_{i=1}^{n–1} i/2^i + n/2^{n–1}
                < Σ_{i=1}^{∞} i/2^i + n/2^{n–1}
                = (1/2)/(1 – 1/2)^2 + n/2^{n–1}
                = 2 + n/2^{n–1}
                < 3.
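The comparison is easy to verify numerically. A small sketch (not from the slides), using 1-based depths so that x_1 sits at depth lg n = k in the balanced tree and the depth of x_i on the long path is i:

```python
def long_path_cost(k):
    # Exact expected cost of the long path: p_i = 2^{-i} for i < n,
    # p_n = 2^{-(n-1)}, and the depth of x_i is i.
    n = 2**k - 1
    return sum(i * 2.0**-i for i in range(1, n)) + n * 2.0**(-(n - 1))

def balanced_lower_bound(k):
    # x_1 sits at depth k in the perfectly balanced tree and is searched
    # with probability 1/2, so the expected cost is at least k/2.
    return k / 2
```

Already for moderate k the long path's expected cost stays below 3 while the balanced tree's lower bound k/2 grows without limit.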
Optimal Binary Search Trees: Problem Analysis

The structure of a binary search tree: assume we want to store elements x_ℓ, x_{ℓ+1}, . . . , x_r.

[Figure: a tree with root x_m, left subtree T_ℓ storing x_ℓ, x_{ℓ+1}, . . . , x_{m–1}, and right subtree T_r storing x_{m+1}, x_{m+2}, . . . , x_r.]

Let p_{i,j} = Σ_{h=i}^{j} p_h. Then

    C_P(T) = p_{ℓ,r} + C_P(T_ℓ) + C_P(T_r)

⇒ T_ℓ and T_r are optimal search trees for x_ℓ, x_{ℓ+1}, . . . , x_{m–1} and x_{m+1}, x_{m+2}, . . . , x_r, respectively.

We need to figure out which element to store at the root!
Optimal Binary Search Trees: The Recurrence

Let C(ℓ, r) be the cost of an optimal binary search tree for x_ℓ, x_{ℓ+1}, . . . , x_r. We are interested in C(1, n).

    C(ℓ, r) = 0                                                     if r < ℓ
              p_{ℓ,r} + min_{ℓ ≤ m ≤ r} (C(ℓ, m – 1) + C(m + 1, r))  otherwise
Optimal Binary Search Trees: The Algorithm

OptimalBinarySearchTree(X, P)
    for i = 1 to n
        do P′[i, i] = P[i]
           for j = i + 1 to n
               do P′[i, j] = P′[i, j – 1] + P[j]
    for i = 1 to n + 1
        do C[i, i – 1] = 0
           T[i, i – 1] = ∅
    for ℓ = 0 to n – 1
        do for i = 1 to n – ℓ
            do C[i, i + ℓ] = ∞
               for j = i to i + ℓ
                   do if C[i, i + ℓ] > C[i, j – 1] + C[j + 1, i + ℓ]
                       then C[i, i + ℓ] = C[i, j – 1] + C[j + 1, i + ℓ]
                            T[i, i + ℓ] = new node storing X[j]
                            T[i, i + ℓ].left = T[i, j – 1]
                            T[i, i + ℓ].right = T[j + 1, i + ℓ]
               C[i, i + ℓ] = C[i, i + ℓ] + P′[i, i + ℓ]
    return T[1, n]

Lemma: An optimal binary search tree for n elements can be computed in O(n^3) time.
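A Python sketch of the algorithm. Assumptions (not from the slide): probs is a 0-indexed list of search probabilities; instead of node objects, the sketch records the chosen root index for each subproblem, from which the tree is easy to rebuild; and prefix sums replace the P′ table so that each p_{i,r} is available in O(1).

```python
def optimal_bst(probs):
    n = len(probs)
    pre = [0.0]                     # pre[j] = p_1 + ... + p_j
    for q in probs:
        pre.append(pre[-1] + q)
    C = [[0.0] * (n + 2) for _ in range(n + 2)]   # C[l][r], base C[i][i-1] = 0
    root = [[0] * (n + 2) for _ in range(n + 2)]
    for length in range(n):                        # ℓ in the pseudocode
        for i in range(1, n - length + 1):
            r = i + length
            C[i][r] = float("inf")
            for j in range(i, r + 1):              # candidate roots x_j
                cand = C[i][j - 1] + C[j + 1][r]
                if cand < C[i][r]:
                    C[i][r] = cand
                    root[i][r] = j
            C[i][r] += pre[r] - pre[i - 1]         # add p_{i,r}
    return C[1][n], root
```

Three nested loops over O(n) values each give the O(n^3) bound of the lemma.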
Single-Source Shortest Paths

Dijkstra's algorithm may fail in the presence of negative-weight edges:

[Figure: a small graph with a negative-weight edge (–3) on which Dijkstra's algorithm computes an incorrect distance, shown next to the correct distances.]

We need an algorithm that can deal with negative-length edges.
Single-Source Shortest Paths: Problem Analysis

Lemma: If P = ⟨u_0, u_1, . . . , u_k⟩ is a shortest path from u_0 = s to u_k = v, then P′ = ⟨u_0, u_1, . . . , u_{k–1}⟩ is a shortest path from u_0 to u_{k–1}.

[Figure: the path P from s = u_0 through u_{k–1} to v = u_k; its prefix is a shortest path from u_0 to u_{k–1}.]

Observation: P′ has one less edge than P.
Single-Source Shortest Paths: The Recurrence

Let d_i(s, v) be the length of the shortest path P_i(s, v) from s to v that has at most i edges; d_i(s, v) = ∞ if there is no path with at most i edges from s to v. Then d(s, v) = d_{n–1}(s, v).

Recurrence: If i = 0, then there exists a path from s to v with at most i edges only if v = s:

    d_0(s, v) = 0    if v = s
                ∞    otherwise

If i > 0, then
• P_i(s, v) has at most i – 1 edges, or
• P_i(s, v) consists of a shortest path with at most i – 1 edges from s to some in-neighbour u of v, followed by the edge (u, v):

    d_i(s, v) = min(d_{i–1}(s, v), min_{(u,v) ∈ E} (d_{i–1}(s, u) + w(u, v)))
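Filling in this recurrence iteratively is the standard Bellman-Ford algorithm. A sketch, assuming (not from the slides) that the graph is given as a list of directed edges (u, v, weight) over vertices 0 . . n – 1 and that it contains no negative cycles:

```python
INF = float("inf")

def shortest_paths(n, edges, s):
    d = [INF] * n
    d[s] = 0                      # d_0: only s is reachable with 0 edges
    for _ in range(n - 1):        # after pass i, d is at most d_i(s, .)
        for u, v, w in edges:
            if d[u] + w < d[v]:   # keep d_{i-1}(s, v) or relax via (u, v)
                d[v] = d[u] + w
    return d
```

Updating d in place may propagate distances faster than the layered recurrence, which is harmless: each entry only ever decreases toward d(s, v), and after n – 1 passes it equals d_{n–1}(s, v).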