CS-270 Algorithms 2013/14
Solution to Coursework I
Oliver Kullmann
Swansea University Computer Science
January 8, 2014

1 Rules

Formulate, in your own words (really important), a kind of "framework" for working out Θ-expressions:

1. These rules should be tailored to the task at hand: not a general theory, but something workable which suffices for the task (Section 2).
2. Try to be as precise as possible. In Section 2, you need to state which rules you applied, and the application should always be obvious.
3. Give examples for each rule.
4. You need to justify these rules, but it definitely doesn't need to be a mathematical explanation: you might cite the book, or even some Internet sources might be used (only if they are really authoritative; sorry, but other students don't count here).

Here are basic rules with examples; if you have problems reading such language, then at least make sure that you can handle the simplifications as in Section 2.¹

1.1 Simplification

First, two rules for removing some parts of a term. We can drop constant factors:

    α · f(n) = Θ(f(n))    (1)

for a constant number α > 0. So for example 5n^2 = Θ(n^2). We can also drop smaller addends:

    f(n) + g(n) = Θ(g(n))    (2)

for f(n) = O(g(n)). So for example we have n = O(n^2), and thus n + n^2 = Θ(n^2).

¹ It might be that reflecting on all that gives you headaches; in that case don't worry too much about it, and concentrate on practical handling of Θ and Ω.
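The two simplification rules can be spot-checked numerically: if f(n) = Θ(g(n)), then the ratio f(n)/g(n) must eventually stay between two positive constants. A minimal Python sketch (illustrative only, not part of the coursework):

```python
# Numeric spot check of rules (1) and (2): if f(n) = Theta(g(n)),
# the ratio f(n)/g(n) eventually stays between two positive constants.

def ratio(f, g, n):
    """Evaluate f(n)/g(n) at a single point n."""
    return f(n) / g(n)

# Rule (1): 5n^2 = Theta(n^2); the ratio is exactly the constant 5.
for n in (10, 1000, 100000):
    assert ratio(lambda m: 5 * m**2, lambda m: m**2, n) == 5.0

# Rule (2): n = O(n^2), so n + n^2 = Theta(n^2); the ratio tends to 1.
assert abs(ratio(lambda m: m + m**2, lambda m: m**2, 10**6) - 1.0) < 1e-5
```

Of course a bounded ratio at a few sample points proves nothing; it only makes the rules tangible.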
1.2 O-Relations

Now come four rules for obtaining O-relations (needed for Rule (2)). First the trivial rule, which is applied without mentioning: if there is some integer n0 such that for all n ≥ n0 holds f(n) ≤ g(n), then we have f(n) = O(g(n)).

More substantial is that all logarithms are Θ-equal, and asymptotically smaller than any power:

    log_a(n) = Θ(lg(n)),    lg(n) = O(n^α),    (3)

for all constant numbers a > 1 and α > 0. So for example log_3(n) = Θ(lg n) and log_10(n) = O(√n).

Powers are simply ordered by the exponents:

    n^α ≤ n^β    (4)

for constant numbers 0 < α ≤ β; for example n^5 ≤ n^6.

Finally, ordering exponentials is also easy: they are asymptotically bigger than any power, and themselves ordered by the base:

    n^α = O(a^n),    a^n ≤ b^n    (5)

for all constant numbers α > 0 and 1 < a ≤ b. So for example n^(10^1000000) = O(1.0000001^n) and 3^n ≤ 4^n.

1.3 Structural rules

These structural rules are very basic, and thus they are harder to understand. If you don't understand them, chances are that you apply them nevertheless. Let's hope the best, and you might skip this subsection.

We need a general structural rule for replacement of Θ-equal terms in sums:

    f(n) = Θ(g(n))  ⟹  f(n) + h(n) = Θ(g(n) + h(n)).    (6)

For example, considering 5n + n^2, we set f(n) := 5n, g(n) := n, and h(n) := n^2. We know 5n = Θ(n), and thus 5n + n^2 = Θ(n + n^2).

Another basic structural rule is transitivity of Θ-equality:

    f(n) = Θ(g(n)) and g(n) = Θ(h(n))  ⟹  f(n) = Θ(h(n)).    (7)

For example, we have already derived 5n + n^2 = Θ(n + n^2), and we also know n + n^2 = Θ(n^2), and thus 5n + n^2 = Θ(n^2).

Finally there is the trivial rule (again applied without mentioning) that if there is an integer n0 such that for all n ≥ n0 holds f(n) = g(n), then we have f(n) = Θ(g(n)). For example (n + 3)^2 = n^2 + 6n + 9, and thus (n + 3)^2 = Θ(n^2 + 6n + 9).
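Rule (3) can also be made tangible numerically (a sketch, not part of the coursework): the base-change identity makes the ratio of any two logarithms a constant, while lg(n)/√n shrinks as n grows.

```python
import math

# Rule (3) numerically: logarithms to different bases differ only by a
# constant factor, and any logarithm is dominated by any power n^alpha.

# log_3(n) / lg(n) is the constant 1/lg(3) for every n > 1 (base change).
for n in (10, 10**3, 10**6):
    assert abs(math.log(n, 3) / math.log2(n) - 1 / math.log2(3)) < 1e-12

# lg(n) / sqrt(n) shrinks as n grows, supporting lg(n) = O(sqrt(n)).
ratios = [math.log2(n) / math.sqrt(n) for n in (10**2, 10**4, 10**6)]
assert ratios[0] > ratios[1] > ratios[2]
```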
1.4 Subtraction

To fully handle the examples in Section 2, we also need to do something about subtraction. This is now a bit subtle:

• If we have a sum f(n) + g(n), then Rule (6) says that we can replace g(n) by h(n) if g(n) = Θ(h(n)) holds, and we get f(n) + g(n) = Θ(f(n) + h(n)).
• However, for a difference f(n) − g(n) we have to be more careful, and replacing g(n) by h(n) in case of g(n) = Θ(h(n)) is no longer possible:

  1. n^2 − n^2 = 0
  2. n^2 − (1/2)·n^2 = (1/2)·n^2.

  Here n^2 = Θ((1/2)·n^2), yet the two differences, 0 and (1/2)·n^2, are not Θ-equal.

• The point is that in a difference f(n) − g(n) we can drop g(n) if g(n) is asymptotically "strictly smaller" than f(n). Instead of formulating this in general, we only formulate the rule we need:

    n^α − γ · n^β = Θ(n^α)    (8)

for 0 ≤ β < α and arbitrary γ.

1.5 Justifications

The range of acceptable justifications is rather large, and since I do not want to digress here into mathematical argumentation, only some incomplete arguments are given, which are good to know in any case. In the context of this module, don't worry too much about such justifications, but have a "critical consciousness", that is, know when you are applying rules, and try to be as precise as possible with them.

1. Rule (4), that is, for all natural numbers n ≥ 0 and all real numbers 0 < α ≤ β holds n^α ≤ n^β, is a basic mathematical fact, which should be quite intuitive.²
2. The part of Rule (5) which states a^n ≤ b^n is another basic mathematical fact, which again should be quite intuitive.
3. That all logarithms are Θ-equal (the first part of Rule (3)) follows by

       log_a(x) = log_b(x) / log_b(a).

   For example log_8(x) = lg(x)/3 (think about it; that's not too hard).
4. That lg(n) grows slower than n should be rather "obvious". That lg(n) grows slower than e.g. √n is just one step further. So also the second part of Rule (3) should "feel alright".
5. Comparing a power with an exponential: a general proof (from scratch) is not completely trivial. Let's consider the inequality n^a ≤ 2^n for n ≥ 2 and arbitrary a > 0 (note that writing down such an inequality doesn't mean it's true; in general we are looking for those n making it true):

       n^a ≤ 2^n  ⟺  lg(n^a) ≤ lg(2^n)  ⟺  a · lg(n) ≤ n  ⟸  n / lg(n) ≥ a.

   If we accept that lg(n) grows slower than n, then for n sufficiently large n / lg(n) ≥ a holds, that is, n^a ≤ 2^n holds for n sufficiently large. And thus n^a = O(2^n).

² However also notice that for real numbers 0 < x < 1 and for 0 < α < β ≤ 1 we have x^α > x^β.
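The last derivation can be checked on concrete numbers (a sketch, not part of the coursework): for a = 5, find the first n with n / lg(n) ≥ a and confirm that n^a ≤ 2^n holds from there on.

```python
import math

# Rule (5) in action for a = 5: the derivation above reduces n^a <= 2^n
# to n / lg(n) >= a, so find the first n where that threshold is met.
a = 5
n0 = next(n for n in range(2, 1000) if n / math.log2(n) >= a)

# From n0 onwards the inequality n^a <= 2^n indeed holds (spot check).
assert all(n**a <= 2**n for n in range(n0, n0 + 200))
```

This finds n0 = 23, and indeed 23^5 = 6436343 ≤ 2^23 = 8388608; below the threshold the inequality can fail (e.g. 10^5 > 2^10).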
2 Θ-simplifications

Give Θ-expressions which are as simple as possible, and state the rules (your rules!) you applied:

    5n^3 − 8n + 11n^4 = Θ(?)
    2^n + 3^n + n^500 = Θ(?)
    √n + log(n) = Θ(?)
    n · (n + 1) · (n + 2) = Θ(?)

The results are:

    5n^3 − 8n + 11n^4 = Θ(n^4)
    2^n + 3^n + n^500 = Θ(3^n)
    √n + log(n) = Θ(√n)
    n · (n + 1) · (n + 2) = Θ(n^3).

The rules used are:

1. (1), (2), (4), (6), (7), (8).
2. (2), (5), (6), (7).
3. (2), (3), (6), (7).
4. (1), (2), (4), (6), (7).

3 Θ-ordering

Sort the ten expressions into asymptotically ascending order (i.e., if f(x) is left of g(x), then you need to have f(x) = O(g(x))):

    lg n, √n − lg(n), 2^(n+1), √n + n, 3.4n, 4^n, n^n, n^2 + n^3, 2^n, n + n^2.

You also need to give some justification (could be a mathematical proof, but doesn't need to be) for each relation in this list, in the form of "f(x) = O(g(x)) because of ...". In some cases we even have a relation f(x) = Θ(g(x)), which needs to be stated and argued for.

The sorted order is, with Θ-equalities:

    lg n, √n − lg(n), √n + n, 3.4n, n + n^2, n^2 + n^3, 2^n, 2^(n+1), 4^n, n^n.
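This order can be sanity-checked numerically (a sketch, not part of the coursework): evaluating all ten expressions at one large n should give a non-decreasing sequence. This is only a spot check, not a proof; for small n some terms have not yet overtaken each other (e.g. √n − lg(n) < lg n for n below about 256).

```python
import math

# The ten expressions in the claimed ascending asymptotic order.
exprs = [
    lambda n: math.log2(n),                  # lg n
    lambda n: math.sqrt(n) - math.log2(n),   # sqrt(n) - lg(n)
    lambda n: math.sqrt(n) + n,              # sqrt(n) + n
    lambda n: 3.4 * n,                       # 3.4 n
    lambda n: n + n**2,
    lambda n: n**2 + n**3,
    lambda n: 2**n,
    lambda n: 2**(n + 1),
    lambda n: 4**n,
    lambda n: n**n,
]

# At n = 300 the values already appear in sorted order.
values = [f(300) for f in exprs]
assert values == sorted(values)
```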
To justify this, it is best to first determine the Θ-"normal form" of each term:

    lg n = Θ(lg n)
    √n − lg(n) = Θ(√n)    ((3), (8))
    √n + n = Θ(n)    ((2), (3))
    3.4n = Θ(n)    ((1))
    n + n^2 = Θ(n^2)    ((2), (4))
    n^2 + n^3 = Θ(n^3)    ((2), (4))
    2^n = Θ(2^n)
    2^(n+1) = Θ(2^n)    ((1))
    4^n = Θ(4^n)
    n^n = Θ(n^n).

In this way we see the two Θ-equalities. And that the order of the ten terms is valid we can see by our rules, or, for example, by noticing that for all n ≥ 16 holds

    lg n ≤ √n ≤ n ≤ n^2 ≤ n^3 ≤ 2^n ≤ 4^n ≤ n^n.

4 Recurrences

State in your own words a methodology for solving recurrences as considered by us:

1. Here the source shall be just the lecture notes (nothing else).
2. Say clearly what is given (the "problem"), and how to solve it, as a kind of (little) tutorial.
3. Give two examples for each case.

In Wikipedia we find "recurrences" under Recurrence relation; only the Fibonacci numbers there and Computer Science there are relevant to us (but you don't need to worry about that).

For us, recurrences come from divide-and-conquer algorithms, and express the running time T(n) in a form

    T(n) = a · T(n/b) + Θ(n^c),    (9)

where a, b, c can be real numbers, with a ≥ 1, b > 1 (otherwise no getting smaller!), and c ≥ 0. The underlying algorithm shall not concern us here. And the recurrence is usually just read off from the algorithm:

• a is the number of sub-cases to be handled (and so typically a is a natural number); if there is "no a", then a = 1.
• b is the factor by which the instance is smaller (and so typically also b is a natural number).
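Form (9) can be made concrete with a small experiment (a Python sketch under the assumed base case T(1) = 1; not part of the solution text). Take the merge-sort-like case a = 2, b = 2, c = 1, whose solution is Θ(n lg n): the ratio T(n)/(n lg n) should settle towards a constant.

```python
from functools import lru_cache

# Illustration of form (9) with a = 2, b = 2, c = 1:
# T(n) = 2*T(n/2) + n, with assumed base case T(1) = 1.
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# Expected solution Theta(n lg n): at n = 2^k the ratio T(n)/(n lg n)
# equals (k + 1)/k, which decreases towards the constant 1.
ratios = [T(2**k) / (2**k * k) for k in (5, 10, 15, 20)]
assert ratios == sorted(ratios, reverse=True)
```

Powers of two are used so that n/2 is always exact; for general n one would need floors or ceilings, which do not change the asymptotics.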