

  1. COMP 3403 — Algorithm Analysis Part 2 — Chapters 4 – 5 Jim Diamond CAR 409 Jodrey School of Computer Science Acadia University

  2. Chapter 4 Decrease and Conquer Jim Diamond, Jodrey School of Computer Science, Acadia University

  3. Chapter 4: Decrease and Conquer
     • Idea:
       (a) reduce an instance to one smaller instance of the same problem
       (b) solve the smaller instance
       (c) extend the solution of the smaller instance to obtain a solution to the original instance
     • We consider three categories of decrease and conquer:
       – decrease by a constant (which is usually 1)
         – e.g., insertion sort, DFS, BFS
       – decrease by a constant factor
         – e.g., binary search, exponentiation by squaring
       – decrease by a variable amount

  4. Example: Exponentiation
     • Consider the problem of computing a^n
     • Brute force: iteratively do n − 1 multiplications: a^n = a · a · ... · a
     • Divide and conquer (n > 1): a^n = a^⌈n/2⌉ · a^⌊n/2⌋
     • Decrease by a constant: a^n = a · a^(n−1)
     • Decrease by a constant factor:
       – a^n = (a^(n/2))²           if n is an even number > 0
       – a^n = (a^((n−1)/2))² · a   if n is an odd number > 1
       – a^n = a                    if n = 1
     • Which, if any, of these have the same efficiency?
     • Which, if any, is (are) the most efficient?
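Aside (not from the slides): a minimal Python sketch of the decrease-by-a-constant-factor scheme, which uses Θ(log n) multiplications rather than the n − 1 of the brute-force approach.

    def power(a, n):
        # Compute a**n for n >= 1 with the constant-factor recurrence.
        if n == 1:
            return a
        half = power(a, n // 2)      # a^(n/2) if n is even, a^((n-1)/2) if n is odd
        if n % 2 == 0:
            return half * half
        return half * half * a       # the extra factor of a for odd n

For example, power(2, 10) performs 4 multiplications rather than the 9 required by the brute-force approach.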

  5. Decrease by One: Insertion Sort
     • Algorithm InsertionSort(A[0..n-1]):
           for i = 1 to n - 1
               v = A[i]
               j = i - 1
               while j >= 0 and A[j] > v
                   A[j + 1] = A[j]
                   j = j - 1
               A[j + 1] = v
     • Decrease by one: when sorting A[0..k+1], you make use of the fact that A[0..k] is already sorted
     • Quick and dirty analysis: there are loops nested to a depth of 2, each of which has O(n) iterations, so you might expect an overall complexity of O(n²)
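Aside (not from the slides): the same algorithm as runnable Python, for experimenting with the comparison and move counts discussed on the next slides.

    def insertion_sort(A):
        # In-place insertion sort; mirrors the pseudocode above.
        for i in range(1, len(A)):
            v = A[i]
            j = i - 1
            while j >= 0 and A[j] > v:   # shift larger elements one slot right
                A[j + 1] = A[j]
                j -= 1
            A[j + 1] = v                 # drop v into the hole
        return A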

  6. Insertion Sort: 2
     • E.g., after k iterations of the outer loop, we have a situation like this:
           2 5 13 | 8 3 34 21
       That is, the numbers to the left of "|" are sorted, and the number in bold (here, 8) is the next number to be inserted in the sorted list
     • Similar to selection sort, but:
       – 1 in IS, 0 in SS
       – finding the next element to insert into the sorted list:
       – finding the location to insert the next element into the sorted list:
         – IS uses O(k) comparisons, SS uses 0
       – moves to insert the element into the sorted list:
         – IS uses O(k) data moves, SS uses O(1)

  7. Insertion Sort: 3
     • Concern: IS uses more data moves (O(n²)) than SS (O(n))
     • But: the number of comparisons + data moves in the inner loop of IS is data dependent
       – SS must examine all unsorted elements to find the minimum remaining value
     • Thus, for IS, the average-case behavior may be (and is!) better than its worst case
     • Consider the case of random data with no duplicate elements
       – on average, the new element will be inserted halfway down the currently-sorted list
       – this cuts down the number of comparisons and the number of data moves by 1/2
       – T_avg(n) ≈ n²/4 = Θ(n²) for IS
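Aside (not from the slides): a rough way to see the n²/4 figure, assuming the element inserted at step i is equally likely to land in any position of the sorted prefix, so it costs about i/2 comparisons and moves on average:

    T_{\text{avg}}(n) \approx \sum_{i=1}^{n-1} \frac{i}{2}
                      = \frac{1}{2} \cdot \frac{n(n-1)}{2}
                      = \frac{n(n-1)}{4}
                      \approx \frac{n^2}{4} = \Theta(n^2)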

  8. Insertion Sort: 4
     • An interesting case to consider is that of "almost sorted" data
       – here IS really improves upon SS (and QS and MS and ...): T_best(n) = n − 1 !!
     • Final thought: the book comments on how, by using a sentinel, we could write
           while A[j] > v
               A[j + 1] = A[j]
               j = j - 1
       instead of
           while j >= 0 and A[j] > v
               ...
               j = j - 1
     • Question for those of you who recall their computer architecture: Is that a bogus comment?
       – why or why not?
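Aside (not from the slides): a Python sketch of the sentinel idea for numeric keys, placing a value smaller than every key in front of the array so the inner loop needs no explicit j >= 0 test.

    def insertion_sort_sentinel(a):
        A = [float("-inf")] + list(a)    # sentinel smaller than any key
        for i in range(2, len(A)):
            v = A[i]
            j = i - 1
            while A[j] > v:              # the sentinel guarantees this stops
                A[j + 1] = A[j]
                j -= 1
            A[j + 1] = v
        return A[1:]                     # drop the sentinel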

  9. Binary Insertion Sort
     • Observe that IS inserts an element into a sorted array
     • We could find the insertion location by using binary search
     • Issue:
       – in general, binary search is better than linear search to find an element's place in a sorted array
       – but in the case of sorted or almost-sorted arrays, binary search turns out to be worse (why?)
     • GEQ: what are the worst, average and best-case complexities for binary insertion sort?
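Aside (not from the slides): one way to write binary insertion sort in Python, using the standard bisect module for the binary search; note the shifts are still O(k) per insertion, so only the comparison count improves.

    from bisect import bisect_right

    def binary_insertion_sort(A):
        for i in range(1, len(A)):
            v = A[i]
            pos = bisect_right(A, v, 0, i)   # binary search in the sorted prefix A[0..i-1]
            A[pos + 1:i + 1] = A[pos:i]      # shift the larger elements right by one
            A[pos] = v
        return A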

  10. Graphs: Review
     • A graph G = (V, E) is a (finite) set of vertices V and a set of edges E, where E ⊆ { {v1, v2} | v1 ∈ V, v2 ∈ V } (in simple graphs v1 ≠ v2)
       – it is common to write n for the number of vertices and m for the number of edges
       – if {v1, v2} ∈ E, we say v1 is adjacent to v2
     • Normally when we say "graph", we mean "simple graph"
       – some people study multi-graphs, which allow multiple edges between pairs of vertices
       – e.g., (figure: a graph on vertices 1, 2, 3 with multiple edges and loops)
       – in this class we will only be using simple graphs

  11. Graph Representation for Algorithms
     • In order to use graphs to solve problems with computer algorithms, we must be able to represent graphs in our programs; two standard representations are
       – adjacency matrix
       – adjacency list
     • An adjacency matrix (for some n-vertex graph G) is an n × n matrix A, where A[i,j] is 1 iff v_i is adjacent to v_j; otherwise A[i,j] is 0
     • An adjacency list (for some n-vertex graph G) is an array of n lists (one for each vertex), such that v_i is in v_j's list iff v_i is adjacent to v_j
     • Most graph algorithms favour the adjacency list approach, since the size of that representation is linear in the size of the graph: Θ(|V| + |E|)
       – an adjacency matrix has Θ(|V|²) entries regardless of how many edges the graph has
       – in other words, with an adjacency matrix representation, there is no hope of having an algorithm (for non-trivial problems) guaranteed to run in O(|G|) time
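Aside (not from the slides): both representations built in Python for a small, hypothetical undirected graph on vertices 0..4; note the matrix is n × n no matter how few edges there are.

    n = 5
    edges = [(0, 1), (0, 4), (1, 2), (1, 3), (3, 4)]   # hypothetical example graph

    # Adjacency matrix: Theta(n^2) entries regardless of the number of edges
    adj_matrix = [[0] * n for _ in range(n)]
    for u, v in edges:
        adj_matrix[u][v] = 1
        adj_matrix[v][u] = 1    # undirected, so the matrix is symmetric

    # Adjacency list: Theta(|V| + |E|) space, linear in the size of the graph
    adj_list = [[] for _ in range(n)]
    for u, v in edges:
        adj_list[u].append(v)
        adj_list[v].append(u)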

  12. Graph Searching: Introduction
     • A common operation on graphs is to start searching at one vertex and continue until either
       – a sought-for vertex (or a vertex with some desired property) is found, or
       – all vertices have been visited
       – (there are other possibilities, but these are the usual ideas)
     • The search proceeds by searching from an already-visited vertex v to some vertex w which is adjacent to v
       – if there are multiple possible choices for w, the particular graph search may dictate which one(s) are valid choices

  13. Graph Searching: DFS and BFS
     • Two important graph search techniques are known as
       – depth-first search (DFS)
       – breadth-first search (BFS)
     • Both techniques can be considered as "decrease by one"
       – e.g., starting from one of the n vertices in a graph, we do a search of a (sub-)graph consisting of n − 1 vertices
     • Both DFS and BFS are linear in the size of the graph representation
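Aside (not from the slides): a minimal recursive DFS over the adjacency-list representation sketched above; each vertex and each edge is handled a constant number of times, which is where the Θ(|V| + |E|) bound comes from.

    def dfs(adj_list, start, visited=None):
        # Recursive depth-first search; returns the set of vertices reachable from start.
        if visited is None:
            visited = set()
        visited.add(start)
        for w in adj_list[start]:
            if w not in visited:
                dfs(adj_list, w, visited)
        return visited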

  14. DFS vs. BFS
     • The two algorithms are quite similar
       – DFS uses a stack to keep track of the vertices to be visited next
       – BFS uses a queue to keep track of the vertices to be visited next
     • Since recursion automagically implements a stack, it can be easier to write a DF search than a BF search
     • HOMEWORK!! Review the textbook algorithms
       – note that the top-level routine is not itself recursive; instead, it does the initialization and then calls a function which can be recursive
       – note also the problems which the textbook says these algorithms solve
     • Many important graph algorithms are based on these, particularly DFS
       – e.g., finding the bi-connected components of a graph
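Aside (not from the slides): the corresponding BFS sketch, with the queue made explicit via collections.deque.

    from collections import deque

    def bfs(adj_list, start):
        # Breadth-first search; returns vertices in the order they are visited.
        visited = {start}
        queue = deque([start])
        order = []
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj_list[v]:
                if w not in visited:
                    visited.add(w)
                    queue.append(w)
        return order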

  15. Directed Graphs (Digraphs)
     • A directed graph G = (V, E) is a (finite) set of vertices V and a set of edges E, where E ⊆ { (v1, v2) | v1 ∈ V, v2 ∈ V }
       – we say (v1, v2) is an edge from v1 to v2
     • A directed cycle in a digraph is a (finite) sequence of vertices v1, v2, ..., vk such that
       – (vi, vi+1) ∈ E for 1 ≤ i < k, and
       – (vk, v1) ∈ E
     • A digraph with no directed cycle is a directed acyclic graph (dag)
     • Algorithms on directed graphs are sometimes more complex than on undirected graphs

  16. Dags In Real Life
     • Dags can be used to represent situations in which there is an ordering or precedence among some items
     • Examples:
       – manufacturing: component A must be completed before B can be started, but components C and D can be completed in either order, or in parallel
       – car example: the engine can be completed at the same time the body is constructed, but the engine should be installed in the car before the hood is attached
       – building a house:
         – the foundation must be constructed before the walls are erected
         – the wiring and plumbing must be put in before the walls are finished, but the wiring and the plumbing can be installed in either order

  17. Topological Sorting: DFS Approach
     • A topological sorting of a digraph (V, E) with n vertices is an ordering v_i1, v_i2, ..., v_in of the vertices such that for all (v_j, v_k) ∈ E, v_j appears before v_k in this ordering
       – clearly, a digraph containing a directed cycle has no topological sorting
       – it is also true (if less clear) that every dag has a topological sorting
     • DFS topological sort (start at any vertex with in-degree 0): see the sketch below
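Aside (not from the slides; the slide's worked example is not reproduced here): a Python sketch of the DFS-based approach. Listing the vertices of a dag in reverse order of DFS finishing time gives a topological sorting; looping over all vertices, rather than only a single in-degree-0 start, also covers vertices unreachable from the first start.

    def topological_sort(adj):
        # adj: adjacency list of a dag, adj[v] = list of vertices that v points to
        visited = set()
        finished = []                    # vertices in order of DFS finishing time

        def visit(v):
            visited.add(v)
            for w in adj[v]:
                if w not in visited:
                    visit(w)
            finished.append(v)           # v finishes after all its successors

        for v in range(len(adj)):
            if v not in visited:
                visit(v)
        return list(reversed(finished))  # reverse finishing order = topological order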
