The contrapositive is "if x ≥ 0, then x³ ≥ 0."
Proof. If x = 0, then trivially x³ = 0 ≥ 0. If x > 0, then x² > 0, and so x³ = x · x² > 0, hence x³ ≥ 0.
To prove a statement p is true, you may assume that it is false and then proceed to show that such an assumption leads to a contradiction with a known result. In terms of logic, you show that for a known result r , ¬ p → ( r ∧ ¬ r ) is true, which leads to a contradiction since ( r ∧ ¬ r ) cannot hold. Example √ 2 is an irrational number.
Proof. Let p be the proposition "√2 is irrational." We start by assuming ¬p and show that it leads to a contradiction. If √2 is rational, then √2 = a/b, where a, b ∈ Z have no common factor (proposition r). Squaring that equation: 2 = a²/b². Thus 2b² = a², which implies that a² is even. a² is even ⇒ a is even ⇒ a = 2c for some c ∈ Z. Thus 2b² = 4c², so b² = 2c² is even ⇒ b is even. Thus, a and b have the common factor 2 (i.e., proposition ¬r). Hence ¬p → (r ∧ ¬r), which is a contradiction. Thus ¬p is false, so that √2 is irrational.
A constructive existence proof establishes a statement of the form ∃x P(x), for some predicate P, by providing a specific, concrete witness x. It does not prove the statement for all such x. A nonconstructive existence proof also shows a statement of the form ∃x P(x), but it does not necessarily give a specific example x. Such a proof usually proceeds by contradiction: assume that ¬∃x P(x) ≡ ∀x ¬P(x) holds and then derive a contradiction.
Theorem (Principle of Mathematical Induction). Given a statement P concerning the integer n, suppose
1. P is true for some particular integer n₀; i.e., P(n₀) holds.
2. If P is true for some particular integer k ≥ n₀, then it is true for k + 1.
Then P is true for all integers n ≥ n₀; i.e., ∀n ≥ n₀, P(n) is true.
Showing that P(n₀) holds for some initial integer n₀ is called the Basis Step. The assumption P(k) in the implication is called the inductive hypothesis. Showing the implication P(k) → P(k + 1) for every k ≥ n₀ is called the Induction Step. Together, induction can be expressed as an inference rule:
[P(n₀) ∧ ∀k ≥ n₀ (P(k) → P(k + 1))] → ∀n ≥ n₀ P(n)
Recall that we are really only interested in the order of growth of an algorithm's complexity: how well does the algorithm perform as the input size grows, n → ∞? We have seen how to mathematically evaluate the cost functions of algorithms with respect to their input size n and their elementary operation. However, it suffices to simply measure a cost function's asymptotic behavior.
Figure: Growth rates of common functions, f(n) = log(n), f(n) = n, f(n) = n log(n), f(n) = n², f(n) = n³, f(n) = 2ⁿ, and f(n) = n!, plotted for 0 ≤ n ≤ 20.
In practice, specific hardware, implementations, languages, etc. will greatly affect how the algorithm behaves. However, we want to study and analyze algorithms in and of themselves, independent of such factors. For example, an algorithm that executes its elementary operation 10n times is better than one which executes it 0.005n² times. Moreover, algorithms that have running times n² and 2000n² are considered to be asymptotically equivalent.
Definition. Let f and g be two functions f, g : N → R⁺. We say that f(n) ∈ O(g(n)) (read: f is Big-"O" of g) if there exist a constant c ∈ R⁺ and n₀ ∈ N such that for every integer n ≥ n₀,
f(n) ≤ c·g(n)
(Big-O is actually the Greek letter Omicron, but it suffices to write "O.") Intuition: f is (asymptotically) less than or equal to g. Big-O gives an asymptotic upper bound.
Definition Let f and g be two functions f, g : N → R + . We say that f ( n ) ∈ Ω( g ( n )) (read: f is Big-Omega of g ) if there exist c ∈ R + and n 0 ∈ N such that for every integer n ≥ n 0 , f ( n ) ≥ cg ( n ) Intuition: f is ( asymptotically ) greater than or equal to g . Big-Omega gives an asymptotic lower bound .
Definition Let f and g be two functions f, g : N → R + . We say that f ( n ) ∈ Θ( g ( n )) (read: f is Big-Theta of g ) if there exist constants c 1 , c 2 ∈ R + and n 0 ∈ N such that for every integer n ≥ n 0 , c 1 g ( n ) ≤ f ( n ) ≤ c 2 g ( n ) Intuition: f is ( asymptotically ) equal to g . f is bounded above and below by g . Big-Theta gives an asymptotic equivalence .
Theorem. For f₁(n) ∈ O(g₁(n)) and f₂(n) ∈ O(g₂(n)),
f₁(n) + f₂(n) ∈ O(max{g₁(n), g₂(n)})
This property implies that we can ignore lower-order terms. In particular, for any polynomial p(n) with degree k, p(n) ∈ O(nᵏ).¹ In addition, this gives us justification for ignoring constant coefficients. That is, for any function f(n) and positive constant c,
cf(n) ∈ Θ(f(n))
Some obvious properties also follow from the definition. Corollary For positive functions, f ( n ) and g ( n ) the following hold: f ( n ) ∈ Θ( g ( n )) ⇐ ⇒ f ( n ) ∈ O ( g ( n )) and f ( n ) ∈ Ω( g ( n )) f ( n ) ∈ O ( g ( n )) ⇐ ⇒ g ( n ) ∈ Ω( f ( n )) The proof is left as an exercise. 1 More accurately, p ( n ) ∈ Θ( n k )
Proving an asymptotic relationship between two given functions f(n) and g(n) can be done intuitively for most of the functions you will encounter (all polynomials, for example). However, this does not suffice as a formal proof. Proving a relationship of the form f(n) ∈ Δ(g(n)), where Δ is one of O, Ω, or Θ, can be done simply using the definitions; that is: find a value for c (or c₁ and c₂), and find a value for n₀. (But this is not the only way.)
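The definition-based approach can be sanity-checked numerically. Below is a sketch (not a proof) for a hypothetical example, f(n) = 3n² + 5n ∈ O(n²), using the witnesses c = 4 and n₀ = 5: since 3n² + 5n ≤ 4n² exactly when 5n ≤ n², i.e., when n ≥ 5.

```python
# Numeric sanity check (not a formal proof) that f(n) = 3n^2 + 5n
# is in O(n^2) with witnesses c = 4 and n0 = 5.
def f(n):
    return 3 * n**2 + 5 * n

def g(n):
    return n**2

c, n0 = 4, 5
# the defining inequality f(n) <= c*g(n) holds for all n >= n0 in a large range
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
```

Of course, checking finitely many n only builds confidence; the algebraic argument (5n ≤ n² for n ≥ 5) is what makes it a proof.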
Constant: O(1)
Logarithmic: O(log(n))
Polylogarithmic: O(log^k(n))
Linear: O(n)
Quadratic: O(n²)
Cubic: O(n³)
Polynomial: O(n^k) for any k > 0
Exponential: O(2ⁿ)
Super-Exponential: O(2^{f(n)}) for f(n) = n^{1+ε}, ε > 0; for example, n!
Table: Some Efficiency Classes
Definition. The objects in a set are called elements or members of the set. A set is said to contain its elements. Recall the notation: for a set A and an element x, we write x ∈ A if A contains x and x ∉ A otherwise. LaTeX notation: \in, \notin.
We've already seen set-builder notation:
O = {x | (x ∈ Z) ∧ (x = 2k for some k ∈ Z)}
should be read "O is the set that contains all x such that x is an integer and x is even."
A set is defined in intension when you give its set-builder notation:
O = {x | (x ∈ Z) ∧ (x = 2k for some k ∈ Z) ∧ (0 ≤ x ≤ 8)}
A set is defined in extension when you enumerate all the elements:
O = {0, 2, 4, 6, 8}
Definition. The power set of a set S, denoted P(S), is the set of all subsets of S.
Example. Let A = {a, b, c}; then the power set is
P(A) = {∅, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, {a, b, c}}
Note that the empty set and the set itself are always elements of the power set. This follows from Theorem 1 (Rosen, p. 81).
The power set is a fundamental combinatorial object useful when considering all possible combinations of elements of a set. Fact Let S be a set such that | S | = n , then |P ( S ) | = 2 n
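Since the power set enumerates all combinations of elements, it can be generated directly from combinations of every size. A small sketch using Python's standard library (the helper name `power_set` is ours, not from the slides):

```python
from itertools import combinations

def power_set(s):
    """Return the power set of s as a list of frozensets."""
    items = list(s)
    return [frozenset(c)
            for r in range(len(items) + 1)   # subsets of size 0, 1, ..., n
            for c in combinations(items, r)]

P = power_set({'a', 'b', 'c'})
assert len(P) == 2 ** 3                      # |P(S)| = 2^n
assert frozenset() in P                      # the empty set is always a subset
assert frozenset({'a', 'b', 'c'}) in P       # S itself is always a subset
```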
Definition. Let A and B be sets. The Cartesian product of A and B, denoted A × B, is the set of all ordered pairs (a, b) where a ∈ A and b ∈ B:
A × B = {(a, b) | (a ∈ A) ∧ (b ∈ B)}
The Cartesian product is also known as the cross product.
Definition. A subset of a Cartesian product, R ⊆ A × B, is called a relation. We will talk more about relations in the next set of slides.
Note that A × B ≠ B × A unless A = ∅ or B = ∅ or A = B. Can you find a counterexample to prove this?
Cartesian products can be generalized to any n-tuple.
Definition. The Cartesian product of n sets A₁, A₂, ..., A_n, denoted A₁ × A₂ × · · · × A_n, is
A₁ × A₂ × · · · × A_n = {(a₁, a₂, ..., a_n) | a_i ∈ A_i for i = 1, 2, ..., n}
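The n-fold Cartesian product is exactly what `itertools.product` computes; a short sketch with two hypothetical small sets also exhibits a counterexample to commutativity:

```python
from itertools import product

A = {1, 2}
B = {'x', 'y'}

AxB = set(product(A, B))   # all ordered pairs (a, b)
BxA = set(product(B, A))   # all ordered pairs (b, a)

assert AxB == {(1, 'x'), (1, 'y'), (2, 'x'), (2, 'y')}
assert len(AxB) == len(A) * len(B)
assert AxB != BxA          # counterexample: the product is not commutative
```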
An alternative proof uses membership tables, where an entry is 1 if a chosen (but fixed) element is in the set and 0 otherwise.
Example (Exercise 13, p. 95): Show that the complement of A ∩ B ∩ C equals Ā ∪ B̄ ∪ C̄.
A B C | A∩B∩C | complement of A∩B∩C | Ā B̄ C̄ | Ā ∪ B̄ ∪ C̄
0 0 0 |   0   |          1          | 1 1 1 |     1
0 0 1 |   0   |          1          | 1 1 0 |     1
0 1 0 |   0   |          1          | 1 0 1 |     1
0 1 1 |   0   |          1          | 1 0 0 |     1
1 0 0 |   0   |          1          | 0 1 1 |     1
1 0 1 |   0   |          1          | 0 1 0 |     1
1 1 0 |   0   |          1          | 0 0 1 |     1
1 1 1 |   1   |          0          | 0 0 0 |     0
A 1 under a set indicates that the element is in the set. Since the columns for the complement of A∩B∩C and for Ā ∪ B̄ ∪ C̄ are equivalent, we conclude that indeed the complement of A ∩ B ∩ C equals Ā ∪ B̄ ∪ C̄.
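A membership-table argument is a finite case check, so it can be brute-forced mechanically. A sketch that checks all eight rows of the table, encoding membership as a bit (the helper name `demorgan_holds` is ours):

```python
from itertools import product

def demorgan_holds(a, b, c):
    # a, b, c are membership bits: 1 means the fixed element is in the set
    lhs = 1 - (a & b & c)               # membership in the complement of A∩B∩C
    rhs = (1 - a) | (1 - b) | (1 - c)   # membership in the union of complements
    return lhs == rhs

# one check per row of the membership table
assert all(demorgan_holds(a, b, c) for a, b, c in product((0, 1), repeat=3))
```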
Definition. A geometric progression is a sequence of the form
a, ar, ar², ar³, ..., arⁿ, ...
where a ∈ R is called the initial term and r ∈ R is the common ratio. A geometric progression is a discrete analogue of the exponential function f(x) = arˣ.
Definition. An arithmetic progression is a sequence of the form
a, a + d, a + 2d, a + 3d, ..., a + nd, ...
where a ∈ R is called the initial term and d ∈ R is the common difference. Again, an arithmetic progression is a discrete analogue of the linear function f(x) = dx + a.
You should be very familiar with summation notation:
∑_{j=m}^{n} a_j = a_m + a_{m+1} + · · · + a_{n−1} + a_n
Here, j is the index of summation, m is the lower limit, and n is the upper limit. Oftentimes it is useful to change the lower/upper limits, which can be done in a straightforward manner (though we must be careful):
∑_{j=1}^{n} a_j = ∑_{j=0}^{n−1} a_{j+1}
Sometimes we can express a summation in closed form. Geometric series, for example:
Theorem. For a, r ∈ R, r ≠ 0,
∑_{i=0}^{n} ar^i = (ar^{n+1} − a)/(r − 1) if r ≠ 1, and (n + 1)a if r = 1.
When we take the sum of a sequence, we get a series. We've already seen a closed form for geometric series. Some other useful closed forms include the following.
∑_{i=l}^{u} 1 = u − l + 1, for l ≤ u
∑_{i=0}^{n} i = n(n + 1)/2
∑_{i=0}^{n} i² = n(n + 1)(2n + 1)/6
∑_{i=0}^{n} i^k ≈ n^{k+1}/(k + 1)
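Each closed form above can be checked numerically against the raw summation; a sketch (the choices n = 50, a = 2, r = 3, k = 3 are arbitrary test values, not from the slides):

```python
n = 50

# sum of 1 from i = l to u equals u - l + 1
assert sum(1 for i in range(3, n + 1)) == n - 3 + 1
# sum of i, and sum of i^2
assert sum(range(n + 1)) == n * (n + 1) // 2
assert sum(i**2 for i in range(n + 1)) == n * (n + 1) * (2 * n + 1) // 6

# geometric series closed form with a = 2, r = 3
a, r = 2, 3
assert sum(a * r**i for i in range(n + 1)) == (a * r**(n + 1) - a) // (r - 1)

# sum of i^k is only approximately n^(k+1)/(k+1); check the ratio is near 1
k = 3
ratio = sum(i**k for i in range(n + 1)) / (n**(k + 1) / (k + 1))
assert abs(ratio - 1) < 0.1
```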
Though we will mostly deal with finite series (i.e., an upper limit of n for a fixed integer), infinite series are also useful.
Example. Consider the geometric series
∑_{n=0}^{∞} (1/2)^n = 1 + 1/2 + 1/4 + · · ·
This series converges to 2. However, the geometric series
∑_{n=0}^{∞} 2^n = 1 + 2 + 4 + 8 + · · ·
does not converge. Note, however, that ∑_{n=0}^{N} 2^n = 2^{N+1} − 1.
In fact, we can generalize this as follows. Lemma A geometric series converges if and only if the absolute value of the common ratio is less than 1.
More generally, we have the following.
Lemma. Let A, B be subsets of a finite set U. Then
1. |A ∪ B| = |A| + |B| − |A ∩ B|
2. |A ∩ B| ≤ min{|A|, |B|}
3. |A \ B| = |A| − |A ∩ B| ≥ |A| − |B|
4. |Ā| = |U| − |A|
5. |A ⊕ B| = |A ∪ B| − |A ∩ B| = |A| + |B| − 2|A ∩ B| = |A \ B| + |B \ A|
6. |A × B| = |A| × |B|
Theorem. Let A₁, A₂, ..., A_n be finite sets; then
|A₁ ∪ A₂ ∪ · · · ∪ A_n| = ∑_i |A_i| − ∑_{i<j} |A_i ∩ A_j| + ∑_{i<j<k} |A_i ∩ A_j ∩ A_k| − · · · + (−1)^{n+1} |A₁ ∩ A₂ ∩ · · · ∩ A_n|
Each summation is over all i, pairs i, j with i < j, triples i, j, k with i < j < k, etc.
The pigeonhole principle states that if there are more pigeons than there are roosts (pigeonholes), then at least one pigeonhole must contain at least two pigeons.
Theorem (Pigeonhole Principle). If k + 1 or more objects are placed into k boxes, then there is at least one box containing two or more objects.
This is a fundamental tool of elementary discrete mathematics. It is also known as the Dirichlet Drawer Principle.
Theorem. If N objects are placed into k boxes, then there is at least one box containing at least ⌈N/k⌉ objects.
Example. In any group of 367 or more people, at least two of them must have been born on the same date.
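The bound ⌈N/k⌉ is tight for the most even possible distribution, which is easy to confirm computationally. A sketch (the helper name `max_box_load` is ours):

```python
from math import ceil

def max_box_load(N, k):
    # spread N objects over k boxes as evenly as possible and
    # return the size of the fullest box
    base, extra = divmod(N, k)
    return base + (1 if extra else 0)

# even the most even distribution forces a box with ceil(N/k) objects
assert max_box_load(367, 366) == ceil(367 / 366) == 2   # birthday example
assert max_box_load(100, 9) == ceil(100 / 9) == 12
```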
A permutation of a set of distinct objects is an ordered arrangement of these objects. An ordered arrangement of r elements of a set is called an r-permutation.
Theorem. The number of r-permutations of a set with n distinct elements is
P(n, r) = ∏_{i=0}^{r−1} (n − i) = n(n − 1)(n − 2) · · · (n − r + 1)
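The falling-product formula for P(n, r) translates directly into a loop, and it agrees with the equivalent factorial form n!/(n − r)!. A sketch (the helper name `permutations_count` is ours):

```python
from math import factorial

def permutations_count(n, r):
    # P(n, r) = n (n-1) (n-2) ... (n-r+1): r factors, counting down from n
    result = 1
    for i in range(r):
        result *= n - i
    return result

assert permutations_count(10, 3) == 10 * 9 * 8                   # 720
assert permutations_count(10, 3) == factorial(10) // factorial(10 - 3)
```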
Whereas permutations consider order, combinations are used when order does not matter.
Definition. A k-combination of elements of a set is an unordered selection of k elements from the set. A combination is simply a subset of cardinality k.
Theorem. The number of k-combinations of a set with cardinality n, with 0 ≤ k ≤ n, is
C(n, k) = (n choose k) = n! / ((n − k)! k!)
Note: the notation (n choose k) is read "n choose k." In TeX, use {n \choose k} (with a backslash).
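The factorial formula can be checked against Python's built-in binomial function, `math.comb`. A sketch (the helper name `combinations_count` is ours):

```python
from math import comb, factorial

def combinations_count(n, k):
    # C(n, k) = n! / ((n-k)! k!)
    return factorial(n) // (factorial(n - k) * factorial(k))

assert combinations_count(5, 2) == 10
# agrees with the standard library over a range of inputs
assert all(combinations_count(n, k) == comb(n, k)
           for n in range(10) for k in range(n + 1))
```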
This is formalized in the following corollary.
Corollary. Let n, k be nonnegative integers with k ≤ n; then
(n choose k) = (n choose n − k)
The number of r-combinations, (n choose r), is also called a binomial coefficient. These are the coefficients in the expansion of the expression (a multivariate polynomial) (x + y)^n. A binomial is a sum of two terms.
Theorem (Binomial Theorem). Let x, y be variables and let n be a nonnegative integer. Then
(x + y)^n = ∑_{j=0}^{n} (n choose j) x^{n−j} y^j
Many useful identities and facts come from the Binomial Theorem.
Corollary.
∑_{k=0}^{n} (n choose k) = 2^n
∑_{k=0}^{n} (−1)^k (n choose k) = 0, for n ≥ 1
∑_{k=0}^{n} 2^k (n choose k) = 3^n
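All three identities follow from substituting (x, y) = (1, 1), (1, −1), and (1, 2) into the Binomial Theorem, and can be verified exhaustively for small n:

```python
from math import comb

# check the three corollary identities for n = 1, ..., 11
for n in range(1, 12):
    assert sum(comb(n, k) for k in range(n + 1)) == 2**n
    assert sum((-1)**k * comb(n, k) for k in range(n + 1)) == 0
    assert sum(2**k * comb(n, k) for k in range(n + 1)) == 3**n
```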
Sometimes we are concerned with permutations and combinations in which repetitions are allowed. Theorem The number of r -permutations of a set of n objects with repetition allowed is n r . Easily obtained by the product rule.
Definition. A simple graph G = (V, E) is a 2-tuple with
V = {v₁, v₂, ..., v_n}, a finite set of vertices, and
E ⊆ V × V, E = {e₁, e₂, ..., e_m}, a set of edges, where each e_i = (v, v′) is an unordered pair of vertices v, v′ ∈ V.
Since V and E are sets, it makes sense to consider their cardinality. As is standard, |V| = n denotes the number of vertices in G and |E| = m denotes the number of edges in G.
A multigraph is a graph in which the edge set E is a multiset; multiple distinct (or parallel) edges can exist between vertices. A pseudograph is a graph in which the edge set E can have edges of the form (v, v), called loops. A directed graph is one in which E contains ordered pairs; the orientation of an edge (v, v′) is said to be "from v to v′." A directed multigraph is a multigraph whose edge set consists of ordered pairs.
If we look at a graph as a relation then, among other things, undirected graphs are symmetric, and non-pseudographs are irreflexive. Multigraphs have nonnegative integer entries in their matrix; this corresponds to degrees of relatedness. Other types of graphs include labeled graphs (each edge has a uniquely identified label or weight), colored graphs (edges are colored), etc.
For now, we will concern ourselves with simple, undirected graphs. We now look at some more terminology.
Definition. Two vertices u, v in an undirected graph G = (V, E) are called adjacent (or neighbors) if e = (u, v) ∈ E. We say that e is incident with (or incident on) the vertices u and v. Edge e is said to connect u and v; u and v are also called the endpoints of e.
Definition. The degree of a vertex in an undirected graph G = (V, E) is the number of edges incident with it. The degree of a vertex v ∈ V is denoted deg(v). In a multigraph, a loop contributes to the degree twice. A vertex of degree 0 is called isolated.
Theorem. Let G = (V, E) be an undirected graph. Then
2|E| = ∑_{v∈V} deg(v)
The handshake lemma applies even in multigraphs and pseudographs.
Proof. By definition, each e = (v, v′) contributes 1 to the degree of each of its endpoints, deg(v) and deg(v′). If e = (v, v) is a loop, then it contributes 2 to deg(v). Therefore, the total degree over all vertices is twice the number of edges.
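The handshake lemma is easy to confirm on any concrete graph. A sketch on a small hypothetical undirected graph stored as an adjacency list (vertex names 'a' through 'd' are ours):

```python
# a small undirected graph: each vertex maps to its list of neighbors
adj = {
    'a': ['b', 'c'],
    'b': ['a', 'c', 'd'],
    'c': ['a', 'b'],
    'd': ['b'],
}

# collect each undirected edge once, as a frozenset of its two endpoints
edges = {frozenset((u, v)) for u, nbrs in adj.items() for v in nbrs}
degree_sum = sum(len(nbrs) for nbrs in adj.values())

assert degree_sum == 2 * len(edges)   # 8 == 2 * 4
```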
Complete graphs, denoted K_n, are simple graphs with n vertices where every possible edge is present. Cycle graphs, denoted C_n, are simply cycles on n vertices. Wheels, denoted W_n, are cycle graphs (on n vertices) with an additional vertex connected to all other vertices. n-cubes, denoted Q_n, are graphs with 2^n vertices corresponding to the bit strings of length n; edges connect vertices whose bit strings differ in a single bit. Grid graphs are finite graphs on the N × N grid.
Definition. A graph is called bipartite if its vertex set V can be partitioned into two disjoint subsets L, R such that no pair of vertices in L (or in R) is connected. We often use G = (L, R, E) to denote a bipartite graph.
We can (partially) decompose graphs by considering subgraphs.
Definition. A subgraph of a graph G = (V, E) is a graph H = (V′, E′) where V′ ⊆ V and E′ ⊆ E. Subgraphs are simply part(s) of the original graph.
Conversely, we can combine graphs.
Definition. The union of two graphs G₁ = (V₁, E₁) and G₂ = (V₂, E₂) is defined to be G = (V, E) where V = V₁ ∪ V₂ and E = E₁ ∪ E₂.
A graph can be implemented as a data structure using one of three representations:
1. Adjacency list (vertices to list of vertices)
2. Adjacency matrix (vertices to vertices)
3. Incidence matrix (vertices to edges)
These representations can greatly affect the running time of certain graph algorithms.
Adjacency List: An adjacency-list representation of a graph G = (V, E) maintains |V| linked lists. For each vertex v ∈ V, the head of the list is v and subsequent entries correspond to adjacent vertices v′ ∈ V.
Example. What is the associated graph of the following adjacency list?
v₀: v₂, v₃, v₄
v₁: v₀, v₂
v₂: v₀, v₁, v₃, v₄
v₃: v₁, v₄
v₄: v₁
Advantages: less storage. Disadvantages: adjacency lookup is O(|V|), and extra work is needed to maintain vertex ordering (lexicographic).
Adjacency Matrix: An adjacency-matrix representation maintains an n × n matrix with entries
a_{i,j} = 1 if (v_i, v_j) ∈ E, and a_{i,j} = 0 if (v_i, v_j) ∉ E
for 0 ≤ i, j ≤ n − 1.
Example. For the same graph as in the previous example, we have the following adjacency matrix.
0 0 1 1 1
1 0 1 0 0
1 1 0 1 1
0 1 0 0 1
0 1 0 0 0
Advantages: adjacency/weight lookup is constant. Disadvantages: extra storage.
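The two representations carry the same information, so converting between them is mechanical. A sketch using the 5-vertex matrix from the example above (vertex indices 0 through 4 stand in for v₀ through v₄):

```python
matrix = [
    [0, 0, 1, 1, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
]

# matrix -> adjacency list: keep the column indices of the 1 entries per row
adj_list = {i: [j for j, bit in enumerate(row) if bit]
            for i, row in enumerate(matrix)}

assert adj_list[0] == [2, 3, 4]
# adjacency lookup: O(1) in the matrix, O(|V|) scanning a list
assert matrix[2][4] == 1
assert 4 in adj_list[2]
```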
The entry of 1 for an edge e = (v_i, v_j) can be changed to a weight function wt : E → N. Alternatively, entries can be used to represent pseudographs. Note that either representation is equally useful for directed and undirected graphs.
We say that a graph is sparse if |E| ∈ O(|V|) and dense if |E| ∈ O(|V|²). A complete graph K_n has precisely |E| = n(n − 1)/2 edges. Thus, for sparse graphs adjacency lists tend to be better, while for dense graphs adjacency matrices are better in general.
An isomorphism is a bijection (one-to-one and onto) that preserves the structure of some object. In some sense, if two objects are isomorphic to each other, they are essentially the same: most properties that hold for one object hold for any object it is isomorphic to. An isomorphism of graphs preserves adjacency.