CSE 312 Final Review: Section AA
CSE 312 TAs
December 8, 2011
General Information

- The final is comprehensive
- Heavily weighted toward material after the midterm
Pre-Midterm Material

- Basic Counting Principles
- Pigeonhole Principle
- Inclusion-Exclusion
- Counting the Complement
- Using symmetry
- Conditional Probability
  - $P(A \mid B) = \frac{P(AB)}{P(B)}$
  - Law of Total Probability: $P(A) = P(A \mid B) \cdot P(B) + P(A \mid B^c) \cdot P(B^c)$
  - Bayes' Theorem: $P(A \mid B) = \frac{P(B \mid A) \cdot P(A)}{P(B)}$ (a simulation sketch follows this list)
- Network Failure Questions
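To make the Law of Total Probability and Bayes' Theorem concrete, here is a minimal simulation sketch. The disease-testing scenario and all of its numbers are hypothetical, chosen only for illustration; they do not come from the lecture slides.

```python
import random

# Hypothetical numbers: 1% of the population has a disease; the test is
# 95% sensitive and has a 10% false-positive rate.
random.seed(312)
P_D = 0.01          # P(disease)
P_pos_D = 0.95      # P(positive | disease)
P_pos_noD = 0.10    # P(positive | no disease)

trials = 1_000_000
pos = pos_and_d = 0
for _ in range(trials):
    d = random.random() < P_D
    positive = random.random() < (P_pos_D if d else P_pos_noD)
    pos += positive
    pos_and_d += positive and d

# Law of Total Probability: P(+) = P(+|D)P(D) + P(+|D^c)P(D^c)
print("P(+) simulated:", pos / trials,
      "exact:", P_pos_D * P_D + P_pos_noD * (1 - P_D))

# Bayes' Theorem: P(D|+) = P(+|D)P(D) / P(+)
print("P(D|+) simulated:", pos_and_d / pos,
      "exact:", P_pos_D * P_D / (P_pos_D * P_D + P_pos_noD * (1 - P_D)))
```

Note how small $P(D \mid +)$ comes out (about 9%) even with a sensitive test: the prior $P(D)$ dominates, which is exactly what Bayes' Theorem captures.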
Pre-Midterm Material

- Independence
  - E and F are independent if $P(EF) = P(E)P(F)$
  - Equivalently, E and F are independent if $P(E \mid F) = P(E)$ and $P(F \mid E) = P(F)$
  - Events $E_1, \ldots, E_n$ are independent if for every subset $S$ of events, $P\left(\bigcap_{i \in S} E_i\right) = \prod_{i \in S} P(E_i)$
  - Biased coin example from Lecture 5, slide 7
  - If E and F are independent and G is an arbitrary event, then in general $P(EF \mid G) \neq P(E \mid G) \cdot P(F \mid G)$
  - For any given G, equality in the above statement means that E and F are conditionally independent given G (see the sketch below)
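As a concrete illustration of the last two points, here is a small sketch using a toy example of my own (not the biased-coin example from the slides): two fair coin flips are independent, but they are not conditionally independent given "exactly one head".

```python
from itertools import product
from fractions import Fraction

# Two fair coin flips: 4 equally likely outcomes.
# E = "first flip is heads", F = "second flip is heads".
outcomes = list(product("HT", repeat=2))

def prob(event):
    hits = [w for w in outcomes if event(w)]
    return Fraction(len(hits), len(outcomes))

E = lambda w: w[0] == "H"
F = lambda w: w[1] == "H"
G = lambda w: (w[0] == "H") != (w[1] == "H")  # exactly one head

# Unconditionally independent: P(EF) = P(E)P(F)
print(prob(lambda w: E(w) and F(w)) == prob(E) * prob(F))   # True

# Conditional probability via the definition: P(A|G) = P(AG)/P(G)
def cond(event, given):
    return prob(lambda w: event(w) and given(w)) / prob(given)

# P(EF|G) = 0, but P(E|G) * P(F|G) = 1/4, so E and F are NOT
# conditionally independent given G.
print(cond(lambda w: E(w) and F(w), G), cond(E, G) * cond(F, G))
```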
Distributions

Know the mean and variance for:

- Uniform distribution
- Normal distribution
- Geometric distribution
- Binomial distribution
- Poisson distribution
- Hypergeometric distribution

Also remember:

- Linearity of Expectation and other useful facts (e.g. $\mathrm{Var}[aX + b] = a^2 \mathrm{Var}[X]$; in general $\mathrm{Var}[X + Y] \neq \mathrm{Var}[X] + \mathrm{Var}[Y]$) — see the sketch after this list.
- For any $a$, $P(X = a) = 0$ (the probability that a continuous R.V. falls at a specific point is 0!)
- Expectation is now an integral: $E[X] = \int_{-\infty}^{\infty} x \cdot f(x)\, dx$
- Use the normal approximation when applicable.
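A quick numerical sanity check of the variance facts above; this is a minimal sketch with made-up constants ($a = 3$, $b = 5$), not material from the slides.

```python
import random
from statistics import pvariance

random.seed(312)
xs = [random.gauss(0, 1) for _ in range(200_000)]

# Var[aX + b] = a^2 Var[X]: scaling by a=3 multiplies the variance by 9;
# shifting by b=5 changes nothing. Both printed values should be close.
print(pvariance([3 * x + 5 for x in xs]), 9 * pvariance(xs))

# Var[X + Y] != Var[X] + Var[Y] in general: take Y = X (fully dependent).
# Then Var[X + X] = 4 Var[X], not Var[X] + Var[X] = 2 Var[X].
print(pvariance([x + x for x in xs]), pvariance(xs) + pvariance(xs))
```

The equality $\mathrm{Var}[X + Y] = \mathrm{Var}[X] + \mathrm{Var}[Y]$ does hold when X and Y are independent, which is why the dependent choice $Y = X$ breaks it.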
Central Limit Theorem

Central Limit Theorem: Consider i.i.d. (independent, identically distributed) random variables $X_1, X_2, \ldots$, where each $X_i$ has mean $\mu = E[X_i]$ and variance $\sigma^2 = \mathrm{Var}[X_i]$. Then, as $n \to \infty$,
$$\frac{X_1 + \cdots + X_n - n\mu}{\sigma \sqrt{n}} \to N(0, 1)$$
Alternatively,
$$\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i \approx N\!\left(\mu, \frac{\sigma^2}{n}\right)$$
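A minimal simulation sketch of the CLT, using sums of fair die rolls as the i.i.d. variables (my own example, not from the slides): the standardized sum should look approximately $N(0, 1)$.

```python
import random
from statistics import mean, pstdev

random.seed(312)
mu, sigma = 3.5, (35 / 12) ** 0.5   # mean and std dev of one fair die roll
n, trials = 100, 50_000

zs = []
for _ in range(trials):
    s = sum(random.randint(1, 6) for _ in range(n))
    # Standardize exactly as in the CLT statement above.
    zs.append((s - n * mu) / (sigma * n ** 0.5))

# Should print roughly 0 and 1: the standardized sum approaches N(0, 1).
print(mean(zs), pstdev(zs))

# Roughly 68% of a standard normal lies within one standard deviation.
print(sum(abs(z) <= 1 for z in zs) / trials)
```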
Tail Bounds

- Markov's Inequality: If X is a non-negative random variable, then for every $\alpha > 0$, we have
  $$P(X \geq \alpha) \leq \frac{E[X]}{\alpha}$$
- Corollary: $P(X \geq \alpha E[X]) \leq \frac{1}{\alpha}$
- Chebyshev's Inequality: If Y is an arbitrary random variable with $E[Y] = \mu$, then, for any $\alpha > 0$,
  $$P(|Y - \mu| \geq \alpha) \leq \frac{\mathrm{Var}[Y]}{\alpha^2}$$
  (both bounds are compared numerically in the sketch below)
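A quick sketch comparing both bounds to an empirical tail probability; the choice of $X \sim \mathrm{Binomial}(100, 0.5)$, with $E[X] = 50$ and $\mathrm{Var}[X] = 25$, is my own illustrative example.

```python
import random
from statistics import mean

random.seed(312)
# Simulate X ~ Binomial(100, 0.5) as a sum of 100 fair coin flips.
xs = [sum(random.random() < 0.5 for _ in range(100)) for _ in range(100_000)]
alpha = 60

# Markov: P(X >= 60) <= E[X]/60 = 50/60. Valid but very loose here.
print("empirical:", mean(x >= alpha for x in xs),
      "Markov bound:", 50 / alpha)

# Chebyshev: P(|X - 50| >= 10) <= Var[X]/10^2 = 25/100. Much tighter,
# since it also uses the variance.
print("empirical:", mean(abs(x - 50) >= 10 for x in xs),
      "Chebyshev bound:", 25 / 10 ** 2)
```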