



  1. Welcome to St. Petersburg!

  • Game set-up
    ▪ We have a fair coin (comes up "heads" with p = 0.5)
    ▪ Let n = number of coin flips before first "tails"
    ▪ You win $2^n
  • How much would you pay to play?
  • Solution
    ▪ Let X = your winnings
    ▪ E[X] = \sum_{i=0}^{\infty} 2^i \left(\frac{1}{2}\right)^{i+1} = \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots = \infty
  • I'll let you play for $1 million... but just once! Takers?

  Breaking Vegas

  • Game set-up
    ▪ Consider an even money bet (e.g., bet "Red" in roulette)
    ▪ With p = 18/38 you win $Y, otherwise (probability 1 - p) you lose $Y
  • Consider this algorithm for one series of bets (simulated in the first sketch after this page):
    1. Y = $1
    2. Bet Y
    3. If win, stop
    4. If loss, Y = 2 * Y, goto 2
  • Solution
    ▪ Let Z = winnings upon stopping; losing the first i bets and then winning nets 2^i - (2^i - 1) = $1, so
    ▪ E[Z] = \sum_{i=0}^{\infty} \left(\frac{20}{38}\right)^i \frac{18}{38} \cdot 1 = \frac{18}{38} \cdot \frac{1}{1 - 20/38} = 1
  • Expected winnings ≥ 0. Use algorithm infinitely often!

  Vegas Breaks You

  • Why doesn't everyone do this?
    ▪ Real games have maximum bet amounts
    ▪ You have finite money
      o Not able to keep doubling bet beyond a certain point
    ▪ Casinos can kick you out
  • But, if you had:
    ▪ No betting limits, and
    ▪ Infinite money, and
    ▪ Could play as often as you want...
  • Then, go for it!
    ▪ And tell me which planet you are living on

  Variance

  • Consider the following 3 distributions (PMFs)
    ▪ All have the same expected value, E[X] = 3
    ▪ But the "spread" in the distributions is different
  • Variance = a formal quantification of "spread"

  Variance

  • If X is a random variable with mean μ, then the variance of X, denoted Var(X), is:
      Var(X) = E[(X - μ)^2]
  • Note: Var(X) ≥ 0
  • Also known as the 2nd Central Moment, or square of the Standard Deviation
    ▪ Ladies and gentlemen, please welcome the 2nd moment!

  Computing Variance

      Var(X) = E[(X - μ)^2]
             = \sum_x (x - μ)^2 p(x)
             = \sum_x (x^2 - 2μx + μ^2) p(x)
             = \sum_x x^2 p(x) - 2μ \sum_x x p(x) + μ^2 \sum_x p(x)
             = E[X^2] - 2μ E[X] + μ^2
             = E[X^2] - 2μ^2 + μ^2
      Var(X) = E[X^2] - (E[X])^2

  • Both forms are checked numerically in the second sketch after this page.
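A minimal Python simulation sketch of the "Breaking Vegas" doubling strategy above (not part of the original slides; p_win, max_bet, and the trial count are illustrative choices). With no limits every session ends exactly $1 ahead; with a table limit, the rare long losing streak drags the average below zero, which is the "Vegas Breaks You" point.

```python
import random

def doubling_session(p_win=18/38, max_bet=None):
    """One run of the doubling strategy: bet $1, double after each loss, stop on a win.

    Returns the session's net winnings. max_bet is an illustrative table-limit
    parameter; the slides' idealized game has no such limit.
    """
    bet, net = 1, 0
    while True:
        if max_bet is not None and bet > max_bet:
            return net            # table limit hit; the losses stand
        if random.random() < p_win:
            return net + bet      # a win recovers all prior losses plus $1
        net -= bet                # a loss: double the bet and go again
        bet *= 2

trials = 100_000
# No betting limit: every session eventually ends exactly $1 ahead, so E[Z] = 1.
print(sum(doubling_session() for _ in range(trials)) / trials)
# $500 table limit: a 9-loss streak (probability (20/38)^9) loses $511,
# which is enough to make the average winnings negative.
print(sum(doubling_session(max_bet=500) for _ in range(trials)) / trials)
```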

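The "Computing Variance" derivation above shows E[(X - μ)^2] = E[X^2] - (E[X])^2. A small sketch evaluating both forms on a PMF stored as a Python dict (the dict representation is my choice for illustration, not something the slides specify), using the fair die from the next page as a check:

```python
def mean(pmf):
    """E[X] for a PMF given as a dict {value: probability}."""
    return sum(x * p for x, p in pmf.items())

def variance_definition(pmf):
    """Var(X) straight from the definition E[(X - mu)^2]."""
    mu = mean(pmf)
    return sum((x - mu) ** 2 * p for x, p in pmf.items())

def variance_shortcut(pmf):
    """Var(X) via the derived identity E[X^2] - (E[X])^2."""
    return sum(x ** 2 * p for x, p in pmf.items()) - mean(pmf) ** 2

# Fair 6-sided die: E[X] = 7/2 and Var(X) = 35/12, matching the next page.
die = {x: 1/6 for x in range(1, 7)}
print(mean(die))                 # 3.5
print(variance_definition(die))  # 2.9166...
print(variance_shortcut(die))    # 2.9166... (the two forms agree)
```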
  2. Variance of 6 Sided Die

  • Let X = value on roll of a 6 sided die
  • Recall that E[X] = 7/2
  • Compute E[X^2]:
      E[X^2] = 1^2 \cdot \frac{1}{6} + 2^2 \cdot \frac{1}{6} + 3^2 \cdot \frac{1}{6} + 4^2 \cdot \frac{1}{6} + 5^2 \cdot \frac{1}{6} + 6^2 \cdot \frac{1}{6} = \frac{91}{6}
      Var(X) = E[X^2] - (E[X])^2 = \frac{91}{6} - \left(\frac{7}{2}\right)^2 = \frac{35}{12}

  Properties of Variance

  • Var(aX + b) = a^2 Var(X)
    ▪ Proof:
      Var(aX + b) = E[(aX + b)^2] - (E[aX + b])^2
                  = E[a^2 X^2 + 2abX + b^2] - (aE[X] + b)^2
                  = a^2 E[X^2] + 2ab E[X] + b^2 - (a^2 (E[X])^2 + 2ab E[X] + b^2)
                  = a^2 E[X^2] - a^2 (E[X])^2
                  = a^2 (E[X^2] - (E[X])^2)
                  = a^2 Var(X)
  • Standard Deviation of X, denoted SD(X), is:
      SD(X) = \sqrt{Var(X)}
    ▪ Var(X) is in units of X^2
    ▪ SD(X) is in same units as X

  Jacob Bernoulli

  • Jacob Bernoulli (1654-1705), also known as "James", was a Swiss mathematician
  • One of many mathematicians in the Bernoulli family
  • The Bernoulli Random Variable is named for him
  • He is my academic great^11-grandfather
  • Resemblance to Charlie Sheen weak at best

  Bernoulli Random Variable

  • Experiment results in "Success" or "Failure"
    ▪ X is a random indicator variable (1 = success, 0 = failure)
    ▪ P(X = 1) = p(1) = p and P(X = 0) = p(0) = 1 - p
    ▪ X is a Bernoulli Random Variable: X ~ Ber(p)
    ▪ E[X] = p
    ▪ Var(X) = p(1 - p)
  • Examples
    ▪ Coin flip
    ▪ Random binary digit
    ▪ Whether a disk drive crashed

  Binomial Random Variable

  • Consider n independent trials of a Ber(p) random variable
    ▪ X is the number of successes in n trials
    ▪ X is a Binomial Random Variable: X ~ Bin(n, p)
      P(X = i) = p(i) = \binom{n}{i} p^i (1 - p)^{n-i},  i = 0, 1, ..., n
    ▪ By the Binomial Theorem, we know that \sum_{i=0}^{n} P(X = i) = 1
  • Examples
    ▪ # of heads in n coin flips
    ▪ # of 1's in a randomly generated length n bit string
    ▪ # of disk drives crashed in a 1000 computer cluster
      o Assuming disks crash independently

  Three Coin Flips

  • Three fair ("heads" with p = 0.5) coins are flipped
    ▪ X is the number of heads
    ▪ X ~ Bin(3, 0.5) (computed exactly, then by simulation, in the two sketches after this page)
      P(X = 0) = \binom{3}{0} p^0 (1 - p)^3 = 1/8
      P(X = 1) = \binom{3}{1} p^1 (1 - p)^2 = 3/8
      P(X = 2) = \binom{3}{2} p^2 (1 - p)^1 = 3/8
      P(X = 3) = \binom{3}{3} p^3 (1 - p)^0 = 1/8
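A short sketch of the Bin(n, p) PMF from the slides, using Python's math.comb for the binomial coefficient (assumes Python 3.8 or later). It reproduces the three-coin-flip probabilities 1/8, 3/8, 3/8, 1/8 and the Binomial Theorem check that the PMF sums to 1.

```python
from math import comb

def binomial_pmf(n, p, i):
    """P(X = i) for X ~ Bin(n, p): C(n, i) * p^i * (1 - p)^(n - i)."""
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Three fair coin flips: X ~ Bin(3, 0.5).
for k in range(4):
    print(k, binomial_pmf(3, 0.5, k))            # 0.125, 0.375, 0.375, 0.125

# The PMF sums to 1, as the Binomial Theorem guarantees.
print(sum(binomial_pmf(3, 0.5, k) for k in range(4)))
```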

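Since a Binomial random variable is, by the definition above, the number of successes in n independent Ber(p) trials, simulating three fair coin flips many times should land near the exact 1/8, 3/8, 3/8, 1/8 probabilities. A quick Monte Carlo sketch (illustrative only; the trial count is arbitrary):

```python
import random
from collections import Counter
from math import comb

trials = 100_000
# Number of heads in 3 independent Ber(0.5) trials, repeated many times.
counts = Counter(
    sum(random.random() < 0.5 for _ in range(3))
    for _ in range(trials)
)

# Empirical frequency vs. the exact Bin(3, 0.5) PMF from the Three Coin Flips slide.
for k in range(4):
    exact = comb(3, k) * 0.5 ** 3
    print(k, counts[k] / trials, exact)
```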
  3. Error Correcting Codes

  • Error correcting codes
    ▪ Have original 4 bit string to send over network
    ▪ Add 3 "parity" bits, and send 7 bits total
    ▪ Each bit independently corrupted (flipped) in transmission with probability 0.1
    ▪ X = number of bits corrupted: X ~ Bin(7, 0.1)
    ▪ But, parity bits allow us to correct at most 1 bit error
  • P(a correctable message is received)?
    ▪ P(X = 0) + P(X = 1)

  Error Correcting Codes (cont)

  • Using error correcting codes: X ~ Bin(7, 0.1)
      P(X = 0) = \binom{7}{0} (0.1)^0 (0.9)^7 ≈ 0.4783
      P(X = 1) = \binom{7}{1} (0.1)^1 (0.9)^6 ≈ 0.3720
    ▪ P(X = 0) + P(X = 1) ≈ 0.8503
  • What if we didn't use error correcting codes?
    ▪ X ~ Bin(4, 0.1)
    ▪ P(correct message received) = P(X = 0)
      P(X = 0) = \binom{4}{0} (0.1)^0 (0.9)^4 ≈ 0.6561
  • Using error correction improves reliability ~30%! (Reproduced in the first sketch after this page.)

  Genetic Inheritance

  • Person has 2 genes for a trait (eye color)
    ▪ Child receives 1 gene (equally likely) from each parent
    ▪ Child has brown eyes if either (or both) genes are brown
    ▪ Child only has blue eyes if both genes are blue
    ▪ Brown is "dominant" (d), Blue is recessive (r)
    ▪ Parents each have 1 brown and 1 blue gene
  • 4 children, what is P(3 children with brown eyes)?
    ▪ Child has blue eyes: p = (1/2)(1/2) = 1/4 (needs 2 blue genes)
    ▪ P(child has brown eyes) = 1 - 1/4 = 0.75
    ▪ X = # of children with brown eyes. X ~ Bin(4, 0.75)
      P(X = 3) = \binom{4}{3} (0.75)^3 (0.25)^1 ≈ 0.4219

  Properties of Bin(n, p)

  • We have X ~ Bin(n, p)
      E[X^k] = \sum_{i=0}^{n} i^k \binom{n}{i} p^i (1 - p)^{n-i} = \sum_{i=1}^{n} i^k \binom{n}{i} p^i (1 - p)^{n-i}
    ▪ Noting that i \binom{n}{i} = n \binom{n-1}{i-1}, since
      i \frac{n!}{i!(n - i)!} = n \frac{(n - 1)!}{(i - 1)!((n - 1) - (i - 1))!}
    ▪ and substituting j = i - 1:
      E[X^k] = np \sum_{i=1}^{n} i^{k-1} \binom{n-1}{i-1} p^{i-1} (1 - p)^{n-i}
             = np \sum_{j=0}^{n-1} (j + 1)^{k-1} \binom{n-1}{j} p^j (1 - p)^{(n-1)-j}
             = np E[(Y + 1)^{k-1}],  where Y ~ Bin(n - 1, p)
    ▪ Set k = 1 → E[X] = np
    ▪ Set k = 2 → E[X^2] = np E[Y + 1] = np[(n - 1)p + 1]
    ▪ Var(X) = np[(n - 1)p + 1] - (np)^2 = np(1 - p)
      (These formulas, and the eye-color answer above, are checked in the second sketch after this page.)
  • Note: Ber(p) = Bin(1, p)

  [Figures: PMF for X ~ Bin(10, 0.5) and PMF for X ~ Bin(10, 0.3), each plotting P(X = k) against k]
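The reliability comparison on the error correcting codes slides follows directly from the Bin(n, p) PMF. A small sketch reproducing the 0.8503 vs. 0.6561 numbers (re-declaring the binomial_pmf helper from the earlier sketch so the block stands alone):

```python
from math import comb

def binomial_pmf(n, p, i):
    return comb(n, i) * p**i * (1 - p)**(n - i)

p = 0.1   # per-bit corruption probability

# With 3 parity bits (7 bits sent), up to 1 corrupted bit can be fixed: X ~ Bin(7, 0.1).
p_correctable = binomial_pmf(7, p, 0) + binomial_pmf(7, p, 1)   # ~0.8503

# Without the parity bits (4 bits sent), only an error-free message survives: X ~ Bin(4, 0.1).
p_uncoded = binomial_pmf(4, p, 0)                               # ~0.6561

print(p_correctable, p_uncoded, p_correctable / p_uncoded)      # ratio ~1.30, i.e. ~30% better
```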

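Both results on the previous page, the eye-color probability and the derived formulas E[X] = np and Var(X) = np(1 - p), can be checked numerically by summing over the PMF. A brief sketch (again re-declaring binomial_pmf so it runs on its own):

```python
from math import comb

def binomial_pmf(n, p, i):
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Genetic inheritance: each child has brown eyes with probability 0.75,
# so with 4 children X ~ Bin(4, 0.75) and P(X = 3) = 0.421875.
print(binomial_pmf(4, 0.75, 3))

# Check E[X] = np and Var(X) = np(1 - p) by direct summation over the PMF.
n, p = 4, 0.75
mean = sum(i * binomial_pmf(n, p, i) for i in range(n + 1))
var = sum(i**2 * binomial_pmf(n, p, i) for i in range(n + 1)) - mean**2
print(mean, n * p)            # both 3.0
print(var, n * p * (1 - p))   # both 0.75
```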
  4. Power of Your Vote

  • Is it better to vote in a small or large state?
    ▪ Small: more likely your vote changes the outcome
    ▪ Large: larger payoff (electoral votes) if the state swings
  • Model: a (= 2n) voters, each equally likely to vote for either candidate
    ▪ You are deciding the (a + 1)st vote, which matters only when the other 2n voters tie
      P(2n voters tie) = \binom{2n}{n} \left(\frac{1}{2}\right)^n \left(\frac{1}{2}\right)^n = \frac{(2n)!}{n! \, n!} \cdot \frac{1}{2^{2n}}
  • Use Stirling's Approximation: n! ≈ n^{n + 1/2} e^{-n} \sqrt{2π}
      P(2n voters tie) ≈ \frac{(2n)^{2n + 1/2} e^{-2n} \sqrt{2π}}{n^{2n + 1} e^{-2n} 2π \, 2^{2n}} = \frac{1}{\sqrt{nπ}} = \sqrt{\frac{2}{aπ}}
  • Power = P(tie) * Elec. Votes ≈ c \sqrt{\frac{2}{aπ}}, where c is the state's electoral votes
    ▪ If c grows in proportion to a, power grows like \sqrt{a} (compared with the exact value in the sketch after this page)
    ▪ Larger state = more power
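A sketch comparing the exact tie probability with its Stirling approximation, and showing that a state's "power" grows like the square root of its size when electoral votes are proportional to voters (the specific voter counts are illustrative choices, not from the slides):

```python
from math import comb, sqrt, pi

def p_tie_exact(n):
    """P(2n voters split evenly) = C(2n, n) / 2^(2n)."""
    return comb(2 * n, n) / 2 ** (2 * n)

def p_tie_stirling(n):
    """Stirling approximation of the same probability: 1 / sqrt(n * pi)."""
    return 1 / sqrt(n * pi)

for n in (10, 1_000, 100_000):
    print(n, p_tie_exact(n), p_tie_stirling(n))   # the two agree more closely as n grows

# If electoral votes are proportional to the number of voters a = 2n, then
# power ~ a * P(tie) ~ sqrt(2a / pi): quadrupling a roughly doubles the power.
for a in (10_000, 40_000, 160_000):
    print(a, a * p_tie_exact(a // 2))
```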
