Fundamental Facts about Conditional Expectation

Example: $X$: height of the chosen human being; $Y$: country of origin of the chosen human being; $Z$: gender of the chosen human being

$\mathbb{E}[X] = \mathbb{E}\big[\mathbb{E}[X \mid Y]\big]$
(average height of all human beings = weighted average of the country-by-country average heights)

$\mathbb{E}[X \mid Z] = \mathbb{E}\big[\mathbb{E}[X \mid Y, Z] \mid Z\big]$
(average height of all male/female human beings = weighted average of the country-by-country average male/female heights)

$\mathbb{E}\big[\mathbb{E}[f(X)\,g(X,Y) \mid X]\big] = \mathbb{E}\big[f(X)\,\mathbb{E}[g(X,Y) \mid X]\big]$
(once $X$ is fixed to some $x$, $\mathbb{E}[f(X)\,g(X,Y) \mid X = x] = f(x)\,\mathbb{E}[g(X,Y) \mid X = x]$)
Fundamental Facts about Conditional Expectation (Cont.)

$\mathbb{E}[X] = \mathbb{E}\big[\mathbb{E}[X \mid Y]\big]$
$\mathbb{E}[X \mid Z] = \mathbb{E}\big[\mathbb{E}[X \mid Y, Z] \mid Z\big]$
$\mathbb{E}\big[\mathbb{E}[f(X)\,g(X,Y) \mid X]\big] = \mathbb{E}\big[f(X)\,\mathbb{E}[g(X,Y) \mid X]\big]$

Generalization to the multivariate case:
$\mathbb{E}[X] = \mathbb{E}\big[\mathbb{E}[X \mid \vec{Y}]\big]$
$\mathbb{E}[X \mid \vec{Z}] = \mathbb{E}\big[\mathbb{E}[X \mid \vec{Y}, \vec{Z}] \mid \vec{Z}\big]$
$\mathbb{E}\big[\mathbb{E}[f(\vec{X})\,g(\vec{X},\vec{Y}) \mid \vec{X}]\big] = \mathbb{E}\big[f(\vec{X})\,\mathbb{E}[g(\vec{X},\vec{Y}) \mid \vec{X}]\big]$
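To see the first identity concretely, here is a minimal Python sketch (not from the slides; the country names and mean heights are made up) that computes both sides of $\mathbb{E}[X] = \mathbb{E}[\mathbb{E}[X \mid Y]]$ on a synthetic population:

```python
import random
from statistics import mean

random.seed(0)

# Made-up synthetic population of (height, country) pairs.
country_means = {"A": 170, "B": 165, "C": 175}        # hypothetical per-country means
population = [
    (random.gauss(mu, 6), country)
    for country, mu in country_means.items()
    for _ in range(10_000)
]

# E[X]: plain average height over the whole population.
e_x = mean(h for h, _ in population)

# E[E[X | Y]]: country-by-country averages, weighted by how many people each country has.
by_country = {}
for h, c in population:
    by_country.setdefault(c, []).append(h)
e_e_x_given_y = sum(len(v) * mean(v) for v in by_country.values()) / len(population)

print(e_x, e_e_x_given_y)   # the two numbers agree up to floating-point rounding
```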
Martingales

"Martingale" originally refers to a betting strategy: "double your bet after every loss."
When you get a win after $n$ losses: $2^n - \sum_{i=0}^{n-1} 2^i = 1$.

Consider a fair game, with any betting strategy, and let $X_n$ be our wealth after $n$ rounds. Then
$\mathbb{E}[X_{n+1} \mid X_0, X_1, \cdots, X_n] = X_n$
since the game is fair: conditioned on the past history, we expect no change to the current value after one round.
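A small Python sketch (illustrative only; parameters such as max_rounds are arbitrary) of the doubling strategy on a fair coin, confirming that a win after $n$ losses nets exactly $2^n - \sum_{i=0}^{n-1} 2^i = 1$:

```python
import random

random.seed(1)

def double_until_win(max_rounds=64):
    """Play 'double your bet after every loss' on a fair coin.

    Returns the net gain when the first win occurs; a win after n losses
    nets 2^n - (2^0 + 2^1 + ... + 2^(n-1)) = 1.
    """
    net, bet = 0, 1
    for _ in range(max_rounds):
        if random.random() < 0.5:   # win: collect the current bet
            return net + bet
        net -= bet                  # loss: pay the bet ...
        bet *= 2                    # ... and double it for the next round
    return net                      # (astronomically unlikely with 64 rounds)

print({double_until_win() for _ in range(10_000)})   # prints {1}
```

The strategy is still not free money: per round the expected change in wealth is zero, and the rare long losing streak is exactly what balances the frequent small wins.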
Martingales

Definition: a sequence of random variables $X_0, X_1, \cdots$ is a martingale if for all $n \ge 0$,
$\mathbb{E}[X_{n+1} \mid X_0, X_1, \cdots, X_n] = X_n$.
Example: Coin Flipping

Toss a fair coin many times; measure the difference between the number of heads and the number of tails.
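A quick Python simulation (illustrative; the numbers of flips and trials are arbitrary) of this experiment. The running difference changes by $\pm 1$ per flip with mean zero, and its typical magnitude after $n$ flips is on the order of $\sqrt{n}$:

```python
import random
import statistics

random.seed(2)

def head_tail_difference(n):
    """(# heads) - (# tails) after n fair coin flips."""
    diff = 0
    for _ in range(n):
        diff += 1 if random.random() < 0.5 else -1
    return diff

n, trials = 10_000, 500
samples = [head_tail_difference(n) for _ in range(trials)]
print(statistics.mean(samples))               # close to 0: no drift
print(statistics.pstdev(samples), n ** 0.5)   # spread is on the order of sqrt(n)
```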
Example: Random Walk

A dot starts from the origin; in each step, it moves equiprobably to one of its four neighbors.
After $n$ steps, let $X_n$ denote the number of hops to the origin (Manhattan distance).
How far is the dot away from the origin after $n$ steps?
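A minimal Python simulation (illustrative; step and trial counts are arbitrary) of the lattice walk, measuring the Manhattan distance $X_n$ after $n$ steps:

```python
import random
import statistics

random.seed(3)

def manhattan_after(n):
    """Manhattan distance |x| + |y| to the origin after n equiprobable lattice steps."""
    x = y = 0
    for _ in range(n):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return abs(x) + abs(y)

n, trials = 10_000, 200
samples = [manhattan_after(n) for _ in range(trials)]
print(statistics.mean(samples), n ** 0.5)   # typical distance is on the order of sqrt(n)
```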
Azuma's Inequality

Azuma's Inequality: let $X_0, X_1, \cdots$ be a martingale such that $|X_k - X_{k-1}| \le c_k$ for every $k$. Then for any $n$ and any $u > 0$,
$\Pr\big[|X_n - X_0| \ge u\big] \le 2 \exp\!\left(-\dfrac{u^2}{2 \sum_{k=1}^{n} c_k^2}\right)$.

Note that $X_0, X_1, \cdots$ are not necessarily independent.
Azuma's Inequality in Action

After $n$ steps, use $X_n$ to denote the number of hops to the origin (Manhattan distance). How large is $X_n$?

We know $X_0 = 0$ and $|X_k - X_{k-1}| \le 1$, so applying the inequality with $c_k = 1$ gives
$\Pr\big[|X_n| \ge u\big] \le 2 e^{-u^2 / (2n)}$.
Taking $u = \sqrt{2 c\, n \ln n}$ makes this at most $2 n^{-c}$, so the dot stays within $O(\sqrt{n \log n})$ of the origin w.h.p.
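To make the bound concrete, here is a hypothetical helper (the name azuma_bound and the choice of threshold are mine, not the slides') that evaluates $2\exp(-u^2/(2\sum_k c_k^2))$ with $c_k = 1$ and compares it against the simulated walk:

```python
import math
import random

random.seed(4)

def azuma_bound(n, u, c=1.0):
    """Azuma tail bound Pr[|X_n - X_0| >= u] <= 2 exp(-u^2 / (2 n c^2))."""
    return 2.0 * math.exp(-u * u / (2.0 * n * c * c))

def manhattan_after(n):
    x = y = 0
    for _ in range(n):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return abs(x) + abs(y)

n = 10_000
u = math.sqrt(6 * n * math.log(n))   # a threshold of order sqrt(n log n)
print(azuma_bound(n, u))             # 2 * n^(-3): tiny
trials = 300
exceed = sum(manhattan_after(n) >= u for _ in range(trials))
print(exceed / trials)               # empirically 0
```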
Azuma's Inequality

For a sequence of random variables, if in each step we
* on average make no change to the current value (martingale), and
* make no big jump (bounded difference),
then the final value does not deviate a lot from the initial value.
Proving Azuma's Inequality

Use a similar strategy as in proving Chernoff bounds:
(a) Apply the generalized Markov's inequality to the MGF
(b) Bound the value of the MGF (using Hoeffding's lemma)
(c) Optimize the resulting bound over the choice of $\lambda$
Proving Azuma's Inequality (Cont.)

Let $X_0, X_1, \cdots$ be a martingale with $|X_k - X_{k-1}| \le c_k$, and write $Y_k = X_k - X_{k-1}$, so that $X_n - X_0 = \sum_{k=1}^{n} Y_k$.

For $\lambda > 0$, by the generalized Markov's inequality applied to the MGF,
$\Pr[X_n - X_0 \ge u] = \Pr\big[e^{\lambda (X_n - X_0)} \ge e^{\lambda u}\big] \le \dfrac{\mathbb{E}\big[e^{\lambda (X_n - X_0)}\big]}{e^{\lambda u}}$.

Using $\mathbb{E}[X] = \mathbb{E}\big[\mathbb{E}[X \mid Y]\big]$ and $\mathbb{E}\big[\mathbb{E}[f(X)\,g(X,Y) \mid X]\big] = \mathbb{E}\big[f(X)\,\mathbb{E}[g(X,Y) \mid X]\big]$,
$\mathbb{E}\big[e^{\lambda \sum_{k=1}^{n} Y_k}\big] = \mathbb{E}\Big[e^{\lambda \sum_{k=1}^{n-1} Y_k}\, \mathbb{E}\big[e^{\lambda Y_n} \mid X_0, \cdots, X_{n-1}\big]\Big]$.

Since $\mathbb{E}[Y_n \mid X_0, \cdots, X_{n-1}] = 0$ (martingale) and $|Y_n| \le c_n$ (bounded difference), Hoeffding's lemma gives
$\mathbb{E}\big[e^{\lambda Y_n} \mid X_0, \cdots, X_{n-1}\big] \le e^{\lambda^2 c_n^2 / 2}$.

Repeating this argument for $k = n, n-1, \cdots, 1$,
$\mathbb{E}\big[e^{\lambda (X_n - X_0)}\big] \le \exp\!\Big(\tfrac{\lambda^2}{2} \sum_{k=1}^{n} c_k^2\Big)$,
so
$\Pr[X_n - X_0 \ge u] \le \exp\!\Big(\tfrac{\lambda^2}{2} \sum_{k=1}^{n} c_k^2 - \lambda u\Big)$,
which is minimized when $\lambda = \dfrac{u}{\sum_{k=1}^{n} c_k^2}$, giving
$\Pr[X_n - X_0 \ge u] \le \exp\!\Big(-\dfrac{u^2}{2 \sum_{k=1}^{n} c_k^2}\Big)$.

Applying the same bound to the martingale $-X_0, -X_1, \cdots$ gives the symmetric bound on $\Pr[X_n - X_0 \le -u]$, and the two together give Azuma's inequality.
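The one step above taken on faith is Hoeffding's lemma: if $\mathbb{E}[Y] = 0$ and $|Y| \le c$, then $\mathbb{E}[e^{\lambda Y}] \le e^{\lambda^2 c^2 / 2}$. A quick numerical check in Python (the two-point distribution is an arbitrary choice of mine):

```python
import math

# An arbitrary zero-mean distribution with |Y| <= c = 1:
# Y = -1/3 with probability 3/4, and Y = +1 with probability 1/4.
values, probs, c = [-1 / 3, 1.0], [0.75, 0.25], 1.0

def mgf(lam):
    """E[e^{lam * Y}], computed exactly for the two-point distribution."""
    return sum(p * math.exp(lam * v) for v, p in zip(values, probs))

for lam in (0.1, 0.5, 1.0, 2.0, 5.0):
    lhs = mgf(lam)
    rhs = math.exp(lam ** 2 * c ** 2 / 2)   # Hoeffding's bound e^{lam^2 c^2 / 2}
    print(f"lambda={lam}: {lhs:.4f} <= {rhs:.4f} -> {lhs <= rhs}")
```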
Proving Azumaβs Inequality ???
Generalized Martingales

Betting on a fair game:
$X_i$: gain/loss of the $i$-th bet
$Y_i$: wealth after the $i$-th bet
$\Rightarrow$ $Y_0, Y_1, \cdots$ is a martingale with respect to $X_0, X_1, \cdots$ (since the game is fair)
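A tiny Python illustration (my own example; the stake rule is arbitrary): the $X_i$ are fair $\pm 1$ bet outcomes, the stake of each bet depends only on past outcomes, and the resulting wealth $Y_i$ is a function of $X_0, \cdots, X_i$ whose conditional expectation given the past equals $Y_{i-1}$:

```python
def stake(past):
    """Next stake, chosen from past outcomes only (any such rule works)."""
    return 1 if not past or past[-1] == 1 else 2   # e.g. bet 1, double after a loss

def wealth(outcomes):
    """Y_i = f(X_0, ..., X_i): wealth after betting on the given +/-1 outcomes."""
    w, past = 0, []
    for x in outcomes:
        w += stake(past) * x
        past.append(x)
    return w

# Fix an arbitrary past history X_0, ..., X_{i-1} ...
past = [1, -1, -1, 1, -1]
y_prev = wealth(past)

# ... and average Y_i over the two equally likely next outcomes X_i = +1 / -1.
y_next_avg = (wealth(past + [1]) + wealth(past + [-1])) / 2
print(y_prev, y_next_avg)   # equal: E[Y_i | X_0, ..., X_{i-1}] = Y_{i-1}
```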
Generalized Azuma's Inequality

Azuma's Inequality:
  martingale $X_0, X_1, \cdots, X_n$: $\mathbb{E}[X_k \mid X_0, X_1, \cdots, X_{k-1}] = X_{k-1}$;
  with $|X_k - X_{k-1}| \le c_k$, then $\Pr\big[|X_n - X_0| \ge u\big] \le 2 \exp\!\Big(-\dfrac{u^2}{2 \sum_{k=1}^{n} c_k^2}\Big)$.

Generalization:
  martingale $Y_0, Y_1, \cdots$ w.r.t. $X_0, X_1, \cdots$: each $Y_n = f(X_0, X_1, \cdots, X_n)$ is a function of $X_0, \cdots, X_n$, and $\mathbb{E}[Y_n \mid X_0, X_1, \cdots, X_{n-1}] = Y_{n-1}$.

Generalized Azuma's Inequality:
  martingale $Y_0, Y_1, \cdots$ w.r.t. $X_0, X_1, \cdots$ with $|Y_k - Y_{k-1}| \le c_k$, then $\Pr\big[|Y_n - Y_0| \ge u\big] \le 2 \exp\!\Big(-\dfrac{u^2}{2 \sum_{k=1}^{n} c_k^2}\Big)$.
Doob Sequence

The Doob sequence of a function $f(X_1, X_2, \cdots, X_n)$ with respect to $X_1, X_2, \cdots, X_n$ is defined by
$Y_i = \mathbb{E}\big[f(X_1, X_2, \cdots, X_n) \mid X_1, \cdots, X_i\big]$, for $0 \le i \le n$.

In particular, $Y_0 = \mathbb{E}\big[f(X_1, X_2, \cdots, X_n)\big]$: the average over no information.
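A concrete Python example of a Doob sequence (the function $f$ and the realization are my own choices): take $f$ = number of heads among $n$ fair coin flips; then $Y_i = \mathbb{E}[f \mid X_1, \cdots, X_i]$ = (heads seen so far) + $(n - i)/2$, and averaging $Y_{i+1}$ over the next flip gives back $Y_i$:

```python
n = 6
flips = [1, 0, 1, 1, 0, 1]   # one arbitrary realization of X_1, ..., X_n (1 = head)

def doob(i):
    """Y_i = E[#heads | X_1..X_i] = (heads seen so far) + (n - i) / 2 for fair flips."""
    return sum(flips[:i]) + (n - i) / 2

for i in range(n):
    # The martingale property: averaging Y_{i+1} over the two equally likely
    # values of X_{i+1} recovers Y_i, whatever the first i flips were.
    y_if_head = sum(flips[:i]) + 1 + (n - i - 1) / 2
    y_if_tail = sum(flips[:i]) + 0 + (n - i - 1) / 2
    assert (y_if_head + y_if_tail) / 2 == doob(i)

print([doob(i) for i in range(n + 1)])   # starts at Y_0 = E[f] = n/2, ends at Y_n = f
```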