

1. [International Congress of Nonstandard Methods in Mathematics, Pisa, 25-31 May 2006] Independent Random Matching. Darrell Duffie, Stanford University, and Yeneng Sun, National University of Singapore.

2. Dynamic Random Matching. Let $S = \{1, 2, \ldots, K\}$ be a finite set of types, and consider a discrete-time dynamical system $\mathbb{D}$ with random mutation, partial matching, and type changing. The initial distribution of types is $p^0$. In each time period $n \geq 1$:
• First, each type-$k$ agent randomly mutates to an agent of type $l$ with probability $b_{kl}$.
• Then, each agent of type $k$ is either not matched, with probability $q_k$, or is matched to a type-$l$ agent with a probability proportional to the fraction of type-$l$ agents in the population immediately after the random mutation step.

3.
• When an agent is not matched, she keeps her type.
• When a type-$k$ agent is matched with a type-$l$ agent, the type-$k$ agent becomes type $r$ with probability $\nu_{kl}(r)$, where $\nu_{kl}$ is a probability distribution on $S$, and similarly for the type-$l$ agent.
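To make the dynamics concrete, here is a minimal finite-population sketch in Python of one period of the system just described: mutation via $b$, partial matching via $q$, and type changing via $\nu$. The population size N, the sentinel J for "not matched", and the uniform pairing of the matched pool (which reproduces the model's proportional matching probabilities only approximately, for large N) are illustrative choices, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3          # number of types; S = {0, ..., K-1}
N = 10_000     # finite stand-in for the continuum of agents
J = -1         # sentinel index meaning "not matched"

b = np.full((K, K), 1.0 / K)      # mutation probabilities b[k, l]
q = np.array([0.3, 0.5, 0.2])     # no-match probabilities q[k]
nu = np.full((K, K, K), 1.0 / K)  # nu[k, l, r]: new-type law after a (k, l) match

def one_period(types):
    """One period: random mutation, random partial matching, type changing."""
    # 1. Random mutation: each type-k agent becomes type l with probability b[k, l].
    types = np.array([rng.choice(K, p=b[k]) for k in types])

    # 2. Random partial matching: a type-k agent stays unmatched with probability
    #    q[k]; the rest are paired uniformly at random, which for large N matches
    #    an agent to a type-l partner with probability roughly proportional to the
    #    post-mutation fraction of type-l agents, as in the model.
    unmatched = rng.random(N) < q[types]
    pool = np.flatnonzero(~unmatched)
    rng.shuffle(pool)
    if len(pool) % 2 == 1:      # one leftover agent cannot be paired
        pool = pool[:-1]
    partner = np.full(N, J)
    partner[pool[0::2]] = pool[1::2]
    partner[pool[1::2]] = pool[0::2]

    # 3. Type changing: a matched agent of type k meeting type l redraws its type
    #    from nu[k, l]; unmatched agents keep their type.
    new_types = types.copy()
    for i in np.flatnonzero(partner != J):
        new_types[i] = rng.choice(K, p=nu[types[i], types[partner[i]]])
    return new_types

types = rng.choice(K, size=N)                 # initial type function alpha^0
types = one_period(types)
print(np.bincount(types, minlength=K) / N)    # realized cross-sectional distribution
```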

4.
• When $I$ is finite, the independence condition cannot be imposed even for static full matchings.
• Correlation reduces to zero as the population grows large.
• Independent random matching in a continuum population (i.e., a non-atomic measure space of agents) is widely used, explicitly and implicitly, in the economics literature and in evolutionary biology.
• However, a mathematical foundation has been lacking.

5. Formal Inductive Definition of Dynamic Random Matching. Let $\alpha^0 : I \to S = \{1, \ldots, K\}$ be an initial type function with distribution $p^0$ on $S$. For time period $n \geq 1$, a random mutation is modeled by a process $h^n$ from $(I \times \Omega, \mathcal{I} \boxtimes \mathcal{F}, \lambda \boxtimes P)$ to $S$. Given a $K \times K$ probability transition matrix $b$, we require, for each agent $i \in I$,
$$P(h^n_i = l \mid \alpha^{n-1}_i = k) = b_{kl},$$
the specified probability with which an agent of type $k$ at the end of time period $n-1$ mutates to type $l$.

6. Let $p^{n-1/2}$ be the expected cross-sectional type distribution immediately after the random mutation. The random partial matching function $\pi^n$ at time $n$ is defined by:
1. For any $\omega \in \Omega$, $\pi^n_\omega(\cdot)$ is a full matching on $I - (\pi^n_\omega)^{-1}(\{J\})$.
2. Extending $h^n$ so that $h^n(J, \omega) = J$ for any $\omega \in \Omega$, let $g^n(i, \omega) = h^n(\pi^n(i, \omega), \omega)$.
3. Let $q \in [0,1]^S$. For each agent $i \in I$,
$$P(g^n_i = J \mid h^n_i = k) = q_k, \qquad P(g^n_i = l \mid h^n_i = k) = \frac{(1 - q_k)(1 - q_l)\, p^{n-1/2}_l}{\sum_{r=1}^K (1 - q_r)\, p^{n-1/2}_r}.$$
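The conditional matching probabilities in step 3 can be tabulated directly from $q$ and $p^{n-1/2}$. A small sketch (the function name and test values are hypothetical):

```python
import numpy as np

def match_probabilities(q, p_half):
    """Matrix of P(g^n_i = l | h^n_i = k); the missing row mass q[k] is
    the probability of the no-match event J."""
    weights = (1.0 - q) * p_half                   # (1 - q_l) p^{n-1/2}_l
    return np.outer(1.0 - q, weights) / weights.sum()

q = np.array([0.3, 0.5, 0.2])          # hypothetical no-match probabilities
p_half = np.array([0.5, 0.3, 0.2])     # hypothetical post-mutation distribution
P = match_probabilities(q, p_half)
print(P.sum(axis=1) + q)               # each row totals 1, as it should
```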

7. Let $\nu : S \times S \to \Delta$ specify the probability distribution $\nu_{kl} = \nu(k, l)$ of the new type of a type-$k$ agent after she is matched with a type-$l$ agent. We require that the type function $\alpha^n$ after the partial matching satisfies, for each agent $i \in I$,
$$P(\alpha^n_i = r \mid h^n_i = k,\, g^n_i = J) = \delta_{rk}, \qquad P(\alpha^n_i = r \mid h^n_i = k,\, g^n_i = l) = \nu_{kl}(r),$$
where $\delta_{rk}$ is one if $r = k$, and zero otherwise.

8. Markov Conditional Independence
• An independent random mutation follows from the previous period,
• followed by an independent random partial matching,
• and, for matched agents, independent random type changing.
• Formally, the random mutation is Markov conditionally independent if, for $\lambda$-almost all $i, j \in I$ and all types $k, l \in S$,
$$P(h^n_i = k,\, h^n_j = l \mid \alpha^0_i, \ldots, \alpha^{n-1}_i;\, \alpha^0_j, \ldots, \alpha^{n-1}_j) = P(h^n_i = k \mid \alpha^{n-1}_i)\, P(h^n_j = l \mid \alpha^{n-1}_j).$$

9. Define a mapping $\Gamma$ from $\Delta$ to $\Delta$ such that, for each $p = (p_1, \ldots, p_K) \in \Delta$, the $r$-th component of $\Gamma$ is
$$\Gamma_r(p_1, \ldots, p_K) = q_r \sum_{m=1}^K p_m b_{mr} + \sum_{k,l=1}^K \nu_{kl}(r)\, \frac{(1 - q_k)(1 - q_l) \left( \sum_{m=1}^K p_m b_{mk} \right) \left( \sum_{j=1}^K p_j b_{jl} \right)}{\sum_{t=1}^K (1 - q_t) \sum_{j=1}^K p_j b_{jt}}.$$
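The map $\Gamma$ translates line by line into code: the first term collects unmatched agents who keep their post-mutation type, and the second distributes the matched pairs according to $\nu$. A NumPy sketch (names are illustrative):

```python
import numpy as np

def Gamma(p, b, q, nu):
    """One application of Gamma: maps p^{n-1} to p^n.

    p:  type distribution, shape (K,)
    b:  mutation matrix, shape (K, K)
    q:  no-match probabilities, shape (K,)
    nu: type-change kernel nu[k, l, r], shape (K, K, K)
    """
    p_half = p @ b                       # post-mutation distribution: sum_m p_m b_mk
    stay = q * p_half                    # unmatched type-r agents keep type r
    w = (1.0 - q) * p_half               # matching weights (1 - q_k) p^{n-1/2}_k
    pair = np.outer(w, w) / w.sum()      # mass of matched (k, l) pairs
    return stay + np.einsum('kl,klr->r', pair, nu)
```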

10. Theorem 1. Let $\mathbb{D}$ be any dynamical system with random mutation, partial matching, and type changing, with parameters $(p^0, b, q, \nu)$, that is Markov conditionally independent. Then:
(1) For time $n \geq 1$, the expected cross-sectional type distribution is given by $p^n = \Gamma(p^{n-1}) = \Gamma^n(p^0)$, and $p^{n-1/2}_k = \sum_{l=1}^K b_{lk}\, p^{n-1}_l$, where $\Gamma^n$ is the composition of $\Gamma$ with itself $n$ times, and where $p^{n-1/2}$ is the expected cross-sectional type distribution after the random mutation.
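Part (1) says the expected cross-sectional distribution is computed by $n$-fold composition; continuing the `Gamma` sketch above:

```python
def p_n(p0, b, q, nu, n):
    """p^n = Gamma^n(p^0), per part (1); reuses Gamma from the sketch above."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n):
        p = Gamma(p, b, q, nu)
    return p    # and p^{n-1/2} = p_n(p0, b, q, nu, n - 1) @ b
```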

11. (2) For $\lambda$-almost all $i \in I$, $\{\alpha^n_i\}_{n=0}^\infty$ is a Markov chain with transition matrix $z^n$ at time $n-1$ defined by
$$z^n_{kl} = q_l\, b_{kl} + \sum_{r,j=1}^K \nu_{rj}(l)\, b_{kr}\, \frac{(1 - q_r)(1 - q_j)\, p^{n-1/2}_j}{\sum_{r'=1}^K (1 - q_{r'})\, p^{n-1/2}_{r'}}.$$
(3) For $\lambda$-almost all $i, j \in I$, the Markov chains $\{\alpha^n_i\}_{n=0}^\infty$ and $\{\alpha^n_j\}_{n=0}^\infty$ are independent.
(4) For $P$-almost all $\omega \in \Omega$, the cross-sectional type process $\{\alpha^n_\omega\}_{n=0}^\infty$ is a Markov chain with transition matrix $z^n$ at time $n-1$.
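The transition matrix $z^n$ of part (2) can likewise be assembled from the parameters and $p^{n-1}$; a sketch along the same lines as above:

```python
import numpy as np

def transition_matrix(p_prev, b, q, nu):
    """z^n as a function of p^{n-1}: z[k, l] = P(alpha^n = l | alpha^{n-1} = k)."""
    p_half = p_prev @ b                  # p^{n-1/2}
    w = (1.0 - q) * p_half
    meet = w / w.sum()                   # probability the partner has type j
    # Matched path: mutate k -> r, stay in the pool (prob 1 - q_r),
    # meet a type-j partner, and redraw type l from nu[r, j].
    matched = np.einsum('kr,r,j,rjl->kl', b, 1.0 - q, meet, nu)
    return q[np.newaxis, :] * b + matched    # first term: mutate k -> l, no match
```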

12. (5) For $P$-almost all $\omega \in \Omega$, at each time period $n \geq 1$, the realized cross-sectional type distribution after the random mutation, $\lambda (h^n_\omega)^{-1}$, is equal to its expectation $p^{n-1/2}$, and the realized cross-sectional type distribution at the end of period $n$, $p^n(\omega) = \lambda (\alpha^n_\omega)^{-1}$, is equal to its expectation $p^n$; thus, $P$-almost surely, $p^n(\omega) = \Gamma^n(p^0)$.
(6) There is a stationary distribution $p^*$. That is, with initial cross-sectional type distribution $p^0 = p^*$, for every $n \geq 1$ the realized cross-sectional type distribution $p^n$ at time $n$ is $p^*$, $P$-almost surely, and $z^n = z^1$. In particular, all of the relevant Markov chains are time-homogeneous, with a constant transition matrix having $p^*$ as a fixed point.
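Part (6) asserts the existence of a stationary $p^*$ with $\Gamma(p^*) = p^*$. A simple way to approximate it numerically is fixed-point iteration of the `Gamma` sketch above; convergence of plain iteration is an assumption of this sketch, not a claim of the theorem:

```python
def stationary(b, q, nu, tol=1e-12, max_iter=100_000):
    """Approximate p* = Gamma(p*) by iterating Gamma from the uniform distribution."""
    K = b.shape[0]
    p = np.full(K, 1.0 / K)
    for _ in range(max_iter):
        p_next = Gamma(p, b, q, nu)
        if np.max(np.abs(p_next - p)) < tol:
            break
        p = p_next
    return p_next
```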

13. Theorem 2. Fixing any parameters $p^0$ for the initial cross-sectional type distribution, $b$ for mutation probabilities, $q \in [0,1]^S$ for no-match probabilities, and $\nu$ for type-changing probabilities, there exists a dynamical system $\mathbb{D}$ with random mutation, partial matching, and type changing that is Markov conditionally independent with these parameters.

14. The six properties in Theorem 1 hold for any Markov conditionally independent dynamical matching, not just for the particular construction given by Theorem 2. This is analogous to the fact that the classical law of large numbers holds for any sequence of random variables satisfying independence (or uncorrelatedness) together with suitable moment conditions, not just for one particular example showing the existence of a sequence of independent random variables.

15. The proof of Theorem 1 is based on the exact law of large numbers. Let $f$ be any real-valued process on $(I \times \Omega, \mathcal{I} \boxtimes \mathcal{F}, \lambda \boxtimes P)$. If $f$ is square integrable and essentially uncorrelated, then
$$P\left( \omega \in \Omega : E(f_\omega) = E f \right) = 1.$$
Based on that, it is easy to show that if $f$ is essentially pairwise independent, then
$$P\left( \omega \in \Omega : \lambda (f_\omega)^{-1} = (\lambda \boxtimes P) f^{-1} \right) = 1.$$
There is also a converse law of large numbers: uncorrelatedness and independence (both standard conditions) are necessary.

16. Law of large numbers for $*$-independent random variables. Let $\{X_i\}_{i=1}^n$ be a hyperfinite sequence of $*$-independent random variables on an internal probability space $(\Omega, \mathcal{F}_0, P_0)$ with internal mean zero and variances bounded by a common standard positive number $C$. The elementary Chebyshev inequality says that for any positive hyperreal number $\epsilon$,
$$P_0\left( |X_1 + \cdots + X_n| / n \geq \epsilon \right) \leq C / (n \epsilon^2),$$
which implies
$$P_0\left( |X_1 + \cdots + X_n| / n \simeq 0 \right) \simeq 1.$$
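The same Chebyshev bound can be checked numerically in the standard finite setting; the distribution, sample sizes, and threshold below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Check P(|X_1 + ... + X_n| / n >= eps) <= C / (n eps^2) by simulation,
# using i.i.d. Uniform(-sqrt(3), sqrt(3)) draws: mean 0, variance C = 1.
n, C, eps, trials = 4_000, 1.0, 0.05, 1_000
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(trials, n))
freq = np.mean(np.abs(X.mean(axis=1)) >= eps)
print(freq, "<=", C / (n * eps**2))    # e.g. 0.001 <= 0.1
```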

17. Loeb Transition Probability
• $(I, \mathcal{I}_0, \lambda_0)$ is a hyperfinite internal probability space.
• $\{(\Omega, \mathcal{F}_0, P^0_i) : i \in I\}$ is an internal collection of hyperfinite internal probability measures.
• Define $\tau_0$ on $(I \times \Omega, \mathcal{I}_0 \otimes \mathcal{F}_0)$ by letting $\tau_0(\{(i, \omega)\}) = \lambda_0(\{i\})\, P^0_i(\{\omega\})$ for $(i, \omega) \in I \times \Omega$.
• Let $(I, \mathcal{I}, \lambda)$, $(\Omega, \mathcal{F}_i, P_i)$, and $(I \times \Omega, \mathcal{I} \boxtimes \mathcal{F}, \tau)$ be the Loeb spaces corresponding respectively to $(I, \mathcal{I}_0, \lambda_0)$, $(\Omega, \mathcal{F}_0, P^0_i)$, and $(I \times \Omega, \mathcal{I}_0 \otimes \mathcal{F}_0, \tau_0)$.
