  1. A characterisation of transient random walks on stochastic matrices with Dirichlet distributed limits
Shaun McKinlay
Australia New Zealand Applied Probability Workshop
University of Queensland, 9 July 2013

  2. Products of random i.i.d. stochastic matrices
Let $\{X(n)\}_{n \ge 1}$ be a sequence of i.i.d. random $d \times d$ stochastic matrices. We consider the limit of the left products $X(n,1) := X(n) X(n-1) \cdots X(1)$ as $n \to \infty$ for a certain class of random stochastic matrices $X(1)$. The right product is given by $X(1,n) := X(1) X(2) \cdots X(n) \stackrel{d}{=} X(n,1)$. These products generate the left and right random walks $n \mapsto X(n,1)$ and $n \mapsto X(1,n)$, respectively.
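The coalescence of the left products is easy to observe numerically. The following is my own minimal sketch (not from the talk), using illustrative Dirichlet$(1,\ldots,1)$ rows: after a few hundred factors, the rows of $X(n,1)$ agree to machine precision.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_steps = 3, 200

# One i.i.d. factor: a d x d stochastic matrix with independent
# Dirichlet(1, ..., 1) rows (illustrative parameters, not from the talk).
def sample_factor():
    return rng.dirichlet(np.ones(d), size=d)

# Left product X(n, 1) = X(n) X(n-1) ... X(1): premultiply at each step.
prod = np.eye(d)
for _ in range(n_steps):
    prod = sample_factor() @ prod

row_spread = np.abs(prod - prod[0]).max()  # rows coalesce as n grows
print(row_spread)
```

The product of stochastic matrices contracts the rows toward each other geometrically fast, so `row_spread` is numerically zero long before 200 steps.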

  3. A theorem by Chamayou and Letac (1994)
In Chamayou and Letac (1994) ("CL94"), the authors study the left products for random stochastic matrices $X$ satisfying:
[I] The rows of $X$ are independent.
[II] The rows of $X$ are Dirichlet distributed.
[III] Letting $(\alpha_{i,1}, \ldots, \alpha_{i,d})$ be the Dirichlet parameters of the $i$th row of $X$, we have $\sum_{j=1}^d \alpha_{i,j} = \sum_{j=1}^d \alpha_{j,i}$ for $i = 1, \ldots, d$.
They show that these conditions are sufficient to ensure that:
[A1] The products $X(n,1)$ converge a.s. to some random matrix $\widetilde{X}$ as $n \to \infty$.
[A2] The limit $\widetilde{X}$ has identical rows a.s.
[A3] The rows of $\widetilde{X}$ are Dirichlet distributed.
This extends a result by Van Assche (1986), who proved it for $d = 2$ and all $\alpha_{i,j} = p > 0$. Volodin, Kotz and Johnson (1993) also independently proved this for all $\alpha_{i,j} = p > 0$ and any $d \ge 2$.
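The CL94 result can be illustrated by simulation. In this sketch of my own (the $2 \times 2$ parameter matrix is made up, chosen so that row sums equal column sums as in [III]), the empirical mean of the limiting row is compared with the mean $t/t_\bullet$ of the Dirichlet $D_t$ distribution, $t = (\alpha_{1\bullet}, \alpha_{2\bullet})$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameter matrix with matching row and column sums ([III]):
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
t = A.sum(axis=1)                      # (alpha_1., alpha_2.) = (3, 4)

def limit_row(n=50):
    # Approximate the a.s. limit of the left products X(n, 1).
    P = np.eye(2)
    for _ in range(n):
        X = np.vstack([rng.dirichlet(A[i]) for i in range(2)])
        P = X @ P
    return P[0]                        # rows are numerically identical

samples = np.array([limit_row() for _ in range(2000)])
emp_mean = samples.mean(axis=0)
print(emp_mean, t / t.sum())           # both close to (3/7, 4/7)
```

Matching means is of course only a sanity check, not a proof of the full distributional claim [A3].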

  4. A characterisation theorem
It turns out that assertions [A1]–[A3] remain true under much broader conditions. Denote by $K_d$ the class of all distributions of a random $d \times d$ stochastic matrix $X$ such that [A1]–[A3] hold. We extend the result in CL94 by providing a characterisation theorem for the class $K_d$.

  5. Some notation
For a matrix $P = (p_{i,j})_{i=1,\,j=1}^{r,\,c}$, we denote its row and column sums by
$p_{i\bullet} := \sum_{j=1}^c p_{i,j}$ for $i = 1, \ldots, r$, and
$p_{\bullet j} := \sum_{i=1}^r p_{i,j}$ for $j = 1, \ldots, c$.
For a vector $(y_1, \ldots, y_c)$, we denote the sum of its components by $y_\bullet := \sum_{i=1}^c y_i$, and set $\mathbb{R}_+ := (0, \infty)$.

  6. For a vector $a = (a_1, \ldots, a_d) \in \mathbb{R}_+^d$, we denote by
$D_a$: the Dirichlet distribution with parameter vector $a$;
$G_a$: the distribution $\Gamma_{a_1} \otimes \cdots \otimes \Gamma_{a_d}$, that is, $(Z_1, \ldots, Z_d) \sim G_a$ iff each $Z_i \sim \Gamma_{a_i}$ and $Z_1, \ldots, Z_d$ are independent.
In what follows, the matrix $A = (\alpha_{i,j})_{i=1,\,j=1}^{r,\,c}$ will be the set of parameters for the following distributions:
$D_A$: the law of the matrix $X = (X_{i,j})$ with rows $X^{(i)} := (X_{i,1}, \ldots, X_{i,c}) \sim D_{(\alpha_{i,1}, \ldots, \alpha_{i,c})}$, where $X^{(1)}, \ldots, X^{(r)}$ are independent;
$G_A$: the law of the matrix $Z = (Z_{i,j})$ such that $Z^{(i)} \sim G_{(\alpha_{i,1}, \ldots, \alpha_{i,c})}$ and $Z^{(1)}, \ldots, Z^{(r)}$ are independent.

  7. Chamayou and Letac's first theorem and our extension
The following theorem is the first main result in CL94:
Theorem. If $(Y, X) \sim D_{(\alpha_{1\bullet}, \ldots, \alpha_{r\bullet})} \otimes D_A$, then $YX \sim D_{(\alpha_{\bullet 1}, \ldots, \alpha_{\bullet c})}$.
We extend this theorem as follows:
Theorem. Let $t = (t_1, \ldots, t_r) \in \mathbb{R}_+^r$ and $s = (s_1, \ldots, s_c) \in \mathbb{R}_+^c$ with $t_\bullet = s_\bullet$. Suppose $X$ is an $r \times c$ non-negative random matrix independent of both $Y \sim D_t$ and $V \sim G_t$. Then $YX \sim D_s$ iff $VX \sim G_s$.

  8. Two properties of the gamma and Dirichlet distributions
Let $Z = (Z_1, \ldots, Z_d) \sim G_t$ for some $t \in \mathbb{R}_+^d$. Then
$$\left( \frac{Z_1}{Z_\bullet}, \ldots, \frac{Z_d}{Z_\bullet} \right) \sim D_t. \qquad (1)$$
The second property is
$$(Z_1, \ldots, Z_d) \stackrel{d}{=} \left( \frac{Z_1}{Z_\bullet}, \ldots, \frac{Z_d}{Z_\bullet} \right) \widetilde{Z}_\bullet, \qquad (2)$$
where $(\widetilde{Z}_1, \ldots, \widetilde{Z}_d)$ is an independent copy of $Z$.
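Property (1), and the independence between the normalized vector and $Z_\bullet$ that underlies (2), can be checked by simulation. The following sketch uses made-up parameters; it compares empirical Dirichlet means and checks that $Z/Z_\bullet$ is uncorrelated with $Z_\bullet$.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.array([1.5, 2.0, 3.5])          # illustrative parameter vector
n = 20000

Z = rng.gamma(shape=t, size=(n, 3))    # each row ~ G_t
S = Z.sum(axis=1)                      # Z_.
W = Z / S[:, None]                     # property (1): each row of W ~ D_t

print(W.mean(axis=0), t / t.sum())     # Dirichlet(t) mean is t / t_.
print(np.corrcoef(S, W[:, 0])[0, 1])   # ~ 0: normalized vector indep. of Z_.
```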

  9. Our theorem is indeed an extension
For $(V, Z) \sim G_{(\alpha_{1\bullet}, \ldots, \alpha_{r\bullet})} \otimes G_A$, property (1) implies that
$$X := \begin{pmatrix} \dfrac{Z_{1,1}}{Z_{1\bullet}} & \cdots & \dfrac{Z_{1,c}}{Z_{1\bullet}} \\ \vdots & \ddots & \vdots \\ \dfrac{Z_{r,1}}{Z_{r\bullet}} & \cdots & \dfrac{Z_{r,c}}{Z_{r\bullet}} \end{pmatrix} \sim D_A \quad \text{is independent of } V.$$
Now
$$VX = \sum_{k=1}^r V_k \left( \frac{Z_{k,1}}{Z_{k\bullet}}, \ldots, \frac{Z_{k,c}}{Z_{k\bullet}} \right) \stackrel{d}{=} \sum_{k=1}^r (Z_{k,1}, \ldots, Z_{k,c}) = (Z_{\bullet 1}, \ldots, Z_{\bullet c}) \sim G_{(\alpha_{\bullet 1}, \ldots, \alpha_{\bullet c})}.$$
It follows from our theorem that, for a random vector $Y$ satisfying $(Y, X) \sim D_{(\alpha_{1\bullet}, \ldots, \alpha_{r\bullet})} \otimes D_A$, one has $YX \sim D_{(\alpha_{\bullet 1}, \ldots, \alpha_{\bullet c})}$.
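If one takes $V$ to be the row-sum vector of $Z$ itself (which, by property (1) and row independence, is independent of $X$ and distributed as $G_{(\alpha_{1\bullet},\ldots,\alpha_{r\bullet})}$), the identity $VX = (Z_{\bullet 1}, \ldots, Z_{\bullet c})$ holds pathwise, which makes it trivial to verify. A sketch with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[2.0, 1.0, 1.5],
              [0.5, 3.0, 1.0]])        # illustrative r x c parameter matrix

Z = rng.gamma(shape=A)                 # Z ~ G_A: independent Gamma(alpha_ij) entries
V = Z.sum(axis=1)                      # row sums ~ G_(alpha_1., ..., alpha_r.)
X = Z / V[:, None]                     # X ~ D_A by (1), independent of V

# V X recovers the column sums of Z exactly:
print(np.allclose(V @ X, Z.sum(axis=0)))  # True
```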

  10. A theorem by Pitman
The proof of our extension to the first main theorem in CL94 is based on an extension of the following remarkable observation from Pitman (1937). Let $Z = (Z_1, \ldots, Z_d) \sim G_t$, and let $f : \mathbb{R}^d \to \mathbb{R}$ be a scale-independent function, i.e., for any $a \neq 0$,
$$f(a x_1, \ldots, a x_d) \equiv f(x_1, \ldots, x_d).$$
Then $f(Z)$ is independent of $Z_\bullet$.
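Pitman's observation can be probed numerically as well. In this sketch (parameters made up), the scale-independent function $f(x) = x_1/x_2$ is essentially uncorrelated with $Z_\bullet$, consistent with independence.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.array([2.0, 3.0])               # illustrative gamma parameters
Z = rng.gamma(shape=t, size=(50000, 2))

S = Z.sum(axis=1)                      # Z_.
F = Z[:, 0] / Z[:, 1]                  # f(a x) = f(x): scale-independent

print(np.corrcoef(S, F)[0, 1])         # near 0, as Pitman's theorem predicts
```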

  11. An extension of Pitman's theorem
Lemma. Let $(\Omega, \mathcal{F}, P)$ be a probability space, $(E, \mathcal{E})$ a measurable space, and $X : \Omega \to E$ a random element. Suppose $H : \mathbb{R}^r \times E \to [0, \infty)$ is jointly measurable and, for any $a \neq 0$ and $\omega \in \Omega$,
$$H(a y_1, \ldots, a y_r, X(\omega)) = H(y_1, \ldots, y_r, X(\omega)) \quad \text{for all } (y_1, \ldots, y_r) \in \mathbb{R}^r.$$
If $V = (V_1, \ldots, V_r) \sim G_t$, $t \in \mathbb{R}_+^r$, is independent of $X$, then $V_\bullet$ is independent of $H(V, X)$.
To prove this lemma we show that the joint Laplace transform $\phi(s, u) = \mathbb{E}\, e^{-s V_\bullet - u H(V, X)}$ can be expressed as the product of two functions, one depending only on $s$ and the other only on $u$.
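The shape of the factorization can be sketched as follows (my own gloss on the stated proof idea, using property (1) from slide 8): since $H$ is scale-independent in its first $r$ arguments, $H(V, X) = H(V/V_\bullet, X)$, and $V/V_\bullet \sim D_t$ is independent of $V_\bullet$ (and of $X$), so
$$\phi(s, u) = \mathbb{E}\, e^{-s V_\bullet - u H(V/V_\bullet,\, X)} = \mathbb{E}\, e^{-s V_\bullet} \cdot \mathbb{E}\, e^{-u H(V/V_\bullet,\, X)},$$
which is a product of a function of $s$ alone and a function of $u$ alone, as required.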

  12. A proof of our first theorem
For the forward implication, let $VX \sim G_s$ for $V \sim G_t$ independent of $X$. The previous lemma implies that, for the function $H(v_1, \ldots, v_r, X) = (H_1(v_1, \ldots, v_r, X), \ldots, H_c(v_1, \ldots, v_r, X))$ defined by
$$H_j(v_1, \ldots, v_r, X) := \sum_{i=1}^r \frac{v_i X_{i,j}}{v_\bullet}, \qquad 1 \le j \le c,$$
the random vector $H(V_1, \ldots, V_r, X) \equiv \left( \dfrac{V_1}{V_\bullet}, \ldots, \dfrac{V_r}{V_\bullet} \right) X$ is independent of $V_\bullet$.

  13. Therefore
$$VX = \left( \frac{V_1}{V_\bullet}, \ldots, \frac{V_r}{V_\bullet} \right) X\, V_\bullet \stackrel{d}{=} \left( \frac{V_1}{V_\bullet}, \ldots, \frac{V_r}{V_\bullet} \right) X\, \widetilde{V}_\bullet,$$
where $(\widetilde{V}_1, \ldots, \widetilde{V}_r) \sim G_t$ is independent of $(V, X)$. Since $VX \sim G_s$, for $Z := (Z_1, \ldots, Z_c) \sim G_s$ one has
$$VX \stackrel{d}{=} Z \stackrel{d}{=} \left( \frac{Z_1}{Z_\bullet}, \ldots, \frac{Z_c}{Z_\bullet} \right) \widetilde{Z}_\bullet,$$
with $(\widetilde{Z}_1, \ldots, \widetilde{Z}_c)$ an independent copy of $Z$.

  14. Taking logarithms of the components of the vectors above,
$$\left( \ln \frac{\sum_{i=1}^r V_i X_{i,1}}{V_\bullet}, \ldots, \ln \frac{\sum_{i=1}^r V_i X_{i,c}}{V_\bullet} \right) + \ln(\widetilde{V}_\bullet)(1, \ldots, 1) \stackrel{d}{=} \left( \ln \frac{Z_1}{Z_\bullet}, \ldots, \ln \frac{Z_c}{Z_\bullet} \right) + \ln(\widetilde{Z}_\bullet)(1, \ldots, 1).$$
Since $t_\bullet = s_\bullet$, one has $\widetilde{V}_\bullet \stackrel{d}{=} \widetilde{Z}_\bullet$, and so, letting $\psi$, $\varphi$ and $\chi$ denote the characteristic functions of the first, second (and fourth), and third terms above, respectively, we have
$$\psi(u_1, \ldots, u_c)\, \varphi(u_1, \ldots, u_c) = \chi(u_1, \ldots, u_c)\, \varphi(u_1, \ldots, u_c).$$

  15. We conclude that $\psi \equiv \chi$ (since $\varphi(u_1, \ldots, u_c) \neq 0$), and therefore
$$\left( \frac{V_1}{V_\bullet}, \ldots, \frac{V_r}{V_\bullet} \right) X = \left( \frac{\sum_{i=1}^r V_i X_{i,1}}{V_\bullet}, \ldots, \frac{\sum_{i=1}^r V_i X_{i,c}}{V_\bullet} \right) \stackrel{d}{=} \left( \frac{Z_1}{Z_\bullet}, \ldots, \frac{Z_c}{Z_\bullet} \right) \sim D_s.$$
Since the left-hand side above has the form $YX$ for $Y \sim D_t$ independent of $X$, we have $YX \sim D_s$ as required. One can obtain the backward implication by reversing these steps.

  16. Chamayou and Letac's transient random walk
The following theorem is the second main result in CL94.
Theorem. If $r = c = d$, $X \sim D_A$, and $(\alpha_{1\bullet}, \ldots, \alpha_{d\bullet}) = (\alpha_{\bullet 1}, \ldots, \alpha_{\bullet d})$, then $\mathcal{L}(X) \in K_d$, and $\widetilde{X}^{(1)} \sim D_{(\alpha_{1\bullet}, \ldots, \alpha_{d\bullet})}$. Furthermore, if $Y$ is a random vector in the $d$-dimensional simplex that is independent of $X$, then $YX \stackrel{d}{=} Y$ iff $Y \stackrel{d}{=} \widetilde{X}^{(1)}$.

  17. A characterisation theorem for $\mathcal{L}(X) \in K_d$
The following theorem is an extension of the second main theorem in CL94.
Theorem.
(i) $\mathcal{L}(X) \in K_d$ iff
[C1] there exists $t \in \mathbb{R}_+^d$ such that, for a random vector $V \sim G_t$ independent of $X$, one has $VX \stackrel{d}{=} V$; and
[C2] for an i.i.d. sequence $\{X(n)\}_{n \ge 1}$ with $X(1) \stackrel{d}{=} X$, there exists $m < \infty$ such that $P(X(m,1) \text{ is positive}) > 0$.
(ii) If $\mathcal{L}(X) \in K_d$, then $\widetilde{X}^{(1)} \sim D_t$, where the vector $t$ is the same as in [C1], and if $Y$ is a random vector in the $d$-dimensional simplex that is independent of $X$, then $YX \stackrel{d}{=} Y$ iff $Y \stackrel{d}{=} \widetilde{X}^{(1)}$.
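Condition [C1] can be checked by simulation in the Dirichlet case. In this sketch of my own (the matrix $A$ is illustrative, with equal row and column sums), $V \sim G_t$ with $t$ the row-sum vector, $X \sim D_A$ independent of $V$, and the first two moments of $VX$ match those of $V$, consistent with $VX \stackrel{d}{=} V$.

```python
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])             # illustrative: row sums = column sums
t = A.sum(axis=1)                      # t = (3, 4)
n = 40000

V = rng.gamma(shape=t, size=(n, 2))    # n draws of V ~ G_t
G = rng.gamma(shape=A, size=(n, 2, 2)) # n matrices Z ~ G_A
X = G / G.sum(axis=2, keepdims=True)   # rows normalized: X ~ D_A, indep. of V

W = np.einsum('ni,nij->nj', V, X)      # n draws of V X
print(W.mean(axis=0), t)               # Gamma(t_j) mean is t_j
print(W.var(axis=0), t)                # Gamma(t_j) variance is also t_j
```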

  18. Random exchange models
Suppose we have $d < \infty$ bins holding amounts $q_k(n)$, $k = 1, \ldots, d$, of a homogeneous commodity at times $n = 0, 1, 2, \ldots$. The dynamics of the model are as follows: at time $n \ge 1$, the vector $q(n-1) := (q_1(n-1), \ldots, q_d(n-1))$ changes to $q(n) := q(n-1)\, X(n)$. Then
$$q(n) = q(0)\, X(1,n), \qquad n \ge 1,$$
is a Markov chain with stationary distribution $\mathcal{L}(\widetilde{X}^{(1)})$ (where we assume w.l.o.g. that $\sum_{k=1}^d q_k(0) = 1$).
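The exchange dynamics are straightforward to simulate. The sketch below (illustrative two-bin parameters with matching row and column sums) runs the chain $q(n) = q(n-1)X(n)$ and compares its long-run average with the Dirichlet$(t)$ stationary mean $t/t_\bullet$.

```python
import numpy as np

rng = np.random.default_rng(6)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])             # illustrative; row sums = column sums
t = A.sum(axis=1)

def step(q):
    # One exchange: q(n) = q(n-1) X(n), with X(n) ~ D_A
    X = np.vstack([rng.dirichlet(A[i]) for i in range(2)])
    return q @ X

q = np.array([0.5, 0.5])               # q(0) on the simplex
for _ in range(200):                   # burn-in toward stationarity
    q = step(q)

total = np.zeros(2)
n_keep = 5000
for _ in range(n_keep):
    q = step(q)
    total += q
avg = total / n_keep
print(avg, t / t.sum())                # ergodic average vs Dirichlet(t) mean
```

Each step redistributes the total commodity (which stays 1, since each $X(n)$ is stochastic), and the ergodic average settles near $t/t_\bullet$.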
