Eleni Vatamidou, Ivo Adan, Maria Vlasiou, and Bert Zwart
Asymptotic error bounds for truncated buffer approximations of a 2-node tandem queue
MAM-9, Budapest, June 29, 2016
Tandem network: $M^X/M/1 \to \cdot/M/1$

(Figure: two single-server queues in tandem; batch arrivals at rate $\lambda$, service rates $\mu_1$ and $\mu_2$.)

◮ $B$ r.v. for the batch sizes; $E B = \sum_{i=1}^{\infty} i p_i < \infty$
◮ Assumption: $\lambda E B/\mu_i < 1$, $i = 1, 2$
◮ Uniformisation: $\lambda + \mu_1 + \mu_2 = 1$
◮ $X_n$ and $Y_n$ queue lengths (including service) at the $n$th jump epoch, s.t. $(X_n, Y_n) \in \mathbb{N}^2$
Transition diagram of the QBD

(Figure: transition diagram on the $(m_1, m_2)$ lattice; batch arrivals with rates $\lambda p_1, \lambda p_2, \lambda p_3, \ldots$ move along the queue-1 axis, service at rate $\mu_1$ moves one customer from queue 1 to queue 2, service at rate $\mu_2$ removes one customer from queue 2.)

Infinitesimal generator:
$$Q = \begin{pmatrix} B & A_0 & 0 & 0 & \cdots \\ A_2 & A_1 & A_0 & 0 & \cdots \\ 0 & A_2 & A_1 & A_0 & \cdots \\ 0 & 0 & A_2 & A_1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
Matrix-analytic methods – MAM

For an irreducible and positive recurrent Markov chain, there exists a unique $\pi$ with $\pi Q = 0$, $\pi e = 1$.

The stationary distribution
If we partition $\pi$ by level (1st coordinate) into the sub-vectors $\pi_n$, $n \geq 0$, then
$$\pi_0 B + \pi_1 A_2 = 0,$$
$$\pi_{n-1} A_0 + \pi_n A_1 + \pi_{n+1} A_2 = 0, \quad n \geq 1,$$
$$\sum_{n \geq 0} \pi_n e = 1,$$
where each $\pi_n$ is $(N+1)$-dimensional.

Requirement: finite number of phases (2nd coordinate)
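The talk itself contains no code; as a minimal sketch of how the block-tridiagonal balance equations are solved in practice, the snippet below runs the classical matrix-geometric recipe ($\pi_{n+1} = \pi_n R$ for $n \geq 1$, with $R$ the minimal solution of $A_0 + R A_1 + R^2 A_2 = 0$) on a small hypothetical 2-phase QBD. The matrices are illustrative placeholders, not the blocks of the tandem model.

```python
import numpy as np

# Hypothetical 2-phase QBD: arrivals at rate 1 in each phase, phase-dependent
# service rates 2 and 3, phase switching at rate 0.5 (illustrative only).
A0 = np.array([[1.0, 0.0], [0.0, 1.0]])    # one level up (arrival)
A2 = np.array([[2.0, 0.0], [0.0, 3.0]])    # one level down (service)
A1 = np.array([[-3.5, 0.5], [0.5, -4.5]])  # local transitions (diagonal balances rows)
B  = np.array([[-1.5, 0.5], [0.5, -1.5]])  # boundary block (no service at level 0)

# Minimal solution of A0 + R A1 + R^2 A2 = 0 by the natural fixed-point
# iteration R <- -(A0 + R^2 A2) A1^{-1}, starting from R = 0.
R = np.zeros_like(A1)
for _ in range(200):
    R = -(A0 + R @ R @ A2) @ np.linalg.inv(A1)

# Boundary vector (pi_0, pi_1): pi_0 B + pi_1 A2 = 0 and
# pi_0 A0 + pi_1 (A1 + R A2) = 0, one equation replaced by the
# normalisation pi_0 e + pi_1 (I - R)^{-1} e = 1.
I, e = np.eye(2), np.ones(2)
M = np.block([[B, A0], [A2, A1 + R @ A2]])                # x @ M = 0 for x = (pi_0, pi_1)
M[:, -1] = np.concatenate([e, np.linalg.inv(I - R) @ e])  # normalisation column
x = np.linalg.solve(M.T, np.array([0.0, 0.0, 0.0, 1.0]))
pi0, pi1 = x[:2], x[2:]
pi2 = pi1 @ R                                             # pi_{n+1} = pi_n R, n >= 1
```

The fixed-point iteration converges for any positive recurrent QBD; faster alternatives (logarithmic reduction, cyclic reduction) exist but the naive scheme keeps the sketch short.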
Evaluation of $(X_\infty, Y_\infty)$

◮ $(X_0, Y_0) = (0, 0)$ initial state
◮ $T_{(0,0)} = \inf\{n \geq 1 : X_n = Y_n = 0 \mid X_0 = Y_0 = 0\}$, return time to the origin or cycle length

$$P\left(X_\infty \geq x, Y_\infty \geq y\right) = \frac{1}{E T_{(0,0)}} E\left[\sum_{n=1}^{T_{(0,0)}} 1(X_n \geq x, Y_n \geq y)\right]$$
$$= \frac{1}{E T_{(0,0)}} \underbrace{E\left[\sum_{n=1}^{T_{(0,0)}} 1(X_n \geq x, Y_n \geq y) \cdot 1\Big(\max_{1 \leq l \leq T_{(0,0)}} X_l < N\Big)\right]}_{=\,I}$$
$$+ \frac{1}{E T_{(0,0)}} \underbrace{E\left[\sum_{n=1}^{T_{(0,0)}} 1(X_n \geq x, Y_n \geq y) \cdot 1\Big(\max_{1 \leq l \leq T_{(0,0)}} X_l \geq N\Big)\right]}_{=\,II}.$$
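The cycle decomposition above is directly usable as a Monte Carlo estimator. The sketch below simulates the uniformised jump chain of the tandem queue in the special case of batch size 1 (so the network is a product-form tandem and $P(X_\infty \geq 1, Y_\infty \geq 1) = \rho_1 \rho_2$ is known exactly), and estimates the stationary tail probability as a ratio over regeneration cycles; the rates are illustrative and satisfy $\lambda + \mu_1 + \mu_2 = 1$.

```python
import random

random.seed(42)
lam, mu1, mu2 = 0.2, 0.45, 0.35   # uniformised rates; batch size fixed to 1
x_lvl, y_lvl = 1, 1               # estimate P(X_inf >= 1, Y_inf >= 1)

hits = steps = 0
X = Y = 0
for _ in range(100_000):          # regeneration cycles from (0,0) back to (0,0)
    while True:
        u = random.random()
        if u < lam:
            X += 1                # arrival to queue 1
        elif u < lam + mu1:
            if X > 0:             # service at queue 1 (else fictitious event)
                X -= 1
                Y += 1
        else:
            if Y > 0:             # service at queue 2 (else fictitious event)
                Y -= 1
        steps += 1
        if X >= x_lvl and Y >= y_lvl:
            hits += 1
        if X == 0 and Y == 0:
            break

est = hits / steps                # cycle-ratio estimator of the tail probability
```

For these rates $\rho_1 = \lambda/\mu_1 = 4/9$ and $\rho_2 = \lambda/\mu_2 = 4/7$, so the exact value is $\rho_1 \rho_2 \approx 0.254$; the fictitious self-loops of the uniformised chain are exactly what makes the time-average match the continuous-time stationary distribution.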
Truncation of the state space

(Figure: truncated transition diagram; queue 1 is capped at level $N$, and the batch-arrival probabilities that would overshoot are lumped into $\sum_{i \geq N - m_1} p_i$.)

$$I = E\left[\sum_{n=1}^{T^{(N)}_{(0,0)}} 1\left(X^{(N)}_n \geq x, Y^{(N)}_n \geq y\right) \cdot 1\Big(\max_{1 \leq l \leq T^{(N)}_{(0,0)}} X^{(N)}_l < N\Big)\right]$$
$$\leq E\left[\sum_{n=1}^{T^{(N)}_{(0,0)}} 1\left(X^{(N)}_n \geq x, Y^{(N)}_n \geq y\right)\right] = E T^{(N)}_{(0,0)} \, P\left(X^{(N)}_\infty \geq x, Y^{(N)}_\infty \geq y\right).$$
Exceeding the truncation level

$$II = E\left[\sum_{n=1}^{T_{(0,0)}} 1(X_n \geq x, Y_n \geq y) \cdot 1\Big(\max_{1 \leq l \leq T_{(0,0)}} X_l \geq N\Big)\right]$$
$$\leq E\left[T_{(0,0)} \cdot 1\Big(\max_{1 \leq l \leq T_{(0,0)}} X_l \geq N\Big)\right] = E\left[T_{(0,0)} \cdot 1\left(M_{T_{(0,0)}} \geq N\right)\right]$$
$$= E\left[T_{(0,0)} \mid M_{T_{(0,0)}} \geq N\right] P\left(M_{T_{(0,0)}} \geq N\right).$$

Theorem: Upper and lower bounds for the approximation
$$0 \leq P\left(X_\infty \geq x, Y_\infty \geq y\right) - P\left(X^{(N)}_\infty \geq x, Y^{(N)}_\infty \geq y\right) \leq E\left[T_{(0,0)} \mid M_{T_{(0,0)}} \geq N\right] \frac{P\left(M_{T_{(0,0)}} \geq N\right)}{E T_{(0,0)}}.$$
Asymptotic upper bound

Main theorem
As $N \to \infty$,
$$P\left(X_\infty \geq x, Y_\infty \geq y\right) - P\left(X^{(N)}_\infty \geq x, Y^{(N)}_\infty \geq y\right) \lesssim K N e^{-\gamma N},$$
where
$$K = \left[\frac{1}{\mu_2 - \lambda E B}\left(\frac{(\breve\mu_1 - \mu_2)^+}{\breve\lambda E \breve B - \breve\mu_1} + \frac{(\mu_1 - \mu_2)^+}{\mu_1 - \lambda E B}\right) + \frac{1}{\breve\lambda E \breve B - \breve\mu_1} + \frac{1}{\mu_1 - \lambda E B}\right]\left(1 - \frac{\lambda E B}{\mu_1}\right) C_1 e^{\gamma},$$
and $C_1$ is a constant.
Proof

Step 1: Limit for the probability $P\left(M_{T_{(0,0)}} \geq N\right)$
◮ $T_0 = \inf\{n \geq 1 : X_n = 0 \mid X_0 = 0\}$
◮ from extreme value theory, the running maximum over $n$ steps behaves like the maximum of $\approx n/E T_{(0,0)}$ i.i.d. cycle maxima, and equally like the maximum of $\approx n/E T_0$ cycle maxima of the first queue alone:
$$\max_{i=1,\ldots,n} X_i \approx \max_{i=1,\ldots,n/E T_{(0,0)}} M_{T_{(0,0)},i} \approx \max_{i=1,\ldots,n/E T_0} M_{T_0,i}$$
◮ result:
$$\frac{P\left(M_{T_{(0,0)}} \geq N\right)}{E T_{(0,0)}} \sim \frac{P\left(M_{T_0} \geq N\right)}{E T_0}.$$

Step 2: Limit for the probability $P\left(M_{T_0} \geq N\right)$
◮ a conspiracy leads to a maximum value $N$
◮ an exponential change of measure gives $\breve\lambda$, $\breve P(B = n)$, $\breve\mu_1$, and $\breve\mu_2$ ($\gamma$ is the solution of the Lundberg equation).
◮ Cramér–Lundberg approximation: $e^{\gamma(N-1)} P\left(M_{T_0} \geq N\right) \to C_1$
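For the free increments $Z_n$ of queue 1 (defined later in the talk: $+m$ w.p. $\lambda p_m$, $-1$ w.p. $\mu_1$, $0$ w.p. $\mu_2$), the Lundberg equation reads $E[e^{\gamma Z_1}] = 1$. A sketch of solving it numerically, under the assumption of illustrative rates and geometric batch sizes; the tilted parameters $\breve\lambda$, $\breve\mu_1$, $\breve q$ are then read off from the exponential change of measure.

```python
import math

lam, mu1, mu2, q = 0.1, 0.5, 0.4, 0.3  # illustrative; geometric batches p_m = (1-q) q^(m-1)
EB = 1.0 / (1.0 - q)
assert lam + mu1 + mu2 == 1.0 and lam * EB / mu1 < 1  # uniformisation and stability

def lundberg(g):
    """E[exp(g*Z)] - 1 for Z = +B w.p. lam, -1 w.p. mu1, 0 w.p. mu2."""
    mgf_B = (1 - q) * math.exp(g) / (1 - q * math.exp(g))  # E[e^{gB}], valid for q e^g < 1
    return lam * mgf_B + mu1 * math.exp(-g) + mu2 - 1

# gamma is the positive root (g = 0 is always a root); bisect on (0, -log q).
lo, hi = 1e-6, -math.log(q) - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if lundberg(mid) < 0:
        lo = mid
    else:
        hi = mid
gamma = 0.5 * (lo + hi)

# Exponential tilt: every increment z is reweighted by e^{gamma z}.
mu1_t = mu1 * math.exp(-gamma)                                       # tilted mu_1
lam_t = lam * (1 - q) * math.exp(gamma) / (1 - q * math.exp(gamma))  # tilted lambda
q_t = q * math.exp(gamma)                                            # batches stay geometric
EB_t = 1.0 / (1.0 - q_t)
```

For these particular rates the root happens to be $\gamma = \ln 2$, and the tilt preserves the uniformisation ($\breve\lambda + \breve\mu_1 + \mu_2 = 1$) while flipping the drift of queue 1 positive, which is exactly the "conspiracy" mechanism of Step 2.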
Proof (continued)

◮ ergodicity of $X_n$ gives: $E T_0 = 1/P\left(X_\infty = 0\right)$
◮ Little's formula: $P\left(X_\infty = 0\right) = 1 - \rho_1 = 1 - \lambda E B/\mu_1$.
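The identity $E T_0 = 1/P(X_\infty = 0) = 1/(1 - \lambda E B/\mu_1)$ is easy to check by simulating the uniformised jump chain of queue 1 on its own. A sketch with illustrative rates and geometric batch sizes, for which $E T_0 = 1/(1 - \rho_1) = 1.4$:

```python
import random

random.seed(7)
lam, mu1, mu2, q = 0.1, 0.5, 0.4, 0.3  # illustrative; lam + mu1 + mu2 = 1
rho1 = lam * (1 / (1 - q)) / mu1       # = lam * EB / mu1

def cycle_length():
    """Steps of the uniformised jump chain of queue 1 from X = 0 back to X = 0."""
    X, n = 0, 0
    while True:
        u = random.random()
        if u < lam:
            k = 1                      # geometric(1-q) batch size
            while random.random() < q:
                k += 1
            X += k
        elif u < lam + mu1 and X > 0:
            X -= 1                     # real service completion
        # otherwise: mu2 event or idle server -- fictitious for queue 1
        n += 1
        if X == 0:
            return n

cycles = 100_000
mean_T0 = sum(cycle_length() for _ in range(cycles)) / cycles
```

Note that the fictitious jumps (the $\mu_2$ events and idle-server $\mu_1$ events) must be counted: $T_0$ is a cycle length of the uniformised chain, which is what makes $E T_0 = 1/P(X_\infty = 0)$ hold.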
Proof (continued)

Step 3: The conditional expectation $E\left[T_{(0,0)} \mid M_{T_{(0,0)}} \geq N\right]$

(Figure: typical path of queue 1 given $M_{T_{(0,0)}} \geq N$: linear growth to level $N$ on $[0, \tau_1]$ with slope $\breve\lambda E \breve B - \breve\mu_1$, then linear decrease on $[\tau_1, \tau_2]$ with slope $\lambda E B - \mu_1 < 0$.)

(Figure: corresponding path of queue 2: slope $\breve\mu_1 - \mu_2$ on $[0, \tau_1]$ reaching height $h_1$, slope $\mu_1 - \mu_2$ on $[\tau_1, \tau_2]$ adding height $h_2$, then drainage with slope $\lambda E B - \mu_2 < 0$ until queue 2 empties at $\tau_3$, just before $T_{(0,0)}$.)
Proof (continued)

Distributions of the jumps/connection with random walks
$$Z_n = \begin{cases} 0, & \text{with probability } \mu_2, \\ -1, & \text{with probability } \mu_1, \\ m, & \text{with probability } \lambda p_m, \quad m = 1, 2, \ldots, \end{cases}$$
$$W_n = \begin{cases} -1, & \text{if } Z_n = 0, \\ 1, & \text{if } Z_n = -1 \text{ and } X_{n-1} > 0, \\ 0, & \text{else}. \end{cases}$$
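A quick numerical sanity check on $Z_n$, assuming geometric batch sizes and illustrative rates: its distribution sums to one and its mean is the net drift $\lambda E B - \mu_1 < 0$ of queue 1 under the stability assumption.

```python
lam, mu1, mu2, q = 0.1, 0.5, 0.4, 0.3  # illustrative rates, geometric batches
EB = 1 / (1 - q)

# Distribution of Z: 0 w.p. mu2, -1 w.p. mu1, m w.p. lam * (1-q) * q^(m-1).
M = 2000                               # truncation of the (light) geometric tail
probs = {0: mu2, -1: mu1}
for m in range(1, M + 1):
    probs[m] = lam * (1 - q) * q ** (m - 1)

total = sum(probs.values())            # should be ~1 (tail mass is negligible)
mean_Z = sum(z * p for z, p in probs.items())  # should be lam*EB - mu1 < 0
```

The same bookkeeping with every probability reweighted by $e^{\gamma z}$ gives the tilted walk of Step 2, whose drift is positive.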
Behaviour in the time interval $[0, \tau_1]$

Proposition (for Q1)
As $N \to \infty$,
$$E\left[\tau_1 \mid M_{T_{(0,0)}} \geq N\right] = \frac{1}{\breve\lambda E \breve B - \breve\mu_1} N + o(N).$$

Proof.
◮ Let $z$ be s.t. $z > 1/\left(\breve\lambda E \breve B - \breve\mu_1\right)$. Then,
$$E\left[\frac{\tau_1}{N} \,\Big|\, \tau_1 < T_{(0,0)}\right] = \int_0^z P\left(\tau_1 > yN \mid \tau_1 < T_{(0,0)}\right) dy + \int_z^\infty P\left(\tau_1 > yN \mid \tau_1 < T_{(0,0)}\right) dy.$$
◮ change of measure and use of $\displaystyle\lim_{N \to \infty} \breve E\left[\frac{\tau_1}{N}\right] = \frac{1}{\breve\lambda E \breve B - \breve\mu_1}$
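The linear-in-$N$ growth of $\tau_1$ is visible in simulation: under the tilted measure, queue 1 is a random walk with positive drift $\breve\lambda E \breve B - \breve\mu_1$, so the first-passage time of level $N$ is $\approx N/(\breve\lambda E \breve B - \breve\mu_1)$ by Wald's identity. A sketch with illustrative tilted parameters (these are the $\gamma = \ln 2$ tilt of $\lambda = 0.1$, $\mu_1 = 0.5$, $\mu_2 = 0.4$ with geometric batches, $q = 0.3$):

```python
import random

random.seed(11)
# Hypothetical tilted parameters; drift = lam_t * EB_t - mu1_t = 0.625 > 0.
lam_t, mu1_t, q_t = 0.35, 0.25, 0.6
drift = lam_t / (1 - q_t) - mu1_t

def tau1(N):
    """First-passage time of queue 1 to level N under the tilted jump chain."""
    X, n = 0, 0
    while X < N:
        u = random.random()
        if u < lam_t:
            k = 1                      # geometric(1 - q_t) batch
            while random.random() < q_t:
                k += 1
            X += k
        elif u < lam_t + mu1_t and X > 0:
            X -= 1
        n += 1
    return n

N, reps = 200, 3000
mean_tau = sum(tau1(N) for _ in range(reps)) / reps  # ~ N / drift for large N
```

The $o(N)$ term in the proposition shows up here as the overshoot over level $N$ and the boundary effect at $0$, both $O(1)$.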
Behaviour in the time interval $[0, \tau_1]$

Proposition (for Q2)
As $N \to \infty$,
$$E\left[Y_{\tau_1} \mid M_{T_{(0,0)}} \geq N\right] \leq \frac{(\breve\mu_1 - \mu_2)^+}{\breve\lambda E \breve B - \breve\mu_1} N + o(N).$$

Proof.
◮ kill the dependence on $X_{n-1}$:
$$W'_n = \begin{cases} -1, & \text{if } Z_n = 0, \\ 1, & \text{if } Z_n = -1, \\ 0, & \text{else}, \end{cases}$$
◮ use properties of 2-dimensional random walks:
$$\frac{V'_{\tau(N)}}{N} \xrightarrow{\ \breve P\text{-a.s.}\ } \frac{\breve E\, W'}{\breve E\, Z}, \quad N \to \infty$$