
Discrete-Event Systems and Generalized Semi-Markov Processes - PowerPoint PPT Presentation

Outline: Discrete-Event Stochastic Systems (Reading: Section 1.4 in Shedler or Section 4.1 in Haas) · The GSMP Model · Simulating GSMPs · Generating Clock Readings: Inversion Method · Markovian and Semi-Markovian GSMPs


Discrete-Event Systems and Generalized Semi-Markov Processes
Peter J. Haas
CS 590M: Simulation, Spring Semester 2020

Discrete-Event Stochastic Systems

Stochastic state transitions occur at an increasing sequence of random times.

[Figure: a piecewise-constant sample path X(t) versus t]

How to model the underlying process { X(t) : t ≥ 0 }?
◮ Generalized semi-Markov processes (GSMPs)
◮ The basic model of a discrete-event system

GSMP Overview

◮ Events associated with a state "compete" to trigger the next state transition
◮ Each event has its own distribution for determining the next state
◮ New events
  ◮ Associated with the new state but not the old state, or associated with the new state and just triggered the state transition
  ◮ Clock is set with the time until the event occurs (runs down to 0)
◮ Old events
  ◮ Associated with the old and new states, did not trigger the transition
  ◮ Clock continues to run down
◮ Canceled events
  ◮ Associated with the old state but not the new state
  ◮ Clock reading is discarded
◮ Clocks can run down at state-dependent speeds

Clock-Reading Plot

[Figure: clock readings of active events running down over time between state transitions]

GSMP Building Blocks

◮ S: a (finite or countably infinite) set of states
◮ E = { e_1, e_2, ..., e_M }: a finite set of events
◮ E(s) ⊆ E: the set of active events in state s ∈ S
◮ p(s′; s, E*): probability that the new state = s′ when the events in E* simultaneously occur in s
  ◮ Write p(s′; s, e*) if E* = { e* } (unique trigger event)
◮ r(s, e): the nonnegative finite speed at which the clock for e runs down in state s
  ◮ Typically r(s, e) = 1
  ◮ Set r(s, e) = 0 to model a "preempt-resume" service discipline
◮ F(·; s′, e′, s, E*): cdf of the new clock reading for e′ after a state transition s → s′ triggered by E*
◮ μ: initial distribution for the state and clock readings
  ◮ Assume initial state s ∼ ν and clock readings ∼ F_0(·; e, s)

New and Old Events

[Figure: timeline showing new, old, and canceled event clocks across a state transition]

Example: GI/G/1 Queue

- Assume that the interarrival-time dist'n F_a and the service-time dist'n F_s are continuous (no simultaneous event occurrences)
- Assume that at time t = 0 a job arrives to an empty system
- X(t) = # of jobs in service or waiting in queue at time t

Can define (X(t) : t ≥ 0) as a GSMP:
◮ S =
◮ E =
◮ E(s) =
◮ p:
◮ F(x; s′, e′, s, e*):
◮ r(s, e) =
◮ Initial dist'n:
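As a concrete illustration of the building blocks (one standard way to set them up for the GI/G/1 queue; this is a sketch, and the event names `E_A`/`E_S` are my own, not from the slides):

```python
# Sketch of GI/G/1 GSMP building blocks (illustrative, not the only valid spec).
# States: s = number of jobs in the system, so S = {0, 1, 2, ...}.
# Events: an arrival event and a service-completion event.

E_A, E_S = "arrival", "service_completion"

def active_events(s):
    """E(s): the arrival clock always runs; service runs only when a job is present."""
    return {E_A, E_S} if s > 0 else {E_A}

def next_state(s, trigger):
    """p(s'; s, e*) is deterministic here: an arrival adds a job, a departure removes one."""
    return s + 1 if trigger == E_A else s - 1

def clock_speed(s, e):
    """r(s, e) = 1 for every active event (unit speeds, no preemption)."""
    return 1.0
```

New clock readings would be drawn from F_a for `E_A` and F_s for `E_S`, matching the clock-setting cdf F(·; s′, e′, s, e*) above.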

A More Complex Example: Patrolling Repairman

See handout for details
◮ Provides an example of how to concisely express GSMP building blocks

Specifying a GSMP can be complex and time-consuming, so why do it?
◮ Direct guidance for coding (helps catch "corner cases")
◮ Communicates the model at a high level (vs. poring through code)
◮ Theory for GSMPs can help in establishing important properties of the simulation
  ◮ Stability (i.e., convergence to steady state), so that steady-state estimation problems are well defined
  ◮ Validity of specific simulation output-analysis methods, so that estimates are correct

GSMPs and GSSMCs

GSMP formally defined in terms of a GSSMC ( (S_n, C_n) : n ≥ 0 )
◮ S_n = state just after the nth transition
◮ C_n = (C_{n,1}, C_{n,2}, ..., C_{n,M}) = clock readings just after the nth transition
◮ See the Haas or Shedler books for the definition of P((s, c), A) and μ

GSMP Definition

◮ Holding time: t*(s, c) = min_{i : e_i ∈ E(s)} c_i / r(s, e_i)
◮ nth state-transition time: ζ_n = Σ_{k=0}^{n−1} t*(S_k, C_k)
◮ # of state transitions in [0, t]: N(t) = max{ n ≥ 0 : ζ_n ≤ t }

Let Δ ∉ S and set

  X(t) = S_{N(t)} if N(t) < ∞;  Δ if N(t) = ∞

GSMP Definition in a Picture

[Figure: piecewise-constant sample path with states S_0, ..., S_3, holding times t*(S_n, C_n), transition times ζ_0 = 0, ζ_1, ..., ζ_4, and N(t) stepping from 0 up to 3; at the marked time t, X(t) = S_3]
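The holding-time and transition-counting definitions translate directly into code; a minimal sketch (the dict-based clock representation is my own choice, not notation from the slides):

```python
def holding_time(clocks, speeds):
    """t*(s, c) = min over active events e_i of c_i / r(s, e_i).
    clocks and speeds are dicts keyed by the active events of the current state."""
    return min(clocks[e] / speeds[e] for e in clocks)

def num_transitions(holding_times, t):
    """N(t) = max{ n >= 0 : zeta_n <= t }, where zeta_n is the sum of the
    first n holding times (so zeta_0 = 0)."""
    zeta, n = 0.0, 0
    for h in holding_times:
        if zeta + h > t:
            break
        zeta, n = zeta + h, n + 1
    return n
```

For example, holding times (1.0, 2.0, 0.5) give transition times ζ = (0, 1.0, 3.0, 3.5), so N(3.2) = 2.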

Simulating GSMPs

Sample Path Generation: GSMP Simulation Algorithm (Variable Time-Advance)

1. (Initialization) Select s ∼ ν. For each e_i ∈ E(s) generate a clock reading c_i ∼ F_0(·; e_i, s). Set c_i = 0 for e_i ∉ E(s).
2. Determine the holding time t*(s, c) and the set of trigger events E* = E*(s, c) = { e_i : c_i / r(s, e_i) = t*(s, c) }.
3. Generate the next state s′ ∼ p(·; s, E*).
4. For each e_i ∈ N(s′; s, E*), generate c′_i ∼ F(·; s′, e_i, s, E*).
5. For each e_i ∈ O(s′; s, E*), set c′_i = c_i − t*(s, c) r(s, e_i).
6. For each e_i ∈ (E(s) − E*) − E(s′), set c′_i = 0 (i.e., cancel event e_i).
7. Set s = s′ and c = c′, and go to Step 2.

(Here c = (c_1, c_2, ..., c_M) and similarly for c′.)

Sample Path Generation, Continued

The algorithm generates the sequence of states (S_n : n ≥ 0), clock-reading vectors (C_n : n ≥ 0), and holding times ( t*(S_n, C_n) : n ≥ 0 ).

Transition times (ζ_n : n ≥ 0) and the continuous-time process ( X(t) : t ≥ 0 ) are computed as described previously.

Use the usual techniques to estimate quantities like E[ f(X(t)) ], or even

  α = E[ (1/t) ∫_0^t f(X(u)) du ]
    = E[ (1/t) ( Σ_{n=0}^{N(t)−1} f(S_n) t*(S_n, C_n) + f(S_{N(t)}) (t − ζ_{N(t)}) ) ]

Flow charts and diagrams can be helpful (see Law, pp. 30–32, for an example).
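The seven steps above map almost line for line onto code. A sketch for the common single-trigger-event case (the `spec` interface and the M/M/1 rates below are my own illustrative choices, not notation from the slides):

```python
import random

def simulate_gsmp(spec, s, horizon, rng=random):
    """Variable time-advance GSMP simulation (Steps 1-7 above), assuming a
    unique trigger event. spec supplies: active(s), speed(s, e),
    next_state(s, trig, rng), and new_clock(s2, e, s, trig, rng)."""
    # Step 1: initial clocks for the active events
    clocks = {e: spec.new_clock(s, e, None, None, rng) for e in spec.active(s)}
    t, path = 0.0, [(0.0, s)]
    while t < horizon:
        # Step 2: holding time and trigger event
        t_star, trig = min((clocks[e] / spec.speed(s, e), e) for e in clocks)
        t += t_star
        # Step 3: next state
        s2 = spec.next_state(s, trig, rng)
        new_clocks = {}
        for e in spec.active(s2):
            if e in clocks and e != trig:
                # Step 5: old event -- clock keeps running down
                new_clocks[e] = clocks[e] - t_star * spec.speed(s, e)
            else:
                # Step 4: new event -- set a fresh clock
                new_clocks[e] = spec.new_clock(s2, e, s, trig, rng)
        # Step 6 is implicit: clocks of events not active in s2 are dropped
        s, clocks = s2, new_clocks        # Step 7
        path.append((t, s))
    return path

class MM1:
    """Illustrative M/M/1 instance: arrival rate 1.0, service rate 2.0."""
    def active(self, s):
        return {"arr", "dep"} if s > 0 else {"arr"}
    def speed(self, s, e):
        return 1.0
    def next_state(self, s, trig, rng):
        return s + 1 if trig == "arr" else s - 1
    def new_clock(self, s2, e, s, trig, rng):
        return rng.expovariate(1.0 if e == "arr" else 2.0)
```

Calling `simulate_gsmp(MM1(), 0, 25.0)` returns the piecewise-constant path as (ζ_n, S_n) pairs.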

Generating Clock Readings: Example

Exponential distribution with rate (intensity) λ:

  f(x; λ) = λ e^{−λx} if x ≥ 0; 0 if x < 0
  F(x; λ) = 1 − e^{−λx} if x ≥ 0; 0 if x < 0

Claim: Mean = 1/λ
Proof:

Claim: If U ∼ Uniform(0, 1) and V = −(ln U)/λ, then V ∼ exp(λ)
Proof:

The Inversion Method: Special Case

Suppose that the cdf F(x) = P(V ≤ x) is increasing and continuous.

If U ∼ Uniform(0, 1) and V = F^{−1}(U), then V ∼ F

Example: Exponential Distribution

  F(x) = 1 − e^{−λx}
  F^{−1}(u) =

[Figure: cdf F(x) with a horizontal line at height u mapped down to x = F^{−1}(u)]

The Inversion Method: General Case

Generalized inverse: F^{−1}(u) = min{ x : F(x) ≥ u }

Claim still holds: F^{−1}(u) ≤ x ⇔ u ≤ F(x) by definition

Exercise: Show that the inversion method = the naive method for discrete RVs
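The exponential case of the inversion method is easy to check empirically; a sketch (the rate λ = 2.0 and the sample size are arbitrary choices):

```python
import math
import random

def exp_inverse(u, lam):
    """F^{-1}(u) = -ln(1 - u)/lam for F(x) = 1 - exp(-lam * x).
    Since 1 - U is also Uniform(0,1), V = -ln(U)/lam works equally well."""
    return -math.log(1.0 - u) / lam

def sample_exponential(lam, rng=random):
    """Inversion method: V = F^{-1}(U) with U ~ Uniform(0, 1)."""
    return exp_inverse(rng.random(), lam)

# Sanity check: the sample mean should be close to 1/lam.
rng = random.Random(0)
lam = 2.0
samples = [sample_exponential(lam, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
```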

Markovian and Semi-Markovian GSMPs

Properties of the Exponential Distribution

If X ∼ exp(λ) and Y ∼ exp(μ), then
1. min(X, Y) ∼ exp(λ + μ) [indep. of whether min = X or Y]
2. P(X < Y) = λ / (λ + μ)
3. P(X > a + b | X > a) = e^{−λb} [memoryless property]

Properties 1 and 2 generalize to multiple exponential RVs.

Markovian GSMPs

Simple GSMP event e′: F(·; s′, e′, s, E*) ≡ F(·; e′) and F_0(·; e′, s) ≡ F(·; e′)

Suppose that all events in a GSMP are simple with exponential clock-setting dist'ns.

Key observation: By the memoryless property, whenever the GSMP jumps into a state s, the clock readings for the events in E(s) are mutually independent and exponentially distributed.

Simplified Simulation Algorithm (no clock readings needed)
1. (Initialization) Select s ∼ ν
2. Generate the holding time t* ∼ exp(λ), where λ = λ(s) = Σ_{e_i ∈ E(s)} λ_i
3. Select e_i ∈ E(s) as the trigger event with probability λ_i / λ
4. Generate the next state s′ ∼ p(·; s, e_i)
5. Set s = s′ and go to Step 2

Markovian GSMPs, Continued

Structure of a Markovian GSMP
◮ The sequence (S_n : n ≥ 0) is a DTMC with transition matrix R(s, s′) = Σ_{e_i ∈ E(s)} p(s′; s, e_i) (λ_i / λ(s))
◮ Given (S_n : n ≥ 0), the holding times are mutually independent, with the holding time in S_n ∼ exp( λ(S_n) )

Often, the occurrence of e_i in s causes the state to change to a unique state y_i = y_i(s) with probability 1.

Super-Simplified Simulation Algorithm
1. (Initialization) Select s ∼ ν
2. Generate the holding time t* ∼ exp(λ), where λ = Σ_{e_i ∈ E(s)} λ_i
3. Set s′ = y_i(s) with probability λ_i / λ
4. Set s = s′ and go to Step 2
