CPSC 531: System Modeling and Simulation
Carey Williamson
Department of Computer Science
University of Calgary
Fall 2017
Recap and Terminology
▪ (Pseudo-) Random Number Generation (RNG)
  — A fundamental primitive required for simulations
  — Goal: Uniform(0,1)
  — Uniformity
  — Independence
  — Computational efficiency
  — Long period
  — Multiple streams
  — Common approach: the LCG
  — Careful design and seeding
  — Never generates 0.0 or 1.0
  — Covered in guest lecture (JH)
  — Readings: 2.1, 2.2
▪ Random Variate Generation (RVG)
  — Builds upon Uniform(0,1)
  — Goal: any distribution
  — Discrete distributions
  — Continuous distributions
  — Independence (usually)
  — Correlation (if desired)
  — Computational efficiency
  — Common approach: inverse transform method
  — Straightforward math (usually)
  — Might generate 0.0 or 1.0
  — Covered in today's lecture
  — Readings: 6.1, 6.2
Outline
▪ Random variate generation
  — Inverse transform method
  — Convolution method
  — Empirical distribution
  — Other techniques
Discrete-Event Simulation
▪ Input parameters such as inter-arrival times and service times are often modeled by random variables with some given distributions
▪ A mechanism is needed to generate variates for a wide class of distributions
▪ This can be done using a sequence of random numbers that are independent of each other and are uniformly distributed between 0 and 1
Uniform Random Numbers
▪ Uniformly distributed between 0 and 1
  — Consider a sequence of random numbers u_1, u_2, …, u_N, and divide the interval [0, 1] into n equal sub-intervals
  — Uniformity: the expected number of random numbers in each sub-interval is N/n
  — Independence: the value of each random number is not affected by any other number in the sequence
Bernoulli Variate
▪ A Bernoulli variate is useful for generating a binary outcome (0 or 1) to represent "success" (1) or "failure" (0)
  — Example: wireless network packet transmission
  — Example: coin flipping to produce "heads" or "tails"
▪ Bernoulli trial (with parameter p)
    p(0) = 1 − p
    p(1) = p
▪ Random variate generation (see the sketch below)
  — Generate u
  — If 0 < u ≤ p, then x = 1
  — Otherwise x = 0
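As a concrete illustration, here is a minimal Python sketch of this Bernoulli generator, using Python's built-in random.random() as a stand-in for the Uniform(0,1) source; the function name gen_bernoulli is my own, not from the slides:

```python
import random

def gen_bernoulli(p):
    """Return 1 ("success") with probability p, else 0 ("failure")."""
    u = random.random()          # Uniform(0,1) random number
    return 1 if u <= p else 0    # success with probability p

# Example: simulate 10 packet transmissions with 90% success probability
outcomes = [gen_bernoulli(0.9) for _ in range(10)]
```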
Inverse Transformation Method: Discrete Distributions
▪ Consider a tri-modal discrete distribution
  — Example: size of an email message (in paragraphs, or KB)
  — Example: p(1) = 0.5, p(2) = 0.3, p(3) = 0.2
▪ Cumulative distribution function, F(x)
  — [Figure: step-function CDF with F(1) = 0.5, F(2) = 0.8, F(3) = 1.0]
Inverse Transformation Method: Discrete Distributions
▪ Algorithm (see the sketch below)
  — Generate random number u
  — Random variate x = i if F(i − 1) < u ≤ F(i)
▪ Example: F(0) = 0, F(1) = 0.5, F(2) = 0.8, F(3) = 1.0
  — 0 < u ≤ 0.5: variate x = 1
  — 0.5 < u ≤ 0.8: variate x = 2
  — 0.8 < u ≤ 1.0: variate x = 3
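A short Python sketch of this discrete inverse transform, written for the email-size example above (the helper name gen_discrete and the list-based interface are illustrative choices, not prescribed by the slides):

```python
import random

def gen_discrete(values, probs):
    """Inverse transform for a discrete distribution given as parallel lists."""
    u = random.random()              # Uniform(0,1)
    cumulative = 0.0
    for x, p in zip(values, probs):  # walk the CDF until F(i-1) < u <= F(i)
        cumulative += p
        if u <= cumulative:
            return x
    return values[-1]                # guard against floating-point round-off

# Example: email message size with p(1)=0.5, p(2)=0.3, p(3)=0.2
size = gen_discrete([1, 2, 3], [0.5, 0.3, 0.2])
```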
Discrete Uniform Variate
▪ Discrete uniform (with parameters a and b)
    p(n) = 1/(b − a + 1) for n = a, a + 1, …, b
    F(n) = (n − a + 1)/(b − a + 1)
▪ Random variate generation (see the sketch below)
  — Generate u
  — x = a + floor(u * (b − a + 1))
    OR
  — x = a − 1 + ceiling(u * (b − a + 1))
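A quick Python sketch of both formulas; note that Python's random.random() returns values in [0, 1), so the ceiling variant implicitly assumes u > 0 (the measure-zero case u = 0 would map to a − 1):

```python
import math
import random

def discrete_uniform_floor(a, b):
    u = random.random()                        # Uniform(0,1)
    return a + math.floor(u * (b - a + 1))     # maps u onto {a, a+1, ..., b}

def discrete_uniform_ceiling(a, b):
    u = random.random()                        # assumes u > 0 for the ceiling form
    return a - 1 + math.ceil(u * (b - a + 1))  # equivalent mapping via ceiling

# Example: a fair six-sided die
roll = discrete_uniform_floor(1, 6)
```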
Geometric Variate
▪ Geometric (with parameter p)
    p(n) = p(1 − p)^(n−1), n = 1, 2, 3, …
▪ Gives the number of Bernoulli trials until achieving the first success
▪ Random variate generation (see the sketch below)
  — Generate u
  — Geometric variate x = ⌈ln(u) / ln(1 − p)⌉
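A minimal Python sketch of this closed-form geometric generator; the 1 − u substitution and the max(1, …) guard are my own small safeguards against the u = 0 and boundary corner cases:

```python
import math
import random

def gen_geometric(p):
    """Inverse transform for Geometric(p): trials until the first success."""
    u = 1.0 - random.random()   # in (0, 1]; 1 - u is Uniform(0,1) too, and avoids log(0)
    # Ceiling maps the continuous quantity onto the integers 1, 2, 3, ...
    return max(1, math.ceil(math.log(u) / math.log(1.0 - p)))

# Example: number of transmission attempts until the first success at p = 0.25
attempts = gen_geometric(0.25)
```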
Inverse Transformation Method: Continuous Distributions
▪ Algorithm
  — Generate uniform random number u
  — Solve F(x) = u for the random variate x
▪ F(x): Cumulative Distribution Function of X, F(x) = P(X ≤ x)
▪ [Figure: CDF F(x), with u on the vertical axis mapped back to the variate x on the horizontal axis]
Proof
▪ Define the random variable Y as: Y = F(X)
▪ [Figure: CDF F(x), with y on the vertical axis and x on the horizontal axis]
▪ For any y in (0, 1), let x be the value with F(x) = y; then
    P(Y ≤ y) = P(F(X) ≤ F(x)) = P(X ≤ x) = F(x) = y
▪ Therefore, Y ~ U(0, 1)
Continuous Uniform Variate
▪ Uniform (with parameters a and b)
    f(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise
    F(x) = (x − a)/(b − a), a ≤ x ≤ b
▪ Random variate generation (see the sketch below)
  — Generate u
  — x = a + (b − a)u
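A one-line Python sketch of this shift-and-scale transform (functionally what random.uniform(a, b) already does; shown here only to make the inverse transform explicit):

```python
import random

def gen_uniform(a, b):
    """Inverse transform for Uniform(a, b): F(x) = (x - a)/(b - a), so x = a + (b - a) u."""
    u = random.random()
    return a + (b - a) * u
```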
Exponential Variate
▪ Exponential (with parameter λ)
    f(x) = λ e^(−λx)
    F(x) = 1 − e^(−λx)
▪ Random variate generation (see the sketch below)
  — Generate u
  — x = −(1/λ) ln(u)
▪ Can also use x = −(1/λ) ln(1 − u)
  — Note: if u is Uniform(0,1), then 1 − u is Uniform(0,1) too!
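A minimal Python sketch of the exponential inverse transform; it uses the ln(1 − u) form from the note above, since Python's random.random() lies in [0, 1) and could return exactly 0:

```python
import math
import random

def gen_exponential(lam):
    """Inverse transform for Exponential(lam): solve 1 - exp(-lam*x) = u for x."""
    u = random.random()
    return -math.log(1.0 - u) / lam   # 1 - u is in (0, 1], so the log is always defined

# Example: a stream of inter-arrival times with mean 1/lam = 2.0
arrivals = [gen_exponential(0.5) for _ in range(5)]
```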
Convolution Method
▪ Sum of n variables: x = y_1 + y_2 + ⋯ + y_n
  1. Generate n random variates y_i
  2. The random variate x is given by the sum of the y_i's
▪ Example: the sum of two fair dice that are rolled (see the sketch below)
    P(x=2) = 1/36, P(x=3) = 2/36, P(x=4) = 3/36, P(x=5) = 4/36, P(x=6) = 5/36, P(x=7) = 6/36,
    P(x=8) = 5/36, P(x=9) = 4/36, P(x=10) = 3/36, P(x=11) = 2/36, P(x=12) = 1/36
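A small Python sketch of the two-dice convolution example, reusing the discrete-uniform floor method from the earlier slide (the helper names are mine):

```python
import math
import random

def roll_die():
    """Discrete Uniform(1, 6) via the floor method."""
    return 1 + math.floor(random.random() * 6)

def two_dice_sum():
    """Convolution method: sum of two independent die rolls, giving the triangular pmf above."""
    return roll_die() + roll_die()

# Rough check of the pmf: the estimate of P(x=7) should approach 6/36
samples = [two_dice_sum() for _ in range(10000)]
estimate = samples.count(7) / len(samples)
```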
Geometric Variate
▪ Geometric (with parameter p)
    p(n) = p(1 − p)^(n−1), n = 1, 2, 3, …
▪ Gives the number of Bernoulli trials until achieving the first success (see the sketch below)
  — let b = 0, n = 0
  — while (b == 0)
      ▪ Generate Bernoulli variate b with parameter p
      ▪ n = n + 1
  — Geometric variate x = n
▪ Inefficient!!
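For contrast, a Python sketch of this trial-counting approach; it consumes roughly 1/p uniform random numbers per variate, whereas the closed-form generator on the earlier geometric slide needs exactly one:

```python
import random

def gen_geometric_by_trials(p):
    """Count Bernoulli(p) trials until the first success (inefficient for small p)."""
    n = 0
    success = 0
    while success == 0:
        success = 1 if random.random() <= p else 0   # one Bernoulli(p) trial
        n += 1
    return n
```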
Binomial Variate
▪ Binomial (with parameters p and n)
    p_k = P(X = k) = C(n, k) p^k (1 − p)^(n−k), k = 0, 1, …, n
▪ Random variate generation (see the sketch below)
  — Generate n Bernoulli variates, y_1, y_2, …, y_n
  — Binomial variate x = y_1 + y_2 + ⋯ + y_n
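A Python sketch of the binomial-as-sum-of-Bernoullis idea (gen_binomial is an illustrative name):

```python
import random

def gen_binomial(n, p):
    """Convolution of n Bernoulli(p) variates: count the successes."""
    return sum(1 if random.random() <= p else 0 for _ in range(n))

# Example: number of successful transmissions out of 20 attempts at p = 0.9
successes = gen_binomial(20, 0.9)
```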
Poisson Variate
▪ Poisson (with parameter λ)
    p_k = P(X = k) = (λ^k / k!) e^(−λ), k = 0, 1, 2, …
▪ Random variate generation (based on the relationship with the exponential distribution); see the sketch below
  — let s = 0, n = 0
  — while (s ≤ 1)
      ▪ Generate exponential variate y with parameter λ
      ▪ s = s + y
      ▪ n = n + 1
  — Poisson variate x = n − 1
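A Python sketch of this counting scheme, generating exponential inter-arrival times until the running sum exceeds one unit of time:

```python
import math
import random

def gen_poisson(lam):
    """Count how many Exponential(lam) inter-arrival times fit in one unit of time."""
    s = 0.0
    n = 0
    while s <= 1.0:
        y = -math.log(1.0 - random.random()) / lam   # exponential variate
        s += y
        n += 1
    return n - 1   # arrivals that occurred strictly before time 1
```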
Other Techniques: Normal Variate
▪ Normal (with parameters μ and σ²)
    f(x) = (1 / (σ √(2π))) e^(−(1/2)((x − μ)/σ)²), for −∞ < x < +∞
▪ Random variate generation using the Box–Muller method (see the sketch below)
  — Generate two random numbers u_1 and u_2
  — Random variates x_1 and x_2 are given by:
      x_1 = μ + σ √(−2 ln(u_1)) · cos(2π u_2)
      x_2 = μ + σ √(−2 ln(u_1)) · sin(2π u_2)
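A Python sketch of the Box–Muller pair generator described above; the 1 − u1 adjustment is my own guard so that the logarithm is always defined when random.random() returns 0:

```python
import math
import random

def gen_normal_pair(mu, sigma):
    """Box-Muller: two Uniform(0,1) numbers -> two independent Normal(mu, sigma^2) variates."""
    u1 = 1.0 - random.random()     # keep u1 in (0, 1] so log(u1) is defined
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    x1 = mu + sigma * r * math.cos(2.0 * math.pi * u2)
    x2 = mu + sigma * r * math.sin(2.0 * math.pi * u2)
    return x1, x2
```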
Empirical Distribution
▪ Could be used if no theoretical distribution fits the data adequately
▪ Example: piecewise linear empirical distribution
  — Used for continuous data
  — Appropriate when a large sample of data is available
  — Empirical CDF is approximated by a piecewise linear function: the "jump points" are connected by linear segments
▪ [Figure: piecewise linear empirical CDF]
Empirical Distribution
▪ Piecewise linear empirical distribution
  — Organize the X-axis into K intervals
  — Interval i runs from a_(i−1) to a_i, for i = 1, 2, …, K
  — p_i: relative frequency of interval i
  — c_i: relative cumulative frequency of interval i, i.e., c_i = p_1 + ⋯ + p_i
▪ Empirical CDF (K intervals)
  — If x is in interval i, i.e., a_(i−1) < x ≤ a_i, then:
      F(x) = c_(i−1) + α_i (x − a_(i−1))
    where the slope α_i is given by α_i = (c_i − c_(i−1)) / (a_i − a_(i−1))
Example Empirical Distribution
▪ Suppose the data collected for 100 broken machine repair times are:

  i   Interval (Hours)   Frequency   Relative Frequency   Cumulative Frequency   Slope
  1   0.0 < x ≤ 0.5      31          0.31                 0.31                   0.62
  2   0.5 < x ≤ 1.0      10          0.10                 0.41                   0.20
  3   1.0 < x ≤ 1.5      25          0.25                 0.66                   0.50
  4   1.5 < x ≤ 2.0      34          0.34                 1.00                   0.68

▪ Example slope: α_3 = (c_3 − c_2) / (a_3 − a_2) = (0.66 − 0.41) / (1.5 − 1.0) = 0.5
▪ [Figure: piecewise linear empirical CDF over the breakpoints a_0 = 0, a_1 = 0.5, a_2 = 1.0, a_3 = 1.5, a_4 = 2.0]
Empirical Distribution
▪ Random variate generation:
  — Generate random number u
  — Select the appropriate interval i such that c_(i−1) < u ≤ c_i
  — Use the inverse transformation method to compute the random variate x as follows:
      x = a_(i−1) + (1/α_i)(u − c_(i−1))
Example Empirical Distribution
▪ Suppose the data collected for 100 broken machine repair times are:

  i   Interval (Hours)   Frequency   Relative Frequency   Cumulative Frequency   Slope
  1   0.25 < x ≤ 0.5     31          0.31                 0.31                   1.24
  2   0.5 < x ≤ 1.0      10          0.10                 0.41                   0.20
  3   1.0 < x ≤ 1.5      25          0.25                 0.66                   0.50
  4   1.5 < x ≤ 2.0      34          0.34                 1.00                   0.68

▪ Suppose u = 0.83
  — c_3 = 0.66 < u ≤ c_4 = 1.00 ⇒ i = 4
  — x = a_3 + (1/α_4)(u − c_3) = 1.5 + (0.83 − 0.66)/0.68 = 1.75
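A Python sketch of the piecewise linear empirical generator, parameterized with the breakpoints and cumulative frequencies from the table above (the names BREAKS, CUMUL, and gen_empirical are mine):

```python
import random

# Breakpoints a_0..a_4 and cumulative frequencies c_0..c_4 from the repair-time table
BREAKS = [0.25, 0.5, 1.0, 1.5, 2.0]
CUMUL  = [0.0, 0.31, 0.41, 0.66, 1.00]

def gen_empirical(breaks=BREAKS, cumul=CUMUL, u=None):
    """Piecewise linear empirical variate: pick interval i with c_(i-1) < u <= c_i, then invert."""
    if u is None:
        u = random.random()
    for i in range(1, len(cumul)):
        if u <= cumul[i]:
            slope = (cumul[i] - cumul[i - 1]) / (breaks[i] - breaks[i - 1])
            return breaks[i - 1] + (u - cumul[i - 1]) / slope
    return breaks[-1]   # handles the u = 1.0 boundary

# Reproduces the worked example: u = 0.83 falls in interval 4, giving x = 1.75
x = gen_empirical(u=0.83)
```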