
Politecnico di Milano, Department of Electronics, Information and Bioengineering (DEIB)
Introduction to Simulation of Telecommunication Networks
4-hour seminar in the course Switching and Routing
Massimo Tornatore (courtesy of Prof. Fabio Martignon)


1. Classification of dynamic simulation
 Discrete-event dynamic simulation
– the system state changes in response to «events»
– e.g., telecom network simulation
 Continuous-time dynamic simulation
– the system state evolves in response to the change of a continuous-time variable
– e.g., weather forecasting

2. Simulation of discrete events
 Discrete-event simulation is of fundamental importance for telecommunication networks
 In discrete-event simulation the state variables change value only at discrete instants of time
 A change of the system state is called an event and is characterized by an instant of occurrence
– an event has no duration
 After an event occurs, an activity starts in the system and persists for some time
– an activity is usually delimited by a start event and an end event
– for example, the beginning and the end of the transmission of a packet are events, while the transmission itself is an activity

3. Simulation of discrete events
 In discrete-event simulation we should:
– define the types of events that can occur
– define the changes in the system state associated with each event
– order the events in a calendar according to their instant of occurrence
– define an initial state
– scroll through the calendar and, each time an event occurs, update the state variables according to that event
– take measurements on the output variables

4. Example: simulation of a queue system
 Model:
– queue system with one server and an infinite queue
 Input variables:
– interarrival times of requests (packets)
– service times of requests
 State variables:
– number of requests in the system
 Initial state:
– e.g., no user in the system
 Output variables:
– average time spent by a packet in the system

5. Example: simulation of a queue system
 Events:
1. First arrival: how do we schedule the «start»?
– initialization («init»): set the service start and the service end
2. From the second arrival on, we must act differently depending on the state:
– Arrival
» in an empty system → immediate service start
» in a non-empty system → add the packet to the queue
– End of service (see next slides)
» with an empty queue
» with a non-empty queue

6. Example: simulation of a queue system
 Filling the calendar of events:
– PROBLEM: it is not possible to place the service-end event of each request without knowing the system state
– SOLUTION: the calendar can be filled with new events while other events are pending
– Example: a packet is queued; when its transmission start time is reached, its end time is scheduled

7. Example: simulation of a queue system
 In summary (see the sketch below):
 when an «arrival» event occurs, we increase the number of users, then
– if the system is empty, a new service-end event is inserted in the calendar at time «CLOCK + service time»
– if the system is busy, the packet is added to the queue
 when a service-end event is reached, we decrease the number of users, then
– if the queue is empty, no action
– if the queue is not empty, a new service-end event is inserted in the calendar at time «CLOCK + service time»
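As an illustration, the rules above map almost one-to-one onto code. The following is a minimal sketch in C (not the seminar's reference implementation; next_interarrival() and next_service() are hypothetical placeholders for the input-variable generators discussed later):

/* Minimal event-driven simulation of a single-server queue. */
#include <stdio.h>
#include <float.h>

double next_interarrival(void) { return 1.0; }   /* placeholder */
double next_service(void)      { return 0.8; }   /* placeholder */

int main(void)
{
    double clock = 0.0, t_arrival, t_end;
    int users = 0;                     /* state variable: users in system */
    long served = 0;

    t_arrival = next_interarrival();   /* init: schedule the first arrival */
    t_end = DBL_MAX;                   /* no service end scheduled yet */

    while (served < 1000) {
        if (t_arrival <= t_end) {          /* next event: arrival */
            clock = t_arrival;
            users++;
            if (users == 1)                /* empty system: start service */
                t_end = clock + next_service();
            t_arrival = clock + next_interarrival();
        } else {                           /* next event: service end */
            clock = t_end;
            users--;
            served++;
            /* non-empty queue: schedule the next service end */
            t_end = (users > 0) ? clock + next_service() : DBL_MAX;
        }
    }
    printf("simulated until t = %g, users left = %d\n", clock, users);
    return 0;
}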

8. Example: simulation of a queue system
 Measurement of output variables:
1. Transfer time
– user arrival: store the arrival time
– end of service: compute the time spent in the system (CLOCK - t_arrival)
2. Average number of users in the queue
– weighted average of the users in the system during each activity interval («time slices»)
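For the «time slices» average, one accumulates the area under the curve of the number of users over time; a minimal sketch (the helper name is mine, not from the slides):

/* Call before every state change: add users * (time elapsed since the
   last event) to the running area; the average is area / total time. */
void account(double *area, double *last_t, double clock, int users)
{
    *area += users * (clock - *last_t);
    *last_t = clock;
}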

9. Simulation of discrete events
 Note: the correct insertion of a new event in the calendar is a critical operation if the calendar contains many events
– efficient techniques must be used to insert an element in an ordered list (see the sketch below)
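For example, a binary min-heap keyed on the event time keeps insertion at O(log n), versus O(n) for a plain sorted list; a sketch, under the assumption that events carry only a timestamp and a type:

#include <stddef.h>

typedef struct { double time; int type; } Event;

void heap_push(Event *h, size_t *n, Event e)
{
    size_t i = (*n)++;
    while (i > 0) {                      /* sift the new event up */
        size_t parent = (i - 1) / 2;
        if (h[parent].time <= e.time)
            break;
        h[i] = h[parent];
        i = parent;
    }
    h[i] = e;
}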

10. The CLOCK variable
 Sliding of the «CLOCK» variable:
– «clock-driven» simulations: the CLOCK variable is always increased by a fixed step
» e.g., slotted systems
– «event-driven» simulations: the CLOCK variable is increased by the time interval between the occurrence of one event and the occurrence of the following event
 Note:
– from a computational-time point of view, it may be convenient to adopt one method rather than the other, depending on the model

11. Some final considerations ...
 Considering the power and efficiency of modern programming languages and computing systems, simulation is today a powerful analysis tool for addressing complex problems
 But simulation is also a tool that should be used with care, for the following reasons:
– it is not easy to validate the obtained results
– the computational time can easily become very high
– it is not easy to understand how different parameters affect the result

12. Simulation of discrete events
 The simulation of a stochastic model involves the use of random input variables → we need the statistical distributions of the input variables
 So, for computer simulation, pseudo-random number generation and the synthesis of statistical variables are needed (the next topic ...)
– Example: traffic entering a queue system, described by the process of arrivals and the process of service times

13. Summary
 What is simulation?
 Systems, models and variables
 Discrete-event simulation
 Generation of pseudo-random numbers
– synthesis of random variables
 Statistical analysis
– statistical confidence of simulation results

14. The role of random numbers
 When the model to be analyzed via simulation is stochastic, two important problems arise:
 the generation of pseudo-random numbers to be used for the generation of the input variables
 the statistical analysis of the results obtained through the output variables

15. What I assume you know
 Basic statistics used in the following:
 average (mean), variance
 concept of random variable
 probability density function (pdf) f(x)
 cumulative distribution function (CDF) F(x)
 central limit theorem
 Gaussian (normal) and Student's t distributions

16. Generation of pseudo-random numbers
 Rigorously speaking, numbers generated by a computer cannot be truly random, due to the deterministic nature of the computer
 We can however generate pseudo-random sequences that pass a series of statistical tests of randomness

17. Generation of pseudo-random numbers
 The problem of generating pseudo-random numbers can be logically divided into two parts:
 generation of sequences of random numbers uniformly distributed between 0 and 1
 generation of sequences of random numbers following an arbitrary distribution
– Poisson, Bernoulli, Weibull, exponential, etc.

18. Generation of pseudo-random numbers
 Pseudo-random sequences are obtained through the implementation of recursive formulas
 Some history:
 the first method to generate random sequences was Von Neumann's «middle-square» method
 the next number is obtained by squaring the previous number and taking its central digits

19. Von Neumann's example
 $x_0 = 3456$, which squared gives $(x_0)^2 = 11943936$, so $x_1 = 9439$
 Repeating the step yields the sequence of random numbers
 This method was abandoned: difficult to analyze, relatively slow, and statistically unsatisfactory
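A sketch of one middle-square step for four-digit seeds (their squares have up to eight digits, and we keep the middle four):

unsigned middle_square(unsigned x)
{
    unsigned long sq = (unsigned long)x * x;   /* e.g., 3456^2 = 11943936 */
    return (unsigned)((sq / 100) % 10000);     /* drop the last 2 digits,
                                                  keep the middle 4: 9439 */
}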

20. Generation of pseudo-random numbers
 Factors determining the quality of a method:
1) numbers must be uniformly distributed (i.e., each value must have the same probability of occurring)
2) numbers must be statistically independent
3) we must be able to reproduce the sequence
4) the sequence must be of arbitrary length
5) the method should execute quickly on the computer and consume a small amount of memory

21. Generation of pseudo-random numbers
 Let's recall some basic math operators:
 Modulo: $x \bmod y = x - y \lfloor x / y \rfloor$
– property of the modulo: $0 \le x \bmod y \le y - 1$
 Congruence: $x \equiv y \pmod{z} \iff x \bmod z = y \bmod z$

22. Generation of pseudo-random numbers
 Linear congruential method (Lehmer, 1948)
 $\{X_n\}_{n \in \mathbb{N}} = X_0, X_1, X_2, \ldots, X_i, \ldots$ such that:
 $X_{n+1} = (a X_n + c) \bmod m$
– $X_0$: initial value, or seed
– $a$: multiplier
– $c$: increment
– $m$: modulus

23. Generation of pseudo-random numbers
 Example:
– $X_0 = a = c = 7$, $m = 10$
– $\{X_n\}_{n \in \mathbb{N}} = 7, 6, 9, 0, 7, 6, 9, \ldots$
 Note: $0 \le X_i < m$, for all $i$
 The method is called:
– multiplicative if $c = 0$
– mixed if $c \ne 0$
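The recursion is a one-liner in code; a sketch with the toy parameters of the example (real generators use the constants given on the following slides):

static unsigned long lcg_state = 7;              /* seed X_0 */

unsigned long lcg_next(void)
{
    const unsigned long a = 7, c = 7, m = 10;    /* toy parameters */
    lcg_state = (a * lcg_state + c) % m;         /* yields 6, 9, 0, 7, ... */
    return lcg_state;
}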

24. Generation of pseudo-random numbers
 Drawbacks of linear congruence:
 as soon as $X_p = X_0$, the sequence repeats periodically; $p$ is the period of the sequence
 since $X_n < m$, the period is less than or equal to $m$
 Note (1): if $p = m$, then all numbers between 0 and $m-1$ appear exactly once in each period of the sequence
 Note (2): to obtain a sequence in $[0,1)$: $\{R_n\}_{n \in \mathbb{N}} = X_0/m, X_1/m, X_2/m, \ldots, X_i/m, \ldots$

25. Generation of pseudo-random numbers
 We can relate $X_n$ directly to $X_0$; this emphasizes even more the deterministic nature of the sequence!
 $X_1 = (a X_0 + c) \bmod m$
 $X_2 = (a X_1 + c) \bmod m = (a^2 X_0 + (1 + a)c) \bmod m$
 $X_3 = \left(a^3 X_0 + \frac{a^3 - 1}{a - 1} c\right) \bmod m$
 ...
 $X_n = \left(a^n X_0 + \frac{a^n - 1}{a - 1} c\right) \bmod m$

26. Generation of pseudo-random numbers
 How to set the multiplier $a$ and the increment $c$:
 $a$ and $c$ strongly influence the period and the statistical properties of the sequence
 there are rules for choosing $a$ and $c$ that return a full period $p = m$
 Criteria to ensure optimality:
1. the parameters $c$ and $m$ must be coprime, i.e., $\gcd(c, m) = 1$
2. every prime divisor of $m$ must divide $(a-1)$
– e.g., if $m = 10$, its prime factors are 2 and 5, so $(a-1)$ must be a multiple of 2 and 5
3. if $m$ is a multiple of 4, $(a-1)$ must also be a multiple of 4

27. Generation of pseudo-random numbers
 It is not easy to find values that satisfy (1), (2), (3)
– e.g., $m = 10$, $a = 21$, $c = 3$ gives $X_n = 3, 6, 9, 2, 5, 8, 1, 4, 7, 0, \ldots$ (full period)
 Some researchers have therefore identified the following values in accordance with these criteria:
– KNUTH: $m = 2^{31}$; $a = \operatorname{int}(\pi \cdot 10^8)$; $c = 453806245$
– GOODMAN/MILLER: $m = 2^{31} - 1$; $a = 7^5$; $c = 0$
– GORDON: $m = 2^{31}$; $a = 5^{13}$; $c = 0$
– LEARMONTH/LEWIS: $m = 2^{31}$; $a = 2^{16} + 3$; $c = 0$
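As an example, the GOODMAN/MILLER parameters (a = 7^5 = 16807, c = 0, m = 2^31 - 1) are the classic «minimal standard» generator; a sketch using 64-bit arithmetic so that the product a * X_n does not overflow:

#include <stdint.h>

uint32_t minstd_next(uint32_t *state)        /* *state must be in 1..m-1 */
{
    uint64_t prod = (uint64_t)(*state) * 16807u;   /* a * X_n */
    *state = (uint32_t)(prod % 2147483647u);       /* mod 2^31 - 1 */
    return *state;
}
/* a uniform sample in (0,1) is then *state / 2147483647.0 */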

28. Generation of pseudo-random numbers
 A simpler condition:
– if the method is multiplicative ($c = 0$), it can be shown that if $m = 2^b$ then the maximum period is $p = 2^{b-2}$, for $b \ge 4$
 Note: the equivalence of the multiplicative and mixed approaches has been proven

29. Generation of pseudo-random numbers
 Notes on the choice of the modulus $m$:
 $m$ influences the period, because $p \le m$
 $m$ also affects the computational speed:
– to compute a modulo, we should generally perform a product, a sum and a division
– it is possible to do everything together if we choose as modulus the maximum integer representable by the computer plus one
– if $b$ is the number of bits used by the computer, we choose $m = 2^b$ (e.g., $2^{31}$ for 31-bit registers)
– with this modulus, the operation corresponds to a truncation

30. Generation of pseudo-random numbers
 Other methods:
 quadratic congruential method:
– based on the generation of numbers congruent modulo $m$ according to the relation $X_{n+1} = (d X_n^2 + a X_n + c) \bmod m$
 Fibonacci, or additive, method:
– $X_{n+1} = (X_n + X_{n-k}) \bmod m$

31. Generation of pseudo-random numbers
 Notes on tests for generators:
 tests on pseudo-random number generators are applied to verify that:
– the generated numbers are uniformly distributed
– the generated numbers are independent
 However, classic statistical hypothesis tests are designed for random variables and must in practice be applied to a finite set of samples
– generally we consider a hypothesis verified only if the set of samples satisfies a certain number of tests

32. Generation of pseudo-random numbers
 Notes on tests for generators:
 a typical test to verify a given statistical distribution is the $\chi^2$ test
 we divide the set of possible values into $k$ categories
 $\hat{Y}_i$ is the number of sample values falling in the $i$-th category and $Y_i = n p_i$ is the expected value, where $n$ is the number of samples and $p_i$ is the probability of category $i$ ($p_i = 1/k$ in the uniform case)
 a quality index can be defined as (computed in the sketch below):
 $V = \frac{(\hat{Y}_1 - Y_1)^2}{Y_1} + \frac{(\hat{Y}_2 - Y_2)^2}{Y_2} + \ldots + \frac{(\hat{Y}_k - Y_k)^2}{Y_k}$
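Computing V is straightforward once the samples have been binned; a sketch (obs[i] are the observed counts and expected[i] the values Y_i = n * p_i):

double chi2_stat(const long *obs, const double *expected, int k)
{
    double v = 0.0;
    for (int i = 0; i < k; i++) {
        double d = (double)obs[i] - expected[i];
        v += d * d / expected[i];        /* (Y_i^ - Y_i)^2 / Y_i */
    }
    return v;    /* compare against chi-square percentiles, k-1 d.o.f. */
}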

33. Generation of pseudo-random numbers
 Notes on tests for generators: the $\chi^2$ test
 the problem is that the value of $V$ is itself a random variable, which also depends on the absolute values
 it is therefore necessary to repeat the test several times on different samples and evaluate the probability that $V$ takes high values
 it can be proven that $V$ has a $\chi^2$ distribution with $\nu = k - 1$ degrees of freedom:
 $f(x) = \frac{1}{2^{\nu/2}\,\Gamma(\nu/2)}\, x^{\nu/2 - 1} e^{-x/2}, \quad x \ge 0$
 with $\Gamma\!\left(\frac{\nu}{2}\right) = \left(\frac{\nu}{2} - 1\right)!$ for $\frac{\nu}{2}$ integer

34. Generation of pseudo-random numbers
 Notes on tests for generators: the $\chi^2$ test
 If $P_x$ denotes the $x\%$ percentile of the $\chi^2$ distribution, we can rank the observations of $V$ according to the table:
– $P_0$-$P_1$, $P_{99}$-$P_{100}$: reject
– $P_1$-$P_5$, $P_{95}$-$P_{99}$: suspicious
– $P_5$-$P_{10}$, $P_{90}$-$P_{95}$: almost suspicious

35. Generation of pseudo-random numbers
 Notes on tests for generators: the gap test
 There are several tests to verify the independence of the samples
 A frequently used one is the «gap» test:
– we define an event on the observed distribution, such as crossing a certain threshold
– we estimate the probability $p$ associated with the event
– starting from the sequence of samples, we derive the sequence of (0,1) variables that indicates whether the event occurred or not

36. Generation of pseudo-random numbers
 Notes on tests for generators: the gap test
– consider the lengths of the runs of 0s and of the runs of 1s (see the sketch below)
– since the distribution of these lengths is geometric, we verify conformity to that distribution using a test (e.g., the $\chi^2$)
– or, more simply, we estimate the average run length and compare it with the theoretical value of the geometric distribution
 000111110010010010010011100001001
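A sketch of the run-length extraction on the 0/1 sequence (the lengths collected here would then be checked against the geometric distribution):

#include <stdio.h>

void print_run_lengths(const char *bits)    /* e.g., "000111110010..." */
{
    int len = 1;
    for (const char *p = bits; *p != '\0'; p++) {
        if (p[1] == p[0])
            len++;                           /* run continues */
        else {
            printf("run of %c: length %d\n", p[0], len);
            len = 1;                         /* run ends (also at '\0') */
        }
    }
}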

37. Generation of pseudo-random numbers
 Now we have:
 a sequence of pseudo-random numbers
– uniformly distributed between 0 and 1
– that satisfies the tests of randomness
 What do we do with them?
 We use them to obtain samples of variables distributed according to the distribution we need (exponential, Poisson, geometric, etc.)

38. Generation of other distributions
 The «inverse transform» method
 Given:
– $r$: variable uniformly distributed in [0,1]
– that is, $f(r) = 1$, or $F(r) = r$
 To obtain a random variable $x$ with density $f(x)$, we have to:
– determine $F(x)$, with $0 \le F(x) \le 1$
– generate random samples $r$
– set $r = F(x)$
– compute the inverse function $F^{-1}(\cdot)$
– obtain $x = F^{-1}(r)$
 It can be proven, but we skip the proof!

39. Generation of other distributions
 Inverse transform, example (1):
– we want to get $x$ such that $f(x) = 1/(b-a)$
– from uniform [0,1] to uniform [a,b]
– $F(x) = (x-a)/(b-a)$, with $0 \le F(x) \le 1$, $a \le x \le b$
– $r = F(x) = (x-a)/(b-a)$
– $x = r(b-a) + a$

40. Generation of other distributions
 Inverse transform, example (2):
– we want to get $x$ such that $f(x) = \lambda e^{-\lambda x}$ ($x \ge 0$)
– from uniform [0,1] to negative exponential with average $1/\lambda$
– $F(x) = 1 - e^{-\lambda x}$
– $r = F(x) = 1 - e^{-\lambda x}$
– $x = -\ln(1-r)/\lambda$
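Both examples condense into one-line transformations; a sketch, assuming r comes from a uniform (0,1) generator such as the ones above:

#include <math.h>

double uniform_ab(double r, double a, double b)
{
    return r * (b - a) + a;              /* x = F^-1(r) on [a,b] */
}

double exponential(double r, double lambda)
{
    return -log(1.0 - r) / lambda;       /* x = -ln(1-r)/lambda */
}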

41. Generation of other distributions
 We could show it more rigorously ...
– next 4 slides

42. Generation of other distributions
 Generation of an arbitrary distribution
 Elements of probability: the fundamental theorem of functions of random variables
 the pdf $f_Y(y)$ of the r.v. $Y = g(X)$ is given by:
 $f_Y(y) = \sum_i \frac{f_X(x_i)}{|g'(x_i)|}$
 where the $x_i$ are the solutions of the equation $y = g(x)$, and are in turn functions of $y$: $x_i = x_i(y)$

43. Generation of other distributions
 Example: $X$ is a r.v. with CDF $F_X(x)$. Consider $Y = F_X(X)$
 The fundamental theorem allows us to write:
 $f_Y(y) = \frac{f_X(x_1)}{F'_X(x_1)} = 1, \quad 0 \le y \le 1$
 where $x_1$ is the only solution of the equation $y = F_X(x)$, which exists only if $0 \le y \le 1$
 So $Y$ is uniform in (0,1)!

44. Generation of other distributions
 Synthesis of a r.v. using the percentile method:
 it is now easy to see that if you have:
– $U$: r.v. uniform in (0,1)
– to obtain a r.v. $X$ with CDF equal to $F(\cdot)$, it is enough to set:
 $X = F^{-1}(U)$

45. Generation of other distributions
 This can also be proven as follows:
 $U$ is a r.v. uniform in (0,1), so:
 $F_U(x) = 0$ for $x \le 0$; $F_U(x) = x$ for $0 \le x \le 1$; $F_U(x) = 1$ for $x \ge 1$
 (i.e., $f_U(x) = 1$ for $0 \le x \le 1$ and 0 elsewhere)
 Set $X = F^{-1}(U)$; it results:
 $F_X(t) = P\{X \le t\} = P\{F^{-1}(U) \le t\} = P\{U \le F(t)\} = F(t)$

46. Generation of other distributions
 The variable $U$ is obtained from the generation of pseudo-random numbers
 The problem remains of finding $F^{-1}(\cdot)$ for the variable that we want to synthesize
 for some processes $F^{-1}(\cdot)$ cannot be obtained in analytical form, and therefore we must resort to other methods
 moreover, for discrete random variables we need to slightly modify the approach

47. Generation of other distributions
 Example: exponential
– if we want to generate an exponential random variable $x$ with average $\eta > 0$:
 the pdf is: $f_X(x) = \frac{1}{\eta} e^{-x/\eta}$, $x \ge 0$
 we have: $F_X(x) = 1 - e^{-x/\eta}$, $x \ge 0$
 and therefore: $X = -\eta \ln(1-U)$, or equivalently $X = -\eta \ln U$

48. Generation of other distributions
 Example: Rayleigh
– if we want to generate a random variable with Rayleigh pdf:
 the pdf is: $f_X(x) = x e^{-x^2/2}$, $x \ge 0$
 we have: $F_X(x) = 1 - e^{-x^2/2}$, $x \ge 0$
 and therefore: $X = \sqrt{-2 \ln(1-U)}$, or equivalently $X = \sqrt{-2 \ln U}$

49. Generation of other distributions
 Example: Gaussian
– we want to generate a random variable $x$ with Gaussian pdf, with $\mu = 0$ and $\sigma = 1$
– to have a variable with average $\mu$ and variance $\sigma^2$, it is enough to use the transformation $z = \sigma x + \mu$
– the pdf is: $f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$
– it is well known that the CDF of the Gaussian cannot be expressed in closed form, so it cannot be inverted explicitly

50. Generation of other distributions
 Example: Gaussian
– a first approach is to use an approximation
– the central limit theorem tells us that the sum of $N$ r.v.'s tends to the normal distribution as $N$ increases
– usually for $N \ge 12$ we can assume a good approximation
– so it is enough to extract 12 uniform variables $U_i$:
 $X = \sum_{i=1}^{12} U_i - 6$
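A sketch of this approximation (rand() is only a stand-in for a proper uniform generator; the sum of 12 uniforms has mean 6 and variance 1):

#include <stdlib.h>

static double u01(void)      /* stand-in uniform (0,1) generator */
{
    return (rand() + 1.0) / ((double)RAND_MAX + 2.0);
}

double gauss_clt(void)
{
    double x = -6.0;                     /* centers the sum at 0 */
    for (int i = 0; i < 12; i++)
        x += u01();
    return x;                            /* approximately N(0,1) */
}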

51. Generation of other distributions
 Example: Gaussian
– a smarter approach obtains two independent samples of a normal random variable with only two extractions
– it is based on the observation that a 2-dimensional vector with Gaussian and independent Cartesian components has:
» magnitude with Rayleigh distribution
» phase uniform in $(0, 2\pi)$

52. Generation of other distributions
 Example: Gaussian
– therefore, two variables $U_1$ and $U_2$, uniform in (0,1), are extracted, and $X$ and $Y$ are computed as:
 $X = \sqrt{-2 \ln U_1} \cos(2\pi U_2)$
 $Y = \sqrt{-2 \ln U_1} \sin(2\pi U_2)$
– these are independent normal random variables
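A sketch of this transformation (commonly known as the Box-Muller method), again with rand() standing in for a proper uniform source:

#include <math.h>
#include <stdlib.h>

static double u01(void)
{
    return (rand() + 1.0) / ((double)RAND_MAX + 2.0);
}

void gauss_pair(double *x, double *y)
{
    const double two_pi = 6.283185307179586;
    double mag = sqrt(-2.0 * log(u01()));    /* Rayleigh magnitude */
    double ph  = two_pi * u01();             /* uniform phase */
    *x = mag * cos(ph);
    *y = mag * sin(ph);                      /* independent N(0,1) pair */
}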

53. Generation of other distributions
 Discrete random variables:
 consider a discrete random variable described by the probability distribution $P\{X = a_k\} = p_k$, $k = 1, \ldots, m$
 $F_X(x)$ is a staircase function, with a step of height $p_k$ at each value $a_k$

54. Generation of other distributions
 Discrete random variables
 When inverted, the relation expressing the variable is therefore:
– set $x = a_k$ only if:
 $p_1 + \ldots + p_{k-1} \le u < p_1 + \ldots + p_{k-1} + p_k$
 NB: «u» is what we used to call «r», i.e., the pseudo-random number between 0 and 1

55. Generation of other distributions
 Discrete random variables:
 Example: generate a random variable $X$ that takes value 1 with probability $p$ and value 0 with probability $1-p$
 It is enough to set:
 $X = 1$ if $0 \le u < p$; $X = 0$ if $p \le u \le 1$
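In general, the rule «set x = a_k when the cumulative sum first exceeds u» becomes a linear scan; a sketch:

int discrete_sample(const double *p, int m, double u)
{
    double cum = 0.0;
    for (int k = 0; k < m; k++) {
        cum += p[k];             /* p_1 + ... + p_k */
        if (u < cum)
            return k;            /* index of the chosen value a_k */
    }
    return m - 1;                /* guard against rounding errors */
}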

56. Generation of other distributions
 Discrete random variables:
 the approach becomes extremely complicated for discrete distributions with infinite $m$
 we must stop at a finite value of $m$
 $m$ determines the number of comparisons that must be done in the assignment routine, and thus the speed of the routine itself
 in some cases it is possible to adopt some tricks

57. Generation of other distributions
 Example: geometric distribution
 We have: $P\{X = k\} = p(1-p)^{k-1}$, $k = 1, 2, \ldots$
 Consider an exponential r.v. $Z$ with average $\lambda$; we have:
 $P\{n \le Z < n+1\} = (1 - e^{-(n+1)/\lambda}) - (1 - e^{-n/\lambda}) = e^{-n/\lambda}(1 - e^{-1/\lambda})$
 this value matches $P\{X = n+1\}$ if we require that:
 $1 - p = e^{-1/\lambda}$, i.e., $\lambda = -\frac{1}{\ln(1-p)}$

58. Generation of other distributions
 Example: geometric distribution
 Therefore, to generate a geometric variable it is enough to:
– 1. generate a uniform variable $U$ in (0,1)
– 2. get an exponential variable $Z = \frac{\ln U}{\ln(1-p)}$
– 3. set $X = \lfloor Z \rfloor + 1$
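A sketch of the three steps (floor(Z) + 1 maps the exponential sample onto the geometric support {1, 2, ...}):

#include <math.h>

int geometric(double u, double p)        /* u uniform in (0,1) */
{
    double z = log(u) / log(1.0 - p);    /* exponential variable Z */
    return (int)floor(z) + 1;            /* X = floor(Z) + 1 */
}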

59. Generation of other distributions
 Example: Poisson distribution
 with the Poisson distribution, $P\{X = k\} = e^{-a} \frac{a^k}{k!}$, things get complicated, and there are no shortcuts:
– 1. set $k := 0$, $A := e^{-a}$, $p := A$
– 2. $U := \operatorname{rand}(seed)$
– 3. while $U > p$:
» $k := k + 1$
» $A := A \cdot a / k$
» $p := p + A$
– 4. return $k$
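A direct transcription of the four steps (u is assumed to come from a uniform (0,1) generator):

#include <math.h>

int poisson(double u, double a)
{
    int k = 0;
    double A = exp(-a);          /* P(X = 0) */
    double p = A;                /* running c.d.f. */
    while (u > p) {
        k++;
        A *= a / k;              /* P(X = k) from P(X = k-1) */
        p += A;
    }
    return k;
}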

60. Analysis and validation of the results
 Once we have built the simulation model and the software that implements it, we shall:
 decide what to measure (which output variables)
 decide the statistical metric (average? variance?)
– note that the output variables are r.v.'s!
 repeat the experiment multiple times!!
 adopt the appropriate estimators for the parameters
 evaluate the accuracy («confidence») of the estimation

61. Analysis and validation of the results
 Estimation problem: estimation of the average value
 given a population whose distribution is $f(x)$, with average $E[x] = \eta$ and variance $\sigma^2(x) = \sigma^2$
 $[x_1, x_2, \ldots, x_n]$ are $n$ independent observations
 the average value of the samples is defined by:
 $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$

62. Analysis and validation of the results
 Estimation problem: estimation of the average value
 the average of the samples is itself a r.v., with:
 $E[\bar{x}] = \eta; \quad \sigma^2(\bar{x}) = \frac{\sigma^2}{n}$
 for large $n$, the average of the samples is a normal variable, and then the variable
 $z = \frac{\bar{x} - \eta}{\sigma / \sqrt{n}}$
 can be assumed normal with zero average and unit variance, based on the central limit theorem

63. Analysis and validation of the results
 Estimation problem: estimation of the average value
 the normal distribution $F(z)$ is tabulated
 $u_{1-\alpha/2}$ is the value such that $F(u_{1-\alpha/2}) = 1 - \alpha/2$
 we have:
 $P\{-u_{1-\alpha/2} \le z \le u_{1-\alpha/2}\} = 1 - \alpha$
 $P\left\{-u_{1-\alpha/2} \le \frac{\bar{x} - \eta}{\sigma/\sqrt{n}} \le u_{1-\alpha/2}\right\} = 1 - \alpha$

64. Analysis and validation of the results
 Estimation problem: estimation of the average value
 and therefore:
 $P\left\{\bar{x} - u_{1-\alpha/2}\frac{\sigma}{\sqrt{n}} \le \eta \le \bar{x} + u_{1-\alpha/2}\frac{\sigma}{\sqrt{n}}\right\} = 1 - \alpha$
 the constant $(1-\alpha)$ is usually expressed as a percentage and is called the confidence level
 the interval $\left[\bar{x} - u_{1-\alpha/2}\frac{\sigma}{\sqrt{n}},\; \bar{x} + u_{1-\alpha/2}\frac{\sigma}{\sqrt{n}}\right]$ is called the confidence interval

65. Analysis and validation of the results
 Estimation problem: estimation of the average value
 commonly we adopt a confidence level of 95%, for which we have:
 $\alpha = 0.05; \quad u_{1-\alpha/2} = 1.96$
 this means that $\eta$ falls in the range
 $\left[\bar{x} - 1.96\frac{\sigma}{\sqrt{n}},\; \bar{x} + 1.96\frac{\sigma}{\sqrt{n}}\right]$
 with a probability of 95%

66. Analysis and validation of the results
 Estimation problem: estimation of the average value
 unfortunately, the variance $\sigma^2$ is not known
 $\sigma^2$ should be replaced by the sample variance, defined as:
 $s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2$
 in this way, however, the variable
 $t = \frac{\bar{x} - \eta}{s / \sqrt{n}}$
 is no longer normal, but has a Student's t distribution with $n-1$ degrees of freedom

67. Analysis and validation of the results
 Estimation problem: estimation of the average value
 for large $n$ (> 30) it is possible to approximate the Student's t with the normal distribution
 for smaller values of $n$ it is necessary to use the Student's t distribution with the corresponding number of degrees of freedom (see the sketch below)
 Note: for Monte Carlo simulations, values of $n > 30$ are quite common, while for temporal simulations $n$ is usually smaller
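Putting the last slides together, a sketch of the confidence-interval computation (1.96 is the 95% normal quantile, valid for large n; for small n one would replace it with the Student's t value, e.g., via the tval() routine shown later):

#include <math.h>
#include <stdio.h>

void conf_interval_95(const double *x, int n)      /* requires n >= 2 */
{
    double mean = 0.0, s2 = 0.0;
    for (int i = 0; i < n; i++)
        mean += x[i];
    mean /= n;                                     /* sample mean */
    for (int i = 0; i < n; i++)
        s2 += (x[i] - mean) * (x[i] - mean);
    s2 /= (n - 1);                                 /* sample variance */
    double half = 1.96 * sqrt(s2 / n);             /* half-width */
    printf("mean in [%g, %g] at ~95%% confidence\n",
           mean - half, mean + half);
}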

68. Analysis and validation of the results
 Table of Student's t values
 Warning: the table is indexed by $\beta = 1 - \alpha/2$ and $k = n - 1$ degrees of freedom

69. Analysis and validation of the results
 Generation of Student's t values:

// t-distribution: given p-value and degrees of freedom,
// return t-value; adapted from Peizer & Pratt, JASA, vol. 63, p. 1416
#include <math.h>

double tval(double p, int df)
{
    double t;
    int positive = p >= 0.5;
    p = (positive) ? 1.0 - p : p;
    if (p <= 0.0 || df <= 0)
        t = HUGE_VAL;
    else if (p == 0.5)
        t = 0.0;
    else if (df == 1)
        t = 1.0 / tan((p + p) * 1.57079633);
    else if (df == 2)
        t = sqrt(1.0 / ((p + p) * (1.0 - p)) - 2.0);
    else {
        double ddf = df;
        double a = sqrt(log(1.0 / (p * p)));
        double aa = a * a;
        a = a - ((2.515517 + 0.802853 * a + 0.010328 * aa)
                 / (1.0 + 1.432788 * a + 0.189269 * aa + 0.001308 * aa * a));
        t = ddf - 0.666666667 + 1.0 / (10.0 * ddf);
        t = sqrt(ddf * (exp(a * a * (ddf - 0.833333333) / (t * t)) - 1.0));
    }
    return (positive) ? t : -t;
}

70. Analysis and validation of the results
 Operations on confidence intervals
 let's denote the confidence intervals of two variables as:
 $P\{x_l \le \bar{x} \le x_u\} = 1 - \alpha_x; \quad P\{y_l \le \bar{y} \le y_u\} = 1 - \alpha_y$
 it can be proven that:
 $P\{A x_l + B \le A\bar{x} + B \le A x_u + B\} = 1 - \alpha_x$
 $P\{x_l + y_l \le \bar{x} + \bar{y} \le x_u + y_u\} \ge 1 - \alpha_x - \alpha_y$

71. Analysis and validation of the results
 On the statistical confidence of the variance estimation
 a direct method for evaluating the confidence of the variance estimation uses the expression
 $\sigma^2(x) = E[x^2] - E^2[x]$
 from the populations $[x_1, x_2, \ldots, x_n]$ and $[x_1^2, x_2^2, \ldots, x_n^2]$ it is possible to estimate the confidence intervals of the averages of $x$ and $x^2$
 the two intervals can then be combined using the previous expressions

72. Analysis and validation of the results
 Estimation problem:
 the results seen so far are based on the fundamental assumptions that:
– the observed variables are stationary
– the measurements are not affected by the initial state
– the observations are independent
 the hypothesis of independence is the most difficult to obtain and verify in practical cases
 the independence of the observations depends on the correlation characteristics of the observed variables, which are not known

73. Analysis and validation of the results
 Estimation problem: correlated observations
 the estimator of the average continues to be an unbiased estimator: $\eta = E[\bar{x}]$
 but its variance is now equal to:
 $\sigma^2(\bar{x}) = \frac{\sigma^2}{n}\left[1 + 2\sum_{k=1}^{n-1}\left(1 - \frac{k}{n}\right)\rho_k\right]$
 where the correlation coefficient $\rho_k$ is:
 $\rho_k = \frac{E[(x_i - \eta)(x_{i+k} - \eta)]}{\sigma^2}$

74. Analysis and validation of the results
 Estimation problem: correlated observations
 the estimation of the confidence interval thus requires knowledge of the autocorrelation function of the process, which is generally not known
 we could use autocorrelation estimators, but the complexity and computational load would become excessive
 in practice, two different approaches are used to build independent sequences

75. Analysis and validation of the results
 Method of repeated trials for correlated observations
 1) repeated trials:
– N independent observations of the process are built by repeating the simulation N times with N different random number generators
– the N values estimated in each simulation are used as independent samples for the evaluation of the confidence
 this approach is in fact a generalization of the Monte Carlo simulation
 it is useful in many practical situations, but in practice it is used only when the second method cannot be applied

76. Analysis and validation of the results
 Method of subdivision into intervals of observation (runs)
 2) subdivision into intervals of observation (runs):
– the simulation is divided into N blocks, each consisting of K observations
– the average of the output variable is evaluated in each block
– it can be shown that, for sufficiently large K, the block averages are approximately independent
– the confidence interval is estimated on the basis of the estimates obtained in each run (see the sketch below)
 this approach is approximate
 sometimes it may not be easy to ensure that the number K of observations is the same for each run
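A sketch of the batch-means computation (N blocks of K observations each; the block averages are then fed to the confidence-interval estimate above):

void batch_means(const double *x, int N, int K, double *block_avg)
{
    for (int b = 0; b < N; b++) {
        double sum = 0.0;
        for (int i = 0; i < K; i++)
            sum += x[b * K + i];         /* observations of block b */
        block_avg[b] = sum / K;          /* (nearly) independent sample */
    }
}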

77. Analysis and validation of the results
 Estimation problem: correlated observations
 Example:
 consider an mD/D/1 queue
 m flows with deterministic interarrival time T are offered to a server
 the service time is also deterministic and equal to S
 the relative phases of the flows are random (uniform between 0 and T)
 it can be shown that the delays are periodic and depend only on the initial phases
 we must repeat the experiment a sufficiently large number N of times, with random phases, to obtain a valid estimate of the average delay

78. Analysis and validation of the results
 Estimation problem: correlated observations
 in some cases the measured process is a renewal process, and we can exploit this to obtain independent observations
 a renewal process is characterized by a series of renewal instants $[b_1, b_2, b_3, \ldots]$
 at these instants, the process returns to the same state
 the evolution of the process in the intervals $[b_{n-1}, b_n]$ is independent from interval to interval
 measurements taken on the process in distinct intervals are independent, and the formulas for the estimation of confidence can be applied

79. Analysis and validation of the results
 Estimation problem: correlated observations
 Example 1:
– it is easy to convince yourself that, for queuing systems with general arrivals and general services, an instant at which a new request arrives and finds the system empty is a renewal instant for the entire system. Indeed:
– the system state is the same
– a new interarrival period has just started, and therefore there is no memory of past arrivals
– a new service period has just started, and therefore there is no memory of past services
– the system is empty, and therefore there is no memory of waiting users

80. Analysis and validation of the results
 Estimation problem: correlated observations
 Example 2:
– consider an M/G/1 queue system
– we run a simulation to measure the delay through the system
– we take as samples the delays experienced by the individual users
– it is easy to convince yourself that these samples are correlated
– indeed, for example, if the first arrival finds the system empty, the immediately following arrivals observe low delays, while consecutive arrivals at a very high system load observe high delays
– dividing the simulation into runs of the same time length, we do not control the number K_i of samples in each run, and long runs are needed to obtain a low dispersion of the K_i

81. Analysis and validation of the results
 Estimation problem:
 even assuming we have solved the independence problem, the problem of stationarity remains
 although the process is stationary, we are forced to start the simulation from an initial state
 the initial state influences the statistics gathered in the first part of the simulation, until the system reaches a stationary behavior
 the simplest approach is to eliminate from the statistics the results collected during this initial interval
