
EE456 Digital Communications, Professor Ha Nguyen, September 2015



  1. Chapter 3: Probability, Random Variables, Random Processes. EE456 – Digital Communications, Professor Ha Nguyen, September 2015.

  2. What is a Random Variable?

[Figure 1: An example of a random variable: a mapping from the sample space Ω to the real numbers 1, 2, 3, 4, 5, 6.]

Sample space: the collection of all possible outcomes of a random experiment. Strictly speaking, a random variable is a mapping from the sample space to the set of real numbers. Loosely speaking, a random variable is a numerical quantity that can take on different values.
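As a minimal sketch of the mapping idea (my example in Matlab, the tool used later in these slides; reading Figure 1 as a die experiment is an assumption), each simulated roll below is an outcome in Ω that the random variable maps to one of the real numbers 1 through 6:

```matlab
% Simulate 10 rolls of a fair die: each experiment outcome in the sample
% space (the six faces) is mapped to one of the real numbers 1, ..., 6
rolls = randi([1 6], 1, 10)
```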

  3. Cumulative Distribution Function (cdf)

Of course, not all random variables are the same. A random variable is completely characterized (described) by its cumulative distribution function (cdf) or by its probability density function (pdf). A cdf or a pdf can be used to determine the probability (i.e., chance, level of confidence) that a random variable takes a value in a certain range (negative, positive, between 1 and 2, etc.).

The cdf of a random variable $\mathbf{x}$ is defined as the probability that $\mathbf{x} \le x$:
$F_{\mathbf{x}}(x) = P(\mathbf{x} \le x)$.

The cdf has the following properties (a numerical check appears after this list):
1. $0 \le F_{\mathbf{x}}(x) \le 1$.
2. $F_{\mathbf{x}}(x)$ is nondecreasing: $F_{\mathbf{x}}(x_1) \le F_{\mathbf{x}}(x_2)$ if $x_1 \le x_2$.
3. $F_{\mathbf{x}}(-\infty) = 0$ and $F_{\mathbf{x}}(+\infty) = 1$.
4. $P(a < \mathbf{x} \le b) = F_{\mathbf{x}}(b) - F_{\mathbf{x}}(a)$.
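As a quick numerical check of these properties (a sketch, not part of the original slides), the cdf of a $\mathcal{N}(0,1)$ random variable can be written with the base-Matlab erf function; normcdf would do the same if the Statistics Toolbox is available:

```matlab
% cdf of a N(0,1) random variable: F(x) = 0.5*(1 + erf(x/sqrt(2)))
F = @(x) 0.5*(1 + erf(x/sqrt(2)));

F(-10)        % property 3: very close to F(-inf) = 0
F(10)         % property 3: very close to F(+inf) = 1
F(2) - F(1)   % property 4: P(1 < x <= 2)
```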

  4. Typical Plots of a cdf

A random variable can be discrete, continuous, or mixed.

[Figure: three typical cdf plots, (a), (b), and (c), one for each type of random variable; each rises from 0 at −∞ to 1.0 at +∞.]

  5. Probability Density Function (pdf)

The pdf is defined as the derivative of the cdf:
$f_{\mathbf{x}}(x) = \dfrac{\mathrm{d}F_{\mathbf{x}}(x)}{\mathrm{d}x}$.

Basic properties of a pdf (a numerical check appears after this list):
1. $f_{\mathbf{x}}(x) \ge 0$ (a valid pdf must be nonnegative).
2. $\int_{-\infty}^{\infty} f_{\mathbf{x}}(x)\,\mathrm{d}x = 1$ (the total area under a pdf curve must be 1).
3. $P(x_1 \le \mathbf{x} \le x_2) = P(\mathbf{x} \le x_2) - P(\mathbf{x} \le x_1) = F_{\mathbf{x}}(x_2) - F_{\mathbf{x}}(x_1) = \int_{x_1}^{x_2} f_{\mathbf{x}}(x)\,\mathrm{d}x$.
4. In general, $P(\mathbf{x} \in \Pi) = \int_{\Pi} f_{\mathbf{x}}(x)\,\mathrm{d}x$, the area under $f_{\mathbf{x}}(x)$ over the set $\Pi$.
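These properties can be verified numerically; the sketch below (my setup, not from the slides) integrates a $\mathcal{N}(0,1)$ pdf with Matlab's trapz:

```matlab
% Numerical check of the pdf properties for N(0,1)
x = linspace(-8, 8, 1e5);
f = exp(-x.^2/2)/sqrt(2*pi);   % Gaussian pdf with mu = 0, sigma^2 = 1

trapz(x, f)                    % property 2: total area, very close to 1
idx = (x >= 1 & x <= 2);
trapz(x(idx), f(idx))          % property 3: P(1 <= x <= 2) = F(2) - F(1)
```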

  6. Bernoulli Random Variable

[Figure: pdf and cdf of a Bernoulli random variable, with probability masses 1 − p at 0 and p at 1.]

A discrete random variable that takes the two values 1 and 0 with probabilities p and 1 − p, respectively. A good model for a binary data source whose output is 1 or 0. Can also be used to model channel errors.
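A minimal simulation sketch (my example, with a hypothetical p = 0.3): comparing rand against a threshold produces Bernoulli outcomes, and the empirical frequency of 1s approaches p:

```matlab
% Bernoulli source: 1 with probability p, 0 with probability 1 - p
p = 0.3;                          % hypothetical parameter value
bits = double(rand(1, 1e5) < p);  % 1e5 simulated binary symbols
mean(bits)                        % empirical P(x = 1), close to p
```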

  7. Uniform Random Variable

[Figure: pdf (constant at 1/(b − a) for a ≤ x ≤ b) and cdf of a uniform random variable.]

A continuous random variable that takes values between a and b with equal probabilities over intervals of equal length. The phase of a received sinusoidal carrier is usually modeled as a uniform random variable between 0 and 2π. Quantization error is also typically modeled as uniform.
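A short sketch (my example): a uniform random variable on (a, b) is obtained by scaling and shifting rand, e.g. the carrier-phase model on (0, 2π):

```matlab
% Uniform random variable on (a, b): a + (b - a)*rand
theta = 2*pi*rand(1, 1e5);   % carrier phase: uniform on (0, 2*pi)
mean(theta)                  % close to (a + b)/2 = pi
```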

  8. Gaussian (or Normal) Random Variable

[Figure: pdf (bell curve with peak $1/\sqrt{2\pi\sigma^2}$ at $x = \mu$) and cdf of a Gaussian random variable.]

A continuous random variable whose pdf is
$f_{\mathbf{x}}(x) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right)$,
where $\mu$ and $\sigma^2$ are parameters. Usually denoted as $\mathcal{N}(\mu, \sigma^2)$. The most important and most frequently encountered random variable in communications.
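A short sketch (hypothetical µ and σ values of my choosing): Matlab's randn generates $\mathcal{N}(0,1)$ samples, which scale and shift to $\mathcal{N}(\mu, \sigma^2)$; the pdf formula above can be evaluated and plotted directly:

```matlab
% N(mu, sigma^2) samples and the pdf formula from this slide
mu = 1; sigma = 2;                  % hypothetical parameter values
x = mu + sigma*randn(1, 1e5);       % scaled/shifted N(0,1) samples
t = linspace(-8, 10, 400);
f = exp(-(t - mu).^2/(2*sigma^2))/sqrt(2*pi*sigma^2);
plot(t, f)                          % bell curve centered at mu
```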

  9. Uniform or Gaussian?

[Figure: four panels of 100 observed outcomes plotted against trial number, two with outcomes spanning −2 to 2 and two spanning −5 to 5; the exercise is to decide which observation sets come from a uniform random variable and which from a Gaussian one.]

  10. Gaussian RVs with Different Average (Mean) Values

Notice the connection between the pdf and the observed values.

[Figure: two rows, each pairing a Gaussian pdf $f_{\mathbf{x}}(x)$ with 100 observed outcomes plotted against trial number; the two random variables have different mean values.]

  11. Gaussian RVs with Different Average (Mean) Squared Values

Notice the connection between the pdf and the observed values.

[Figure: two rows, each pairing a Gaussian pdf $f_{\mathbf{x}}(x)$ with 100 observed outcomes plotted against trial number; the two random variables have different mean-squared values.]

  12. Histogram of Observed Values

Predicting the pdf based on the observed values (a sketch of this procedure appears below).

[Figure: two rows, each pairing 1000 observed outcomes plotted against trial number with a histogram of counts per bin.]
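One way to realize this idea in Matlab (a sketch; histogram with the 'pdf' normalization assumes R2014b or newer): a histogram scaled to unit area approximates the pdf and can be overlaid on the true curve:

```matlab
% Predict the pdf from observations: normalize the histogram to unit area
x = randn(1, 1e4);                          % observations of N(0,1)
histogram(x, 50, 'Normalization', 'pdf');   % area-1 histogram
hold on
t = linspace(-4, 4, 200);
plot(t, exp(-t.^2/2)/sqrt(2*pi))            % true N(0,1) pdf for comparison
```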

  13. Expectations (Statistical Averages) of a Random Variable

Expectations (statistical averages, or moments) play an important role in the characterization of a random variable.

The expected value (also called the mean value, or first moment) of the random variable $\mathbf{x}$ is defined as
$m_{\mathbf{x}} = E\{\mathbf{x}\} \equiv \int_{-\infty}^{\infty} x f_{\mathbf{x}}(x)\,\mathrm{d}x$,
where $E$ denotes the statistical expectation operator.

In general, the $n$th moment of $\mathbf{x}$ is defined as
$E\{\mathbf{x}^n\} \equiv \int_{-\infty}^{\infty} x^n f_{\mathbf{x}}(x)\,\mathrm{d}x$.
For $n = 2$, $E\{\mathbf{x}^2\}$ is known as the mean-squared value of the random variable.

The $n$th central moment of the random variable $\mathbf{x}$ is
$E\{(\mathbf{x} - m_{\mathbf{x}})^n\} = \int_{-\infty}^{\infty} (x - m_{\mathbf{x}})^n f_{\mathbf{x}}(x)\,\mathrm{d}x$.

When $n = 2$ the central moment is called the variance, commonly denoted as $\sigma_{\mathbf{x}}^2$:
$\sigma_{\mathbf{x}}^2 = \mathrm{var}(\mathbf{x}) = E\{(\mathbf{x} - m_{\mathbf{x}})^2\} = \int_{-\infty}^{\infty} (x - m_{\mathbf{x}})^2 f_{\mathbf{x}}(x)\,\mathrm{d}x$.
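The defining integrals can be evaluated numerically; the sketch below (my example, with hypothetical parameters µ = 1, σ² = 4) approximates the first moment, the mean-squared value, and the variance with trapz:

```matlab
% Moments of N(1, 4) by numerical integration of the defining integrals
mu = 1; sigma2 = 4;
x = linspace(-20, 22, 1e5);
f = exp(-(x - mu).^2/(2*sigma2))/sqrt(2*pi*sigma2);

m1 = trapz(x, x.*f)             % mean: about mu = 1
m2 = trapz(x, x.^2.*f)          % mean-squared value: about mu^2 + sigma2 = 5
v  = trapz(x, (x - m1).^2.*f)   % variance (2nd central moment): about 4
```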

  14. Expectations of a Random Variable (continued)

The variance provides a measure of the variable's "randomness". The mean and variance of a random variable give a partial description of its pdf.

Relationship between the variance and the first and second moments (checked on simulated data below):
$\sigma_{\mathbf{x}}^2 = E\{\mathbf{x}^2\} - [E\{\mathbf{x}\}]^2 = E\{\mathbf{x}^2\} - m_{\mathbf{x}}^2$.

An electrical engineering interpretation: the AC power equals the total power minus the DC power. The square root of the variance is known as the standard deviation, and can be interpreted as the root-mean-squared (RMS) value of the AC component.
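A simulation check of this identity (a sketch with hypothetical $\mathcal{N}(1, 4)$ data; var(x, 1) requests the 1/N normalization, matching the empirical averages on Slide 16):

```matlab
% sigma_x^2 = E{x^2} - (E{x})^2, i.e., AC power = total power - DC power
x = 1 + 2*randn(1, 1e6);    % hypothetical N(1, 4) samples
mean(x.^2) - mean(x)^2      % total power minus DC power
var(x, 1)                   % same value, up to floating-point error
```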

  15. Examples

Example 1: Find the mean and variance of the Bernoulli random variable defined on Slide 6.

Example 2: Find the mean and variance of the uniform random variable defined on Slide 7.

Example 3: Prove that the mean and variance of the Gaussian random variable defined on Slide 8 are exactly $\mu$ and $\sigma^2$, respectively.

Example 4: Consider two random variables $\mathbf{x}$ and $\mathbf{y}$ with the pdfs given in the figure below. Then answer the following: (a) Are the means of $\mathbf{x}$ and $\mathbf{y}$ the same or different? (b) Are the variances of $\mathbf{x}$ and $\mathbf{y}$ the same or different? If they are different, which random variable has the larger variance? (c) Compute the means and variances of $\mathbf{x}$ and $\mathbf{y}$.

[Figure: the pdfs $f_{\mathbf{y}}(y)$ and $f_{\mathbf{x}}(x)$, each supported on (−1, 1).]
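For reference, the standard closed forms are: Bernoulli mean p and variance p(1 − p); uniform on (a, b) mean (a + b)/2 and variance (b − a)²/12. A simulation sketch (hypothetical parameter values of my choosing) that Examples 1 and 2 can be checked against:

```matlab
% Simulation check for Examples 1 and 2
p = 0.25; a = 0; b = 2;         % hypothetical parameter values
xb = double(rand(1, 1e6) < p);  % Bernoulli samples
[mean(xb), var(xb, 1)]          % about [0.25, 0.1875] = [p, p*(1-p)]
xu = a + (b - a)*rand(1, 1e6);  % uniform samples on (a, b)
[mean(xu), var(xu, 1)]          % about [1, 0.3333] = [(a+b)/2, (b-a)^2/12]
```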

  16. Statistical (or Ensemble) Averages vs. Empirical (Sample) Averages

Statistical averages of a random variable are found from its pdf. For the statistical averages to be useful, the pdf has to be accurate enough. Empirical averages are obtained from actual observations (no pdf is needed).

Let $x_1, x_2, \ldots, x_N$ be actual observations (outcomes) of random variable $\mathbf{x}$. The empirical averages of $\mathbf{x}$ and $\mathbf{x}^2$ are calculated as
$\bar{x} = \frac{1}{N}\sum_{n=1}^{N} x_n$, (1)
$\overline{x^2} = \frac{1}{N}\sum_{n=1}^{N} x_n^2$. (2)
If the number of observations $N$ (i.e., the data size) is large enough, the above empirical averages should be very close to the statistical averages $E\{\mathbf{x}\} = \int_{-\infty}^{\infty} x f_{\mathbf{x}}(x)\,\mathrm{d}x$ and $E\{\mathbf{x}^2\} = \int_{-\infty}^{\infty} x^2 f_{\mathbf{x}}(x)\,\mathrm{d}x$.

Example 1: The Matlab command randn(1,10) generates 10 values of a Gaussian random variable whose pdf is $\mathcal{N}(0, 1)$. Compute the "mean" and "mean-squared value" of the random variable based on the following actual observations:
0.5377  1.8339  -2.2588  0.8622  0.3188  -1.3077  -0.4336  0.3426  3.5784  2.7694

Example 2: Repeat the calculations for 10 values of a uniform random variable generated by 2*rand(1,10):
1.3115  0.0714  1.6983  1.8680  1.3575  1.5155  1.4863  0.7845  1.3110  0.3424
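Example 1 can be worked directly from equations (1) and (2); a sketch using the listed observations (the printed values are my computed approximations, not from the slides):

```matlab
% Empirical averages for the 10 Gaussian observations in Example 1
x = [0.5377 1.8339 -2.2588 0.8622 0.3188 -1.3077 ...
     -0.4336 0.3426 3.5784 2.7694];
mean(x)       % sample mean: approximately 0.6243
mean(x.^2)    % sample mean-squared value: approximately 3.2090
% With only N = 10 observations these differ noticeably from the
% statistical averages E{x} = 0 and E{x^2} = 1 of N(0,1)
```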
