Random Signals and Noise
Distribution Functions
The distribution function of a random variable X is the probability that it is less than or equal to some value, as a function of that value:
$$F_X(x) = P[X \le x]$$
Since the distribution function is a probability, it must satisfy the requirements for a probability:
$$0 \le F_X(x) \le 1, \quad -\infty < x < \infty$$
$$P[x_1 < X \le x_2] = F_X(x_2) - F_X(x_1)$$
$F_X(x)$ is a monotonic function and its derivative is never negative.
Distribution Functions
The distribution function for tossing a single die:
$$F_X(x) = \frac{1}{6}\left[u(x-1) + u(x-2) + u(x-3) + u(x-4) + u(x-5) + u(x-6)\right]$$
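As a quick numerical illustration (not from the original slides), this staircase distribution function can be evaluated directly from the sum of unit steps; the Python helper names below are mine.

```python
import numpy as np

def u(x):
    """Unit step: 1 where x >= 0, else 0."""
    return (np.asarray(x) >= 0).astype(float)

def die_cdf(x):
    """F_X(x) = (1/6)[u(x-1) + ... + u(x-6)] for a fair die (helper name is mine)."""
    return sum(u(x - k) for k in range(1, 7)) / 6.0

print(die_cdf(0.5))   # 0.0: no outcomes are <= 0.5
print(die_cdf(3.0))   # 0.5: outcomes 1, 2, 3
print(die_cdf(6.0))   # 1.0: all outcomes
```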
Distribution Functions A possible distribution function for a continuous random variable
Probability Density
The derivative of the distribution function is the probability density function (PDF):
$$p_X(x) \equiv \frac{d}{dx} F_X(x)$$
Probability density can also be defined by
$$p_X(x)\,dx = P[x < X \le x + dx]$$
Properties:
$$p_X(x) \ge 0, \quad -\infty < x < +\infty \qquad \int_{-\infty}^{\infty} p_X(x)\,dx = 1$$
$$F_X(x) = \int_{-\infty}^{x} p_X(\lambda)\,d\lambda \qquad P[x_1 < X \le x_2] = \int_{x_1}^{x_2} p_X(x)\,dx$$
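A minimal Python sketch, assuming a unit-rate exponential density purely as an example, checking these properties numerically:

```python
import numpy as np

x = np.linspace(0.0, 20.0, 200001)
p = np.exp(-x)                        # assumed example PDF: p_X(x) = e^{-x}, x >= 0

print(np.trapz(p, x))                 # ~1: the PDF integrates to one
F = np.cumsum(p) * (x[1] - x[0])      # running integral approximates F_X(x)
print(F[np.searchsorted(x, 1.0)])     # ~1 - e^{-1} ≈ 0.632 = F_X(1)

mask = (x > 0.5) & (x <= 1.5)
print(np.trapz(p[mask], x[mask]))     # ~F_X(1.5) - F_X(0.5) = P[0.5 < X <= 1.5]
```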
Expectation and Moments
Imagine an experiment with M possible distinct outcomes performed N times. The average of those N outcomes is
$$\bar{X} = \frac{1}{N}\sum_{i=1}^{M} n_i x_i$$
where $x_i$ is the $i$th distinct value of X and $n_i$ is the number of times that value occurred. Then
$$\bar{X} = \frac{1}{N}\sum_{i=1}^{M} n_i x_i = \sum_{i=1}^{M} \frac{n_i}{N}\, x_i = \sum_{i=1}^{M} r_i x_i.$$
The expected value of X is
$$E(X) = \lim_{N\to\infty} \sum_{i=1}^{M} \frac{n_i}{N}\, x_i = \lim_{N\to\infty} \sum_{i=1}^{M} r_i x_i = \sum_{i=1}^{M} P[X = x_i]\, x_i.$$
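A simulation sketch of this limit for a fair die (my own illustration, not from the slides): as N grows, the relative frequencies $n_i/N$ approach $P[X = x_i]$ and the average approaches $E(X) = 3.5$.

```python
import numpy as np

rng = np.random.default_rng(0)
for n_tosses in (100, 10_000, 1_000_000):
    rolls = rng.integers(1, 7, size=n_tosses)        # n_tosses throws of a fair die
    values, counts = np.unique(rolls, return_counts=True)
    r = counts / n_tosses                            # relative frequencies n_i / N
    print(n_tosses, np.sum(values * r))              # approaches E(X) = 3.5
```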
Expectation and Moments
The probability that X lies within some small range can be approximated by
$$P\left[x_i - \frac{\Delta x}{2} < X \le x_i + \frac{\Delta x}{2}\right] \cong p_X(x_i)\,\Delta x$$
and the expected value is then approximated by
$$E(X) = \sum_{i=1}^{M} x_i\, P\left[x_i - \frac{\Delta x}{2} < X \le x_i + \frac{\Delta x}{2}\right] \cong \sum_{i=1}^{M} x_i\, p_X(x_i)\,\Delta x$$
where M is now the number of subdivisions of width $\Delta x$ of the range of the random variable.
Expectation and Moments
In the limit as $\Delta x$ approaches zero,
$$E(X) = \int_{-\infty}^{\infty} x\, p_X(x)\,dx.$$
Similarly,
$$E(g(X)) = \int_{-\infty}^{\infty} g(x)\, p_X(x)\,dx.$$
The $n$th moment of a random variable is
$$E(X^n) = \int_{-\infty}^{\infty} x^n\, p_X(x)\,dx.$$
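A numerical-integration sketch of the $n$th-moment formula, again assuming a unit-rate exponential PDF as the example (its $n$th moment is $n!$):

```python
import numpy as np
from math import factorial

x = np.linspace(0.0, 60.0, 600001)
p = np.exp(-x)                        # assumed example PDF with E(X^n) = n!

for n in range(1, 5):
    moment = np.trapz(x**n * p, x)    # E(X^n) = integral of x^n p_X(x) dx
    print(n, moment, factorial(n))    # numerical vs exact: 1, 2, 6, 24
```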
Expectation and Moments
The first moment of a random variable is its expected value,
$$E(X) = \int_{-\infty}^{\infty} x\, p_X(x)\,dx.$$
The second moment of a random variable is its mean-squared value (which is the mean of its square, not the square of its mean),
$$E(X^2) = \int_{-\infty}^{\infty} x^2\, p_X(x)\,dx.$$
Expectation and Moments
A central moment of a random variable is the moment of that random variable after its expected value is subtracted:
$$E\left[\left(X - E(X)\right)^n\right] = \int_{-\infty}^{\infty} \left(x - E(X)\right)^n p_X(x)\,dx$$
The first central moment is always zero. The second central moment (for real-valued random variables) is the variance,
$$\sigma_X^2 = E\left[\left(X - E(X)\right)^2\right] = \int_{-\infty}^{\infty} \left(x - E(X)\right)^2 p_X(x)\,dx.$$
The positive square root of the variance is the standard deviation.
Expectation and Moments
Properties of expectation:
$$E(a) = a, \qquad E(aX) = a\,E(X), \qquad E\left(\sum_n X_n\right) = \sum_n E(X_n)$$
where $a$ is a constant. These properties can be used to prove the handy relationship
$$\sigma_X^2 = E(X^2) - E^2(X).$$
The variance of a random variable is the mean of its square minus the square of its mean.
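A quick Monte Carlo check of the handy relationship, using uniformly distributed samples as an assumed example:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=1_000_000)    # assumed example: X ~ uniform(0, 1)

var_direct = np.mean((x - np.mean(x))**2)    # E[(X - E(X))^2]
var_handy = np.mean(x**2) - np.mean(x)**2    # E(X^2) - E^2(X)
print(var_direct, var_handy)                 # both ~1/12 ≈ 0.0833
```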
Joint Probability Density
Let X and Y be two random variables. Their joint distribution function is
$$F_{XY}(x, y) \equiv P[X \le x \cap Y \le y].$$
$$0 \le F_{XY}(x, y) \le 1, \quad -\infty < x < \infty,\ -\infty < y < \infty$$
$$F_{XY}(x, -\infty) = F_{XY}(-\infty, y) = F_{XY}(-\infty, -\infty) = 0$$
$$F_{XY}(\infty, \infty) = 1$$
$F_{XY}(x, y)$ does not decrease if either x or y increases or both increase.
$$F_{XY}(x, \infty) = F_X(x) \quad \text{and} \quad F_{XY}(\infty, y) = F_Y(y)$$
Joint Probability Density
$$p_{XY}(x, y) = \frac{\partial^2}{\partial x\,\partial y} F_{XY}(x, y)$$
$$p_{XY}(x, y) \ge 0, \quad -\infty < x < \infty,\ -\infty < y < \infty$$
$$F_{XY}(x, y) = \int_{-\infty}^{y}\int_{-\infty}^{x} p_{XY}(\alpha, \beta)\,d\alpha\,d\beta \qquad \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} p_{XY}(x, y)\,dx\,dy = 1$$
$$p_X(x) = \int_{-\infty}^{\infty} p_{XY}(x, y)\,dy \quad \text{and} \quad p_Y(y) = \int_{-\infty}^{\infty} p_{XY}(x, y)\,dx$$
$$P[x_1 < X \le x_2,\ y_1 < Y \le y_2] = \int_{y_1}^{y_2}\int_{x_1}^{x_2} p_{XY}(x, y)\,dx\,dy$$
$$E(g(X, Y)) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x, y)\, p_{XY}(x, y)\,dx\,dy$$
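A numerical sketch of the marginal-density and expectation formulas, assuming the example joint density $p_{XY}(x, y) = x + y$ on the unit square:

```python
import numpy as np

# Assumed example joint PDF: p_XY(x, y) = x + y on the unit square, 0 elsewhere.
dx = dy = 0.001
x = np.arange(0.0, 1.0, dx) + dx / 2     # cell centers for midpoint integration
y = np.arange(0.0, 1.0, dy) + dy / 2
X, Y = np.meshgrid(x, y, indexing="ij")
p_xy = X + Y

print(np.sum(p_xy) * dx * dy)            # ~1: the joint PDF integrates to one
p_x = np.sum(p_xy, axis=1) * dy          # marginal p_X(x); here p_X(x) = x + 1/2
print(p_x[500], x[500] + 0.5)            # the two agree
print(np.sum(X * Y * p_xy) * dx * dy)    # E(XY) = 1/3 for this example
```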
Independent Random Variables
If two random variables X and Y are independent, the expected value of their product is the product of their expected values:
$$E(XY) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, p_{XY}(x, y)\,dx\,dy = \int_{-\infty}^{\infty} x\, p_X(x)\,dx \int_{-\infty}^{\infty} y\, p_Y(y)\,dy = E(X)\,E(Y)$$
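A Monte Carlo sketch of this product rule, with example distributions assumed for X and Y:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(2.0, size=1_000_000)   # assumed example: E(X) = 2
y = rng.uniform(0.0, 1.0, size=1_000_000)  # independent of X, E(Y) = 0.5

print(np.mean(x * y))                      # E(XY) ~ 1.0
print(np.mean(x) * np.mean(y))             # E(X) E(Y) ~ 1.0
```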
Independent Random Variables
Covariance:
$$\sigma_{XY} \equiv E\left[\left(X - E(X)\right)\left(Y - E(Y)\right)^*\right]$$
$$\sigma_{XY} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \left(x - E(X)\right)\left(y^* - E(Y^*)\right) p_{XY}(x, y)\,dx\,dy$$
$$\sigma_{XY} = E(XY^*) - E(X)\,E(Y^*)$$
If X and Y are independent,
$$\sigma_{XY} = E(X)\,E(Y^*) - E(X)\,E(Y^*) = 0.$$
Independent Random Variables
Correlation coefficient:
$$\rho_{XY} = E\left[\frac{X - E(X)}{\sigma_X} \times \frac{Y^* - E(Y^*)}{\sigma_Y}\right]$$
$$\rho_{XY} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \left(\frac{x - E(X)}{\sigma_X}\right)\left(\frac{y^* - E(Y^*)}{\sigma_Y}\right) p_{XY}(x, y)\,dx\,dy$$
$$\rho_{XY} = \frac{E(XY^*) - E(X)\,E(Y^*)}{\sigma_X \sigma_Y} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}$$
If X and Y are independent, $\rho = 0$. If they are perfectly positively correlated, $\rho = +1$, and if they are perfectly negatively correlated, $\rho = -1$.
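A sample-estimate sketch of $\rho_{XY}$ for a constructed real-valued example (not from the slides) in which Y is X plus independent noise:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=1_000_000)
y = x + rng.normal(0.0, 1.0, size=1_000_000)    # partially correlated with x

cov = np.mean((x - x.mean()) * (y - y.mean()))  # sigma_XY (real-valued case)
print(cov / (x.std() * y.std()))                # rho_XY ~ 1/sqrt(2) ≈ 0.707
print(np.corrcoef(x, y)[0, 1])                  # the same estimate via numpy
```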
Independent Random Variables
If two random variables are independent, their covariance is zero. However, if two random variables have zero covariance, that does not mean they are necessarily independent.
Independence $\Rightarrow$ Zero Covariance
Zero Covariance $\nRightarrow$ Independence
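A small counterexample sketch (my own): with X uniform on $(-1, 1)$ and $Y = X^2$, the covariance is essentially zero even though Y is completely determined by X.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x**2                                        # Y is completely determined by X

cov = np.mean((x - x.mean()) * (y - y.mean()))  # sample covariance
print(cov)                                      # ~0: zero covariance, yet not independent
```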
Independent Random Variables
In the traditional jargon of random-variable analysis, two "uncorrelated" random variables have a covariance of zero. Unfortunately, this does not also imply that their correlation is zero. If their correlation is zero they are said to be orthogonal.
X and Y are "uncorrelated" $\Rightarrow \sigma_{XY} = 0$
X and Y are "uncorrelated" $\nRightarrow E(XY) = 0$
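A sketch of the distinction, using two assumed independent variables with nonzero means: their covariance is zero (uncorrelated) while E(XY) is not (not orthogonal).

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(1.0, 1.0, size=1_000_000)        # independent, E(X) = 1
y = rng.normal(2.0, 1.0, size=1_000_000)        # independent, E(Y) = 2

print(np.mean((x - x.mean()) * (y - y.mean()))) # ~0: X and Y are "uncorrelated"
print(np.mean(x * y))                           # ~2, not 0: they are not orthogonal
```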
Independent Random Variables
The variance of a sum of random variables X and Y is
$$\sigma_{X+Y}^2 = \sigma_X^2 + \sigma_Y^2 + 2\sigma_{XY} = \sigma_X^2 + \sigma_Y^2 + 2\rho_{XY}\sigma_X\sigma_Y.$$
If Z is a linear combination of random variables $X_i$,
$$Z = a_0 + \sum_{i=1}^{N} a_i X_i$$
then
$$E(Z) = a_0 + \sum_{i=1}^{N} a_i\,E(X_i)$$
$$\sigma_Z^2 = \sum_{i=1}^{N}\sum_{j=1}^{N} a_i a_j\,\sigma_{X_i X_j} = \sum_{i=1}^{N} a_i^2\,\sigma_{X_i}^2 + \sum_{i=1}^{N}\sum_{\substack{j=1 \\ j \ne i}}^{N} a_i a_j\,\sigma_{X_i X_j}$$
Independent Random Variables
If the X's are all independent of each other, the variance of the linear combination is a linear combination of the variances:
$$\sigma_Z^2 = \sum_{i=1}^{N} a_i^2\,\sigma_{X_i}^2$$
If Z is simply the sum of the X's, and the X's are all independent of each other, then the variance of the sum is the sum of the variances:
$$\sigma_Z^2 = \sum_{i=1}^{N} \sigma_{X_i}^2$$
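A Monte Carlo sketch of the sum-of-variances rule for independent variables (example distributions assumed):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
x1 = rng.uniform(0.0, 1.0, size=n)    # variance 1/12
x2 = rng.exponential(1.0, size=n)     # variance 1
x3 = rng.normal(0.0, 2.0, size=n)     # variance 4

z = x1 + x2 + x3
print(np.var(z))                                # ~1/12 + 1 + 4 ≈ 5.083
print(np.var(x1) + np.var(x2) + np.var(x3))     # sum of the variances, ~5.083
```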
Probability Density of a Sum of Random Variables
Let $Z = X + Y$. Then for Z to be less than z, X must be less than $z - Y$. Therefore, the distribution function for Z is
$$F_Z(z) = \int_{-\infty}^{\infty}\int_{-\infty}^{z-y} p_{XY}(x, y)\,dx\,dy.$$
If X and Y are independent,
$$F_Z(z) = \int_{-\infty}^{\infty} p_Y(y)\left(\int_{-\infty}^{z-y} p_X(x)\,dx\right)dy$$
and it can be shown that
$$p_Z(z) = \int_{-\infty}^{\infty} p_Y(y)\,p_X(z - y)\,dy = p_Y(z) * p_X(z).$$
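A numerical sketch of this convolution, assuming X and Y are independent and uniform on (0, 1); the result is the triangular density on (0, 2):

```python
import numpy as np

dx = 0.001
x = np.arange(0.0, 1.0, dx)
p_x = np.ones_like(x)                 # assumed example: X ~ uniform(0, 1)
p_y = np.ones_like(x)                 # assumed example: Y ~ uniform(0, 1), independent of X

p_z = np.convolve(p_y, p_x) * dx      # p_Z(z) = (p_Y * p_X)(z): triangular on [0, 2]
z = np.arange(len(p_z)) * dx

print(np.trapz(p_z, z))               # ~1: the convolution is still a valid PDF
print(p_z[np.searchsorted(z, 0.5)])   # ~0.5 = p_Z(0.5)
print(p_z[np.searchsorted(z, 1.0)])   # ~1.0 = p_Z(1.0), the peak of the triangle
```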
The Central Limit Theorem
If N independent random variables are added to form a resultant random variable
$$Z = \sum_{n=1}^{N} X_n$$
then
$$p_Z(z) = p_{X_1}(z) * p_{X_2}(z) * p_{X_3}(z) * \cdots * p_{X_N}(z)$$
and it can be shown that, under very general conditions, the PDF of a sum of a large number of independent random variables with continuous PDFs approaches a limiting shape called the Gaussian PDF, regardless of the shapes of the individual PDFs.
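A simulation sketch of the theorem with uniform summands assumed: the histogram of the sum is already close to a Gaussian with matching mean and variance.

```python
import numpy as np

rng = np.random.default_rng(8)
n_terms, trials = 30, 200_000
z = rng.uniform(0.0, 1.0, size=(trials, n_terms)).sum(axis=1)   # sum of 30 uniforms

mu, sigma = n_terms * 0.5, np.sqrt(n_terms / 12.0)   # mean and std of the sum
hist, edges = np.histogram(z, bins=200, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
gauss = np.exp(-(mid - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

print(np.max(np.abs(hist - gauss)))   # small: the histogram is already close to Gaussian
```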
The Central Limit Theorem
The Central Limit Theorem
The Gaussian PDF:
$$p_X(x) = \frac{1}{\sigma_X\sqrt{2\pi}}\, e^{-(x-\mu_X)^2 / 2\sigma_X^2}$$
$$\mu_X = E(X) \quad \text{and} \quad \sigma_X = \sqrt{E\left[\left(X - E(X)\right)^2\right]}$$
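A short sketch evaluating the Gaussian PDF and checking its normalization, mean, and variance numerically; the particular $\mu_X$ and $\sigma_X$ values are just examples.

```python
import numpy as np

mu, sigma = 1.5, 2.0                        # example mean and standard deviation
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
p = np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

print(np.trapz(p, x))                       # ~1: normalization
print(np.trapz(x * p, x))                   # ~mu: E(X)
print(np.trapz((x - mu)**2 * p, x))         # ~sigma^2: variance
```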
The Central Limit Theorem
The Gaussian PDF:
Its maximum value occurs at the mean value of its argument.
It is symmetrical about the mean value.
The points of maximum absolute slope occur at one standard deviation above and below the mean.
Its maximum value is inversely proportional to its standard deviation.
The limit as the standard deviation approaches zero is a unit impulse:
$$\delta(x - \mu_X) = \lim_{\sigma_X \to 0} \frac{1}{\sigma_X\sqrt{2\pi}}\, e^{-(x-\mu_X)^2 / 2\sigma_X^2}$$