Chapter 2: Transformations and Expectations (a recap)

STK4011/9011: Statistical Inference Theory

Johan Pensar
Overview

1. Distributions of Functions of a Random Variable
2. Expected Values
3. Moments and Moment Generating Functions

Covers parts of Sec 2.1–2.3 in CB.
Distributions of Functions of a Random Variable

If $X$ is a random variable, then any function $Y = g(X)$ is also a random variable.

Formally, the function $y = g(x)$ maps the original sample space to a new sample space, $g: \mathcal{X} \to \mathcal{Y}$.

Inverse mapping from $\mathcal{Y}$ to $\mathcal{X}$: $g^{-1}(y) = \{x \in \mathcal{X} : g(x) = y\}$ (or $g^{-1}(y) = x$ if $g$ is one-to-one).

The probability distribution of $Y$ is defined by
$$P(Y \in A) = P(X \in g^{-1}(A)).$$
Distributions of Functions of a Discrete Random Variable

If $X$ is a discrete random variable and $Y = g(X)$, then:

- the sample space $\mathcal{X}$ is countable,
- the sample space $\mathcal{Y} = \{y : y = g(x),\, x \in \mathcal{X}\}$ is countable ($Y$ is a discrete r.v.).

The pmf of $Y$ is obtained by summing over the preimage of $y$ (see the sketch below):
$$f_Y(y) = \sum_{x \in g^{-1}(y)} f_X(x), \quad \text{for } y \in \mathcal{Y},$$
and $f_Y(y) = 0$ for $y \notin \mathcal{Y}$.
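As a concrete illustration, here is a minimal Python sketch (not from the slides; the function name and the example pmf are illustrative) that computes the pmf of $Y = g(X)$ by accumulating $f_X$ over each preimage $g^{-1}(y)$:

```python
from collections import defaultdict

def pmf_of_transform(f_X, g):
    """Given a dict pmf f_X over a countable sample space and a map g,
    return the pmf of Y = g(X) by summing f_X(x) over x in g^{-1}(y)."""
    f_Y = defaultdict(float)
    for x, p in f_X.items():
        f_Y[g(x)] += p          # accumulate mass over the preimage of y
    return dict(f_Y)

# Example: X uniform on {-2, -1, 0, 1, 2}, Y = X^2
f_X = {x: 0.2 for x in range(-2, 3)}
print(pmf_of_transform(f_X, lambda x: x**2))   # {4: 0.4, 1: 0.4, 0: 0.2}
```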
Example: Binomial transformation

Ex 2.1.1: Let $X$ follow a binomial distribution, $X \sim \mathrm{Binomial}(n, p)$:
$$f_X(x) = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0, 1, \ldots, n,$$
where $n$ is a positive integer and $0 \le p \le 1$.

What is the distribution of the random variable $Y = g(X) = n - X$?
Example: Binomial transformation

Solution: for $y \in \{0, 1, \ldots, n\}$,
$$f_Y(y) = P(X = n - y) = \binom{n}{n-y} p^{n-y} (1-p)^{y} = \binom{n}{y} (1-p)^{y} p^{n-y},$$
so $Y \sim \mathrm{Binomial}(n, 1-p)$: $Y$ counts the failures among the $n$ trials.
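A quick numerical check of this result, as a sketch assuming SciPy is available (the values $n = 10$, $p = 0.3$ are arbitrary illustration choices):

```python
import numpy as np
from scipy.stats import binom

n, p = 10, 0.3
y = np.arange(n + 1)
pmf_Y = binom(n, p).pmf(n - y)         # pmf of Y = n - X, via P(X = n - y)
pmf_direct = binom(n, 1 - p).pmf(y)    # claimed distribution Binomial(n, 1-p)
print(np.allclose(pmf_Y, pmf_direct))  # True
```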
Distributions of Functions of a Random Variable

Thm 2.1.3: Let $X$ have cdf $F_X(x)$, let $Y = g(X)$ with $g$ monotone, and let $\mathcal{X} = \{x : f_X(x) > 0\}$ and $\mathcal{Y} = \{y : y = g(x) \text{ for some } x \in \mathcal{X}\}$.

- If $g$ is an increasing function on $\mathcal{X}$, then $F_Y(y) = F_X(g^{-1}(y))$ for $y \in \mathcal{Y}$.
- If $g$ is a decreasing function on $\mathcal{X}$ and $X$ is a continuous random variable, then $F_Y(y) = 1 - F_X(g^{-1}(y))$ for $y \in \mathcal{Y}$.
Distributions of Functions of a Random Variable

Proof idea: if $g$ is increasing, then $g(X) \le y \iff X \le g^{-1}(y)$, so
$$F_Y(y) = P(g(X) \le y) = P(X \le g^{-1}(y)) = F_X(g^{-1}(y)).$$
If $g$ is decreasing, then $g(X) \le y \iff X \ge g^{-1}(y)$, so
$$F_Y(y) = P(X \ge g^{-1}(y)) = 1 - F_X(g^{-1}(y)),$$
where continuity of $X$ ensures that $P(X = g^{-1}(y)) = 0$.
Example: Uniform-exponential relationship

Ex 2.1.4: Let $X$ follow a uniform distribution, $X \sim \mathrm{Uniform}(0, 1)$:
$$f_X(x) = 1, \quad 0 < x < 1.$$

What is the cdf of the random variable $Y = g(X) = -\log(X)$?
Example: Uniform-exponential relationship

Solution: $g(x) = -\log(x)$ is decreasing on $(0, 1)$ with $g^{-1}(y) = e^{-y}$, so Thm 2.1.3 gives, for $y > 0$,
$$F_Y(y) = 1 - F_X(e^{-y}) = 1 - e^{-y},$$
that is, $Y \sim \mathrm{Exponential}(1)$.
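The relationship can also be checked by simulation; a small sketch (assuming NumPy) comparing the empirical cdf of $Y = -\log(X)$ with $1 - e^{-y}$:

```python
import numpy as np

rng = np.random.default_rng(0)
y_samples = -np.log(rng.uniform(size=100_000))   # X ~ Uniform(0,1), Y = -log(X)

for y in (0.5, 1.0, 2.0):
    ecdf = np.mean(y_samples <= y)               # empirical P(Y <= y)
    print(f"y={y}: ecdf={ecdf:.4f}, 1-exp(-y)={1 - np.exp(-y):.4f}")
```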
Distributions of Functions of a Random Variable

Thm 2.1.5: Let $X$ have pdf $f_X(x)$ and let $Y = g(X)$, where $g$ is a monotone function. Further, let $\mathcal{X}$ and $\mathcal{Y}$ be defined as in Thm 2.1.3. Assume that $f_X(x)$ is continuous on $\mathcal{X}$ and that $g^{-1}(y)$ has a continuous derivative on $\mathcal{Y}$. Then, the pdf of $Y$ is given by
$$f_Y(y) = \begin{cases} f_X(g^{-1}(y)) \left| \dfrac{d}{dy} g^{-1}(y) \right|, & y \in \mathcal{Y}, \\ 0, & \text{otherwise.} \end{cases}$$
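The theorem translates directly into code. Below is a minimal sketch (the function names are illustrative, not course code) that builds $f_Y$ from $f_X$, $g^{-1}$, and its derivative, checked on Ex 2.1.4, where $f_Y(y) = 1 \cdot |{-e^{-y}}| = e^{-y}$:

```python
import numpy as np

def transformed_pdf(f_X, g_inv, g_inv_deriv):
    """pdf of Y = g(X) for monotone g: f_X(g^{-1}(y)) * |d/dy g^{-1}(y)|."""
    return lambda y: f_X(g_inv(y)) * np.abs(g_inv_deriv(y))

# Ex 2.1.4: X ~ Uniform(0,1), Y = -log(X), so g^{-1}(y) = e^{-y}
f_X = lambda x: np.where((x > 0) & (x < 1), 1.0, 0.0)
f_Y = transformed_pdf(f_X, lambda y: np.exp(-y), lambda y: -np.exp(-y))
print(f_Y(np.array([0.5, 1.0, 2.0])))   # matches e^{-y}: Exponential(1) pdf
```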
Distributions of Functions of a Random Variable

Thm 2.1.8: Let $X$ have pdf $f_X(x)$, let $Y = g(X)$, and let $\mathcal{X}$ be defined as in Thm 2.1.3. Assume that there exists a partition $A_0, A_1, \ldots, A_k$ of $\mathcal{X}$ such that $P(X \in A_0) = 0$ and $f_X(x)$ is continuous on each $A_i$. Further, assume that there exist functions $g_1(x), \ldots, g_k(x)$, defined on $A_1, \ldots, A_k$, respectively, satisfying:

- $g(x) = g_i(x)$ for $x \in A_i$,
- $g_i(x)$ is monotone on $A_i$,
- the set $\mathcal{Y} = \{y : y = g_i(x) \text{ for some } x \in A_i\}$ is the same for each $i = 1, \ldots, k$,
- $g_i^{-1}(y)$ has a continuous derivative on $\mathcal{Y}$, for each $i = 1, \ldots, k$.

Then, the pdf of $Y$ is given by
$$f_Y(y) = \begin{cases} \displaystyle\sum_{i=1}^{k} f_X(g_i^{-1}(y)) \left| \frac{d}{dy} g_i^{-1}(y) \right|, & y \in \mathcal{Y}, \\ 0, & \text{otherwise.} \end{cases}$$
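A standard illustration of Thm 2.1.8 is $Y = X^2$ with $X \sim N(0, 1)$: the two monotone branches on $A_1 = (-\infty, 0)$ and $A_2 = (0, \infty)$ have $g_i^{-1}(y) = \mp\sqrt{y}$, each with $|d/dy\, g_i^{-1}(y)| = 1/(2\sqrt{y})$, and the sum gives the $\chi^2_1$ pdf. A sketch of the check, assuming SciPy:

```python
import numpy as np
from scipy.stats import norm, chi2

# Y = X^2 with X ~ N(0,1): sum over the two branches as in Thm 2.1.8
y = np.linspace(0.1, 5, 50)
f_Y = (norm.pdf(np.sqrt(y)) + norm.pdf(-np.sqrt(y))) / (2 * np.sqrt(y))
print(np.allclose(f_Y, chi2(1).pdf(y)))   # True: Y ~ chi-square with 1 df
```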
Probability integral transformation

Thm 2.1.10: Let $X$ have a continuous cdf $F_X(x)$. Then, the random variable $Y = F_X(X)$ is uniformly distributed on $(0, 1)$.

Can be used to generate samples of a random variable $X$ (see the sketch below):

1. Generate a uniform random number $u$ from $(0, 1)$.
2. Solve for $x$ in the equation $F_X(x) = u$.
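A minimal inverse-transform sampler (a sketch, not course code), solving $F_X(x) = u$ in closed form for the $\mathrm{Exponential}(\lambda)$ cdf $F_X(x) = 1 - e^{-\lambda x}$, which gives $x = -\log(1-u)/\lambda$:

```python
import numpy as np

def sample_exponential(lam, size, rng):
    u = rng.uniform(size=size)       # step 1: u ~ Uniform(0, 1)
    return -np.log(1 - u) / lam      # step 2: x = F_X^{-1}(u)

rng = np.random.default_rng(0)
x = sample_exponential(lam=2.0, size=100_000, rng=rng)
print(x.mean())   # close to 1/lambda = 0.5
```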
Expected Values

Def 2.2.1: The expected value (or mean) of a random variable $g(X)$ is
$$E[g(X)] = \begin{cases} \displaystyle\int_{-\infty}^{\infty} g(x) f_X(x)\, dx, & \text{if } X \text{ is continuous}, \\ \displaystyle\sum_{x \in \mathcal{X}} g(x) f_X(x), & \text{if } X \text{ is discrete}, \end{cases}$$
provided that the integral or sum exists. If $E[|g(X)|] = \infty$, we say that $E[g(X)]$ does not exist.
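Both cases of the definition can be evaluated numerically; a sketch (assuming SciPy, with illustrative distributions) of each:

```python
from scipy.integrate import quad

# Continuous: X ~ Uniform(0,1), g(x) = x^2, so E[g(X)] = 1/3
val, _ = quad(lambda x: x**2 * 1.0, 0, 1)   # integrate g(x) * f_X(x)
print(val)                                  # 0.3333...

# Discrete: X ~ Binomial(2, 0.5), g(x) = x^2, so E[g(X)] = 1.5
f_X = {0: 0.25, 1: 0.5, 2: 0.25}
print(sum(x**2 * p for x, p in f_X.items()))   # 1.5
```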
Expected Values

Thm 2.2.5: Let $X$ be a random variable and let $a$, $b$ and $c$ be constants. Then, for any functions $g_1(x)$ and $g_2(x)$ whose expectations exist:

- $E(a g_1(X) + b g_2(X) + c) = a E(g_1(X)) + b E(g_2(X)) + c$.
- If $g_1(x) \ge 0$ for all $x$, then $E(g_1(X)) \ge 0$.
- If $g_1(x) \ge g_2(x)$ for all $x$, then $E(g_1(X)) \ge E(g_2(X))$.
- If $a \le g_1(x) \le b$ for all $x$, then $a \le E(g_1(X)) \le b$.
Moments

Def 2.3.1: For each integer $n$ and a random variable $X$:

- The $n$:th moment of $X$ is $\mu'_n = E(X^n)$.
- The $n$:th central moment of $X$ is $\mu_n = E((X - \mu)^n)$, where $\mu = \mu'_1 = E(X)$.

Def 2.3.2: The variance of a random variable $X$ is its second central moment:
$$\mathrm{Var}(X) = E((X - \mu)^2) = E(X^2) - E(X)^2.$$

Thm 2.3.4: If $X$ is a random variable with finite variance, then for any constants $a$ and $b$:
$$\mathrm{Var}(aX + b) = a^2\, \mathrm{Var}(X).$$
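Thm 2.3.4 says that the shift $b$ does not affect spread, while the scale $a$ enters squared; a quick simulation sketch (the values $a = 3$, $b = -2$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200_000)
a, b = 3.0, -2.0
print(np.var(a * x + b), a**2 * np.var(x))   # nearly identical
```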
Moment Generating Function

Def 2.3.6: Let $X$ be a random variable with cdf $F_X$. The moment generating function (mgf) of $X$ is
$$M_X(t) = E(e^{tX}),$$
provided that the expectation exists for $t$ in some neighborhood of 0, that is, there is an $h > 0$ such that the mgf exists for $-h < t < h$. If not, we say that the mgf does not exist.
Moment Generating Function

Thm 2.3.7: If $X$ has mgf $M_X(t)$, then
$$E(X^n) = M_X^{(n)}(0),$$
where we define
$$M_X^{(n)}(0) = \left. \frac{d^n}{dt^n} M_X(t) \right|_{t=0}.$$

The $n$:th moment is equal to the $n$:th derivative of the mgf evaluated at $t = 0$.

Although the mgf can be used to generate moments, its main use is in characterizing distributions (see Thms 2.3.11–2.3.12).
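Thm 2.3.7 in practice, as a symbolic sketch (assuming SymPy): for $X \sim \mathrm{Exponential}(1)$ the mgf is $M_X(t) = 1/(1-t)$ for $t < 1$, and the $n$:th moment should come out as $n!$:

```python
import sympy as sp

t = sp.symbols('t')
M = 1 / (1 - t)                     # mgf of Exponential(1), valid for t < 1
for n in range(1, 5):
    moment = sp.diff(M, t, n).subs(t, 0)   # n:th derivative at t = 0
    print(n, moment, sp.factorial(n))      # moment equals n!
```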