

  1. Introduction
Ping Yu, School of Economics and Finance, The University of Hong Kong
Ping Yu (HKU) Introduction 1 / 31

  2. Course Information
Time: 9:30-10:45am and 11:00-12:15pm on Saturday.
Location: KKLG103
Office Hour: 2:00-3:00pm on Friday, KKL1108 (extra?)
Textbook: My lecture notes posted on Moodle.
- Others: Hayashi (2000), Ruud (2000), Cameron and Trivedi (2005), Hansen (2007) and Wooldridge (2010).
- References: no need to read unless necessary.
Exercises: no need to turn in, but necessary for passing this course.
- Solve the associated analytical exercises in the lecture notes covered during the week; I will post answer keys to these exercises just before the next class (remind me if I do not).
- At the end of each chapter, there are one or two empirical exercises. Do them only after the whole chapter is finished. Use matrix programming languages such as Matlab or Gauss; do not use Stata-like software.
Evaluation: Midterm Test (40%), Final Exam (60%). The exams are open-book and open-note.

  3. What is Econometrics? What Will This Course Cover?
Ragnar Frisch (1933): the unification of statistics, economic theory, and mathematics.
Linear regression and its extensions.
The objectives of econometrics and microeconometrics, and the role of economic theory in econometrics.
Main econometric approaches.
We will concentrate on linear models, i.e., linear regression and linear GMM, in this course. Nonlinear models are discussed only briefly.
Sections, proofs, exercises, paragraphs or footnotes indexed by * are optional and will not be covered in this course.
I may omit or add materials beyond my notes (depending on your backgrounds and time constraints). Just follow my slides and read the corresponding sections.

  4. Linear Regression and Its Extensions

  5. Linear Regression and Its Extensions
Return to Schooling: Our Starting Point
Suppose we observe $\{(y_i, x_i)\}_{i=1}^n$, where $y_i$ is the wage rate, $x_i$ includes education and experience, and the target is to study the return to schooling, i.e., the relationship between $y_i$ and $x_i$.
The most general model is
$$y = m(x, u), \quad (1)$$
where $x = (x_1, x_2)'$ with $x_1$ being education and $x_2$ being experience, $u$ is a vector of unobservable errors (e.g., innate ability, skill, quality of education, work ethic, interpersonal connections, preferences, and family background), which may be correlated with $x$ (why?), and $m(\cdot)$ can be any (nonlinear) function.
To simplify our discussion, suppose $u$ is one-dimensional and represents the ability of individuals.
Notation: real numbers are written in lowercase italics. Vectors are defined as column vectors and written in lowercase bold.
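As a concrete illustration of the sampling scheme $\{(y_i, x_i)\}_{i=1}^n$ and of why ability and schooling may be correlated, here is a minimal simulation sketch in Python (the course itself suggests Matlab or Gauss); the functional form of $m(\cdot)$ and all parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data-generating process: unobserved ability u shifts
# education x1 (abler individuals acquire more schooling), which is one
# reason u and x may be correlated in the model y = m(x, u).
u = rng.normal(size=n)                    # unobserved ability
x1 = 12 + 2 * u + rng.normal(size=n)      # education (years)
x2 = rng.uniform(0, 20, size=n)           # experience (years)

def m(x1, x2, u):
    # an arbitrary choice of m(.), for illustration only
    return 1.0 + 0.08 * x1 + 0.03 * x2 - 0.0005 * x2**2 + 0.5 * u

y = m(x1, x2, u)                          # observed wage (log wage, say)

# education and ability are strongly correlated by construction
print(np.corrcoef(x1, u)[0, 1])
```

The econometrician observes only $(y_i, x_i)$; the draw of $u$ above exists only inside the simulation, which is exactly what makes its correlation with $x$ a problem.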

  6. Linear Regression and Its Extensions
Nonadditively Separable Nonparametric Model
In (1), the return to schooling is $\frac{\partial m(x_1, x_2, u)}{\partial x_1}$, which depends on the levels of $x_1$ and $x_2$ and also on $u$.
In other words, for different levels of education, the returns to schooling are different; furthermore, for different levels of experience (which is observable) and ability (which is unobservable), the returns to schooling are also different.
This model is called the NSNM since $u$ is not additively separable.
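The dependence of the return on $(x_1, x_2, u)$ can be seen numerically by differentiating a toy nonseparable $m(\cdot)$ with respect to $x_1$; the particular function and evaluation points below are hypothetical.

```python
# Hypothetical nonseparable m: the interaction and quadratic terms make
# the return to schooling depend on x1, x2, and u jointly.
def m(x1, x2, u):
    return 0.05 * x1 + 0.01 * x1 * x2 + 0.03 * x1 * u + 0.001 * x1**2

def return_to_schooling(x1, x2, u, h=1e-5):
    # central-difference approximation to the partial derivative of m
    # with respect to x1, holding x2 and u fixed
    return (m(x1 + h, x2, u) - m(x1 - h, x2, u)) / (2 * h)

print(return_to_schooling(12, 5, 0.0))   # baseline
print(return_to_schooling(12, 10, 0.0))  # changes with experience x2
print(return_to_schooling(12, 5, 1.0))   # changes with ability u
```

The three printed values differ, which is precisely the NSNM feature: no single number summarizes "the" return to schooling.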

  7. Linear Regression and Its Extensions
Additively Separable Nonparametric Model
ASNM: $y = m(x) + u$. In this model, the return to schooling is $\frac{\partial m(x_1, x_2)}{\partial x_1}$, which depends only on observables.
A special case of this model is the additively separable model (ASM), where $m(x) = m_1(x_1) + m_2(x_2)$. In this case, the return to schooling is $\frac{\partial m_1(x_1)}{\partial x_1}$, which depends only on $x_1$.

  8. Linear Regression and Its Extensions
Random Coefficient Model
There is also the case where the return to schooling depends on the unobservable but not on other covariates. For example, suppose
$$y = \alpha(u) + m_1(x_1)\beta_1(u) + m_2(x_2)\beta_2(u);$$
then the return to schooling is $\frac{\partial m_1(x_1)}{\partial x_1}\beta_1(u)$, which does not depend on $x_2$ but depends on $x_1$ and $u$.
A special case is the RCM, where $m_1(x_1) = x_1$ and $m_2(x_2) = x_2$. In this case, the return to schooling is $\beta_1(u)$, which depends only on $u$.

  9. Linear Regression and Its Extensions
Varying Coefficient Model
The return to schooling may depend only on $x_2$ and $u$. For example, if $y = \alpha(x_2, u) + x_1\beta_1(x_2, u)$, then the return to schooling is $\beta_1(x_2, u)$, which does not depend on $x_1$.
A special case is the VCM, where $y = \alpha(x_2) + x_1\beta_1(x_2) + u$, and the return to schooling is $\beta_1(x_2)$, depending only on $x_2$.

  10. Linear Regression and Its Extensions
Linear Regression Model
When the return to schooling depends on neither $(x_1, x_2)$ nor $u$, we get the LRM,
$$y = \alpha + x_1\beta_1 + x_2\beta_2 + u \equiv x'\beta + u,$$
where $x \equiv (1, x_1, x_2)'$, $\beta \equiv (\alpha, \beta_1, \beta_2)'$, and the return to schooling is $\beta_1$, which is constant.
Summary:

Depends on   NSNM  ASNM   ?    ?   ASM  VCM  RCM  LRM
x1            X     X     X         X
x2            X     X          X         X
u             X           X    X              X

Table 1: Models Based on What the Return to Schooling Depends on

Other popular models:
- The VCM can be simplified to the partially linear model (PLM), where $y = \alpha(x_2) + x_1\beta_1 + u$.
- Combining the LRM and the ASNM, we get the single index model (SIM), where $y = m(x'\beta) + u$.
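Because the return to schooling in the LRM is the constant $\beta_1$, it can be estimated by ordinary least squares. A minimal simulation sketch, assuming exogenous regressors and made-up coefficient values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Simulated LRM with exogenous regressors (u drawn independently of x);
# the true coefficients below are made up for illustration.
x1 = rng.normal(12, 2, size=n)        # education
x2 = rng.uniform(0, 20, size=n)       # experience
u = rng.normal(0, 0.3, size=n)
y = 0.5 + 0.09 * x1 + 0.02 * x2 + u   # true return to schooling: 0.09

# OLS: regress y on x = (1, x1, x2)'
X = np.column_stack([np.ones(n), x1, x2])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_hat)   # estimates of (alpha, beta1, beta2)
```

With $u$ independent of $x$ (the exogenous case of the next slide), the OLS slope on $x_1$ recovers the constant return to schooling; if $u$ were correlated with $x_1$, it would not.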

  11. Linear Regression and Its Extensions
Other Dimensions
$x$ and $u$ uncorrelated (or even independent) versus $x$ and $u$ correlated. In the former case, $x$ is called exogenous; in the latter case, $x$ is called endogenous.
Limited dependent variables (LDV): partial information about $y$ is missing.
Single equation versus multiple equations.
Different characteristics of the conditional distribution of $y$ given $x$:
- Conditional mean or conditional expectation function (CEF):
$$m(x) = E[y|x] = \int y f(y|x)\,dy = \int m(x, u) f(u|x)\,du,$$
where $f(y|x)$ is the conditional probability density function (pdf) or the conditional probability mass function (pmf) of $y$ given $x$.
- Conditional quantile:
$$Q_\tau(x) = \inf\{y \mid F(y|x) \geq \tau\}, \quad \tau \in (0, 1),$$
where $F(y|x)$ is the conditional cumulative distribution function (cdf) of $y$ given $x$. In particular, $Q_{.5}(x)$ is the conditional median.
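The empirical counterparts of the CEF and the conditional quantile are easy to compute when $x$ is discrete. A sketch with a binary conditioning variable and made-up lognormal wages:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Binary conditioning variable and lognormal wages; the location shift
# 0.3 and scale 0.5 are made up for illustration.
x = rng.integers(0, 2, size=n)
y = np.exp(rng.normal(2.0 + 0.3 * x, 0.5, size=n))

for g in (0, 1):
    yg = y[x == g]
    cef = yg.mean()               # empirical E[y | x = g]
    q50 = np.quantile(yg, 0.5)    # empirical Q_.5(g), the conditional median
    print(g, cef, q50)            # mean exceeds median: right-skewed wages
```

For each value of $x$, the sample mean and sample median of the corresponding $y$'s estimate $m(x)$ and $Q_{.5}(x)$; with continuous $x$, local averaging or quantile regression would replace the group means.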

  12. Linear Regression and Its Extensions
What Will We Cover?
- Conditional variance: $\sigma^2(x) = Var(y|x) = E[(y - m(x))^2 | x]$, which measures the dispersion of $f(y|x)$.
- Conditional skewness: $E\left[\left(\frac{y - m(x)}{\sigma(x)}\right)^3 \Big| x\right]$, which measures the asymmetry of $f(y|x)$.
- Conditional kurtosis: $E\left[\left(\frac{y - m(x)}{\sigma(x)}\right)^4 \Big| x\right]$, which measures the heavy-tailedness of $f(y|x)$.
[Coverage table: topics for both the LRM and LDV under the conditional mean, split by exogeneity vs. endogeneity and by single vs. multiple equations; the chapter markers in the original slide are not recoverable.]
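These standardized conditional moments can be computed directly from a sample drawn from $f(y|x)$ at a fixed $x$. A sketch using a lognormal stand-in with hypothetical parameters, a distribution that is right-skewed and heavier-tailed than the normal:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Sample standing in for f(y|x) at one fixed x; the lognormal
# parameters (2.0, 0.5) are made up for illustration.
y = np.exp(rng.normal(2.0, 0.5, size=n))

m = y.mean()                  # conditional mean m(x)
sigma = y.std()               # conditional standard deviation sigma(x)
z = (y - m) / sigma           # standardized deviations

skew = np.mean(z**3)          # conditional skewness (> 0: right-skewed)
kurt = np.mean(z**4)          # conditional kurtosis (normal benchmark: 3)
print(skew, kurt)
```

A positive skewness and a kurtosis above 3 are exactly the patterns the wage-density figure on the next slides exhibits.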

  13. Linear Regression and Its Extensions
A Real Example
[Figure: Wage Densities for Male and Female from the 1985 CPS. Density estimates f(Wage|Female) and f(Wage|Male), with the conditional mean and conditional median of each marked.]

  14. Linear Regression and Its Extensions
What Can We Get From the Figure?
These are conditional densities: the density of hourly wages conditional on gender.
First, both the mean and the median of the male wage are larger than those of the female wage.
Second, for both male and female wages, the median is less than the mean, which indicates that the wage distributions are positively skewed. This is corroborated by the fact that the skewness of both male and female wages is greater than zero (1.0 and 2.9, respectively).
Third, the variance of the male wage (27.9) is greater than that of the female wage (22.4).
Fourth, the right tail of the male wage is heavier than that of the female wage.

  15. Econometrics, Microeconometrics and Economic Theory
