Class Note for ECON 520 with Professor Hirano at UA
These 6 pages of class notes were uploaded on Friday, February 6, 2015, for a course at the University of Arizona taught in Fall.
Lecture Note 2: Extremum Estimators and GMM
Based on Newey and McFadden (1994).

Extremum Estimator: $\hat\theta$ maximizes $\hat Q_n(\theta)$ over $\theta \in \Theta$. Here $\Theta$ is the set of possible parameter values. We use a hat in $\hat Q_n$ to indicate that the objective function can depend on data, and the $n$ subscript indicates that it can depend on sample size. It turns out that many estimators can be viewed as extremum estimators. We'll look at a number of examples next.

Maximum Likelihood Estimator: Let $z_1, z_2, \ldots, z_n$ be IID with PDF $f(z|\theta_0)$ for $\theta_0 \in \Theta$. Then the MLE solves

$$\max_\theta \frac{1}{n}\sum_{i=1}^n \log f(z_i|\theta).$$

So this is an extremum estimator with

$$\hat Q_n(\theta) = \frac{1}{n}\sum_{i=1}^n \log f(z_i|\theta).$$

We can also handle conditional MLE. Suppose $z_i = (y_i, x_i)$, and let $f(y_i|x_i;\theta)$ denote the conditional density of $y_i$ given $x_i$.

Example: logit regression. Suppose $y_i$ is binary with

$$P(y_i = 1 \mid x_i) = \frac{\exp(x_i'\beta)}{1+\exp(x_i'\beta)}.$$

Then the conditional likelihood function is

$$P(y_1,\ldots,y_n \mid x_1,\ldots,x_n;\beta) = \prod_{i=1}^n \left(\frac{\exp(x_i'\beta)}{1+\exp(x_i'\beta)}\right)^{y_i}\left(\frac{1}{1+\exp(x_i'\beta)}\right)^{1-y_i}.$$

The MLE $\hat\beta$ maximizes the likelihood. Equivalently, we can maximize the log likelihood

$$\log P(y_1,\ldots,y_n \mid x_1,\ldots,x_n;\beta) = \sum_{i=1}^n\left[y_i\log\frac{\exp(x_i'\beta)}{1+\exp(x_i'\beta)} + (1-y_i)\log\frac{1}{1+\exp(x_i'\beta)}\right].$$

We can also multiply the log likelihood by $1/n$ to get

$$\hat Q_n(\beta) = \frac{1}{n}\sum_{i=1}^n\left[y_i\log\frac{\exp(x_i'\beta)}{1+\exp(x_i'\beta)} + (1-y_i)\log\frac{1}{1+\exp(x_i'\beta)}\right],$$

and the MLE is $\hat\beta = \arg\max_\beta \hat Q_n(\beta)$. Although in most cases we cannot solve for the value of $\hat\beta$ by hand, the log likelihood function can be shown to be globally concave, so relatively simple numerical methods can be used to find the MLE.

Nonlinear Least Squares: Let $z_i = (y_i, x_i)$, and suppose that $E[y_i|x_i] = h(x_i,\theta_0)$. For example, we might have $E[y_i|x_i] = \exp(x_i'\theta_0)$. The nonlinear least squares (NLS) estimator minimizes

$$\sum_{i=1}^n \left(y_i - h(x_i,\theta)\right)^2.$$

This is equivalent to maximizing

$$\hat Q_n(\theta) = -\frac{1}{n}\sum_{i=1}^n \left(y_i - h(x_i,\theta)\right)^2.$$

LAD Estimator: Suppose that $y$ is a random variable. The median of $y$, denoted $\mathrm{Med}(y)$, is any number $c$ such that $P(y \le c) \ge 1/2$ and $P(y \ge c) \ge 1/2$. The idea is that half the probability mass of $y$ is above $c$ and half the probability mass is below $c$. A useful result is that the median of $y$ solves

$$\min_c E\left[|y - c|\right].$$

Suppose that the median of $y$ given $x$ has the form $\mathrm{Med}(y|x) = m(x,\theta_0)$ for a known function $m$. For example, if the conditional median is a linear function of $x$, $\mathrm{Med}(y|x) = x'\theta_0$.
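As a concrete illustration of "simple numerical methods," here is a minimal sketch of computing the logit MLE by maximizing the average log likelihood numerically. The sample size, true coefficient, and all variable names are illustrative assumptions, with data simulated for the example:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated logit data; beta_true and n are assumptions for illustration.
rng = np.random.default_rng(0)
n = 5000
beta_true = np.array([0.5, -1.0])
x = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
p = 1.0 / (1.0 + np.exp(-x @ beta_true))
y = (rng.uniform(size=n) < p).astype(float)

def neg_avg_loglik(beta):
    # -Q_n(beta): note y*x'b - log(1+exp(x'b)) equals
    # y*log(p) + (1-y)*log(1-p) for the logit p.
    xb = x @ beta
    return -np.mean(y * xb - np.logaddexp(0.0, xb))

# The average log likelihood is globally concave, so a generic
# quasi-Newton method started at zero finds the unique maximizer.
beta_hat = minimize(neg_avg_loglik, x0=np.zeros(2), method="BFGS").x
```

With this sample size, `beta_hat` should land close to `beta_true`; the same template (simulate or load data, code $\hat Q_n$, hand it to a numerical optimizer) applies to the NLS and LAD estimators as well.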
A natural estimator based on a random sample $(y_i, x_i)$, $i = 1, 2, \ldots, n$, is the least absolute deviations (LAD) estimator, defined as

$$\hat\theta = \arg\min_\theta \frac{1}{n}\sum_{i=1}^n |y_i - m(x_i,\theta)|,$$

or, equivalently, as maximizing the negative of the sum of absolute deviations.

GMM Estimator: Suppose we have a moment function $g(z,\theta)$ such that, at the true $\theta_0$,

$$E[g(z,\theta_0)] = 0.$$

Here $g(z,\theta)$ is a $k$-vector-valued function, so the preceding display should be interpreted as $k$ equality restrictions. Let $\hat W$ be a positive semidefinite matrix, so that for a vector $m$, $(m'\hat W m)^{1/2}$ can be thought of as a distance of the vector $m$ from $0$. The GMM estimator maximizes

$$\hat Q_n(\theta) = -\left[\frac{1}{n}\sum_{i=1}^n g(z_i,\theta)\right]' \hat W \left[\frac{1}{n}\sum_{i=1}^n g(z_i,\theta)\right].$$

The interpretation is that $\hat\theta$ tries to minimize the distance of $\frac{1}{n}\sum_{i=1}^n g(z_i,\theta)$ from $0$; that is, the GMM estimator tries to set the sample version of $E[g(z,\theta)]$ as close as possible to $0$.

Example of GMM: Linear IV. Suppose $z_i = (y_i, x_i, v_i)$, where $y_i = x_i'\theta_0 + \varepsilon_i$, but $x_i$ may be correlated with $\varepsilon_i$. The variable $v_i$ is an instrumental variable: $E[v_i\varepsilon_i] = 0$. Rewrite this as

$$E\left[v_i(y_i - x_i'\theta_0)\right] = 0.$$

So we can take $g(z,\theta) = v(y - x'\theta)$. Suppose that both $v$ and $x$ are $k$-vectors. Then $g(z,\theta)$ is $k \times 1$, as is $\theta$. So we can typically find a value $\hat\theta$ that exactly solves

$$\frac{1}{n}\sum_{i=1}^n g(z_i,\hat\theta) = 0.$$

Then this $\hat\theta$ will be the solution to the GMM problem for any weighting matrix $\hat W$. If $\dim(v) > \dim(x)$, however, it will typically not be possible to choose a $\theta$ that sets $\frac{1}{n}\sum_{i=1}^n g(z_i,\theta)$ exactly equal to $0$, and different choices for $\hat W$ will lead to different solutions to the GMM problem. A popular choice for $\hat W$ is

$$\hat W = \left[\frac{1}{n}\sum_{i=1}^n v_i v_i'\right]^{-1}.$$

The solution for this weighting matrix turns out to be the two-stage least squares (TSLS) estimator. Exercise: show that this choice for $\hat W$ leads to TSLS.

Another example of GMM: Euler Equations (Hansen and Singleton, 1982). Suppose a consumer chooses a consumption stream $c_1, \ldots, c_T$ to maximize expected utility under a constant relative risk aversion utility function

$$u(c) = \frac{c^{1-\gamma} - 1}{1-\gamma}.$$

The consumer's maximization problem is

$$\max_{c_1,\ldots,c_T} E\left[\sum_{t=1}^T \beta^t u(c_t)\right],$$

subject to a dynamic budget constraint. Here $\beta$ is the rate of time preference. The first-order (Euler) conditions for a maximum are

$$E\left[\beta(1+r_{t+1})\left(\frac{c_{t+1}}{c_t}\right)^{-\gamma} - 1 \,\Big|\, I_t\right] = 0 \quad \text{for all } t,$$

where $r_{t+1}$ is the return on savings between $t$ and $t+1$, and $I_t$ denotes the information available to the agent at time $t$.
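The Euler condition can be motivated in one line. Assuming (as is standard in this model) that $r_{t+1}$ is the return on an asset the consumer can trade between $t$ and $t+1$, at an optimum the consumer cannot gain by shifting a marginal unit of consumption across periods:

```latex
% With u(c) = (c^{1-\gamma} - 1)/(1-\gamma), marginal utility is u'(c) = c^{-\gamma}.
% Giving up one unit of consumption at t yields (1 + r_{t+1}) units at t+1,
% so at an optimum the marginal cost equals the expected discounted marginal benefit:
c_t^{-\gamma} = E\left[\beta\,(1+r_{t+1})\,c_{t+1}^{-\gamma} \mid I_t\right]
\quad\Longrightarrow\quad
E\left[\beta\,(1+r_{t+1})\left(\frac{c_{t+1}}{c_t}\right)^{-\gamma} - 1 \,\Big|\, I_t\right] = 0,
```

where the implication follows from dividing both sides by $c_t^{-\gamma}$, which is known at time $t$ and so can be moved inside the conditional expectation.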
Let $\theta = (\beta, \gamma)'$ and

$$\rho(z_t,\theta) = \beta(1+r_{t+1})\left(\frac{c_{t+1}}{c_t}\right)^{-\gamma} - 1.$$

So we have $E[\rho(z_t,\theta_0) \mid I_t] = 0$. Now suppose $x_t$ are variables that are included in the information set at time $t$. For example, $x_t$ could include lagged values of the consumption variable and other lagged variables such as income. Then

$$E\left[x_t\,\rho(z_t,\theta_0)\right] = 0.$$

So we can define $g(z_t,\theta) = x_t\,\rho(z_t,\theta)$, and we have that $E[g(z_t,\theta_0)] = 0$. This is a nonlinear version of the previous instrumental variables problem.

Consistency of Extremum Estimators: As a heuristic example, consider the maximum likelihood estimator

$$\hat\theta = \arg\max_\theta \frac{1}{n}\sum_{i=1}^n \log f(z_i|\theta).$$

The criterion function has the form of a sample average and, by a law of large numbers, converges in probability to

$$Q_0(\theta) = E[\log f(z|\theta)] = \int \log f(z|\theta)\, f(z|\theta_0)\, dz.$$

You may recall from Econ 520 that $Q_0(\theta) = E[\log f(z|\theta)]$ is maximized at the true value of $\theta$. When the parameter is identified, $\theta_0$ is the unique maximizer of $Q_0(\theta)$, i.e.,

$$\theta_0 = \arg\max_\theta Q_0(\theta).$$

So it seems plausible that the maximizer of $\hat Q_n(\theta)$ would converge to the maximizer of $Q_0(\theta)$. For this to hold we need some further conditions, in particular a uniform notion of convergence in probability.

Uniform convergence in probability: the function $\hat Q_n(\theta)$ converges uniformly in probability to $Q_0(\theta)$ if

$$\sup_{\theta\in\Theta} \left|\hat Q_n(\theta) - Q_0(\theta)\right| \xrightarrow{p} 0.$$

Theorem: if there is a function $Q_0(\theta)$ such that (i) $Q_0(\theta)$ is uniquely maximized at $\theta_0$; (ii) $\Theta$ is compact; (iii) $Q_0(\theta)$ is continuous; and (iv) $\hat Q_n(\theta)$ converges uniformly in probability to $Q_0(\theta)$; then $\hat\theta \xrightarrow{p} \theta_0$. Proof: see Newey and McFadden.

Remark: in some cases, particularly for simulation-based estimators, it is not always feasible to find $\hat\theta$, the exact maximizer of $\hat Q_n$. Suppose we instead have a near maximizer, in the sense that $\hat\theta$ satisfies

$$\hat Q_n(\hat\theta) \ge \sup_{\theta\in\Theta} \hat Q_n(\theta) - o_p(1).$$

Then the previous theorem's conclusion continues to hold.

Some of the conditions in the theorem above are straightforward to check; for example, compactness of the parameter space is usually assumed. Others require more work. To show conditions (iii) and (iv), the following "uniform law of large numbers" is very handy.

Uniform Law of Large Numbers: suppose that the $z_i$ are IID, $\Theta$ is compact, and we are given a function $a(z,\theta)$ such that:
(i) for each $\theta \in \Theta$, $a(z,\theta)$ is continuous at $\theta$ with probability one; and (ii) there is a function $d(z)$ with $\|a(z,\theta)\| \le d(z)$ for all $\theta \in \Theta$ and $E[d(z)] < \infty$. Then $E[a(z,\theta)]$ is continuous in $\theta$, and

$$\sup_{\theta\in\Theta}\left\|\frac{1}{n}\sum_{i=1}^n a(z_i,\theta) - E[a(z,\theta)]\right\| \xrightarrow{p} 0.$$
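To make the just-identified linear-IV example above concrete, here is a minimal simulation sketch. The data-generating process, coefficient value, and all variable names are illustrative assumptions; with $\dim(v) = \dim(x) = 1$, the GMM estimate sets the single sample moment exactly to zero and does not depend on $\hat W$:

```python
import numpy as np

# Just-identified linear IV as GMM: one instrument, one regressor, so
# theta_hat solves (1/n) * sum v_i * (y_i - x_i * theta) = 0 exactly.
# Simulated data; theta0 and the design are assumptions for illustration.
rng = np.random.default_rng(1)
n, theta0 = 10_000, 2.0
v = rng.normal(size=n)                        # instrument, uncorrelated with u
u = rng.normal(size=n)                        # structural error
x = 0.8 * v + 0.5 * u + rng.normal(size=n)    # regressor, correlated with u
y = theta0 * x + u

theta_hat = (v @ y) / (v @ x)                 # solves the single moment equation
sample_moment = np.mean(v * (y - x * theta_hat))   # zero up to rounding
theta_ols = (x @ y) / (x @ x)                 # OLS, inconsistent under E[xu] != 0
```

Here `theta_hat` is close to the true value while `theta_ols` is not, illustrating why the moment condition $E[v\varepsilon] = 0$ rather than $E[x\varepsilon] = 0$ must drive estimation. In the overidentified case $\dim(v) > \dim(x)$, the moments cannot all be set to zero and the choice of $\hat W$ matters; the weighting $\hat W = [\frac{1}{n}\sum v_i v_i']^{-1}$ from the text yields TSLS.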