
# ECONOMETRICS I (ECO 5314)



These 80 pages of class notes were uploaded by Kaden Orn on Thursday, October 22, 2015. The notes belong to ECO 5314 at Texas Tech University, taught by Staff in Fall. Since the upload, they have received 39 views.



### Maximum likelihood estimation

ECO 5314, Dr. Peter M. Summers, Texas Tech University, October 27, 2008

#### Lagrange Multiplier (score) test

- Consider the constrained optimization problem

$$\max_\theta \; \ln L(\theta_1, \theta_2) + \lambda'(\theta_2 - q),$$

where the restrictions apply to a subset $\theta_2$ of $\theta$.
- The first-order conditions imply

$$\left.\frac{\partial \ln L}{\partial \theta_2}\right|_{\theta_2 = q} = \hat\lambda = \sum_{i=1}^n s_i(\tilde\theta),$$

where $\tilde\theta$ is the restricted estimate.
- So the LM test measures the extent to which the first-order conditions are violated at $\theta_2 = q$.
- Using the outer product to estimate $V(\hat\theta)$, the LM statistic is

$$LM = \left(\sum_{i=1}^n s_i\right)'\left(\sum_{i=1}^n s_i s_i'\right)^{-1}\left(\sum_{i=1}^n s_i\right),$$

where $s_i = s_i(\tilde\theta)$.
- Write the matrix of score contributions as $S = [s_1, \dots, s_n]'$, so that $\sum_i s_i = S'\iota$, where $\iota$ is a column of ones. The LM statistic becomes

$$LM = \iota' S (S'S)^{-1} S' \iota.$$

- Consider regressing a column of ones $\iota$ on the columns of $S$. The OLS estimates are $(S'S)^{-1}S'\iota$, and $ESS = \iota' S(S'S)^{-1}S'\iota$ (exercise).
- So $LM = nR^2$, where the (uncentered) $R^2$ is from this auxiliary regression.

### Discrete Choice Models

ECO 5314, Dr. Peter M. Summers, November 12, 2008

#### Readings

- Greene, Ch. 23
- Verbeek, Ch. 7
- Articles on my web site:
  - Alan Krueger and Jitka Maleckova, "Education, Poverty and Terrorism: Is There a Causal Connection?", *Journal of Economic Perspectives*, Fall 2003, 119-144
  - "The Relationship between Economic Status and Child Health: Evidence from the United States" by Simon Condliffe and Charles R. Link, *American Economic Review*, Sept. 2008, 1605-1618

#### Binary Y variable

- So far, binary (dummy, categorical) variables have only shown up as X's: gender, region, etc.
- What about a binary Y? Examples:
  - Y: get into college or not; X: high school GPA, family income
  - Y: smoke or not; X: income
  - Y: mortgage application denied; X: race, financial variables
  - Y: died in a Hezbollah military event; X: education, income, age, area of residence
  - Y: child in poor health now, given a history of poor health?

#### Example: Boston HMDA data

- Individual applications for single-family mortgages made in 1990 in the greater Boston area
- 2,380 observations, collected under the Home Mortgage Disclosure Act (HMDA)
- See http://calculatedrisk.blogspot.com/2007/10/hmda-data-on-high-priced-loans.html for excellent background
- Variables:
  - Y: is the mortgage denied (Y = 1) or accepted?
  - X: income, wealth, employment status
  - X: other loan & property info, e.g. size of the loan relative to house value
  - X: applicant's race

#### Linear probability model (LPM)

Suppose we just do what we've been doing (OLS) and estimate a regression of loan denial on the payment/income (P/I) ratio:

$$\widehat{deny} = -0.091 + 0.559 \times (P/I\ ratio) + 0.177 \times black$$

- This is the linear probability model: $\hat y$ gives the predicted probability of being denied, given the P/I ratio.
- $\hat\beta_1 = 0.559$ gives the increase in probability when the P/I ratio changes.
  - E.g., a P/I increase of 0.1 means the denial probability rises by 5.6 percentage points ($0.1 \times 0.559 = 0.056$).
- African-American applicants are 17.7 percentage points more likely to be denied (t-stat = 7.11).
- The linear probability model has some advantages:
  - Easy to use and interpret
  - Inference is the same as in multiple regression (t-stats, p-values, etc.)
- And some disadvantages:
  - Is linearity a sensible assumption?
  - Predicted values can be less than 0 or greater than 1
  - Heteroskedastic errors

*[Figure: Mortgage denial vs. P/I ratio, showing denied and approved applications with the fitted linear probability model.]*

#### Other limited dependent variable models

- y can only take values from a finite set:
  - Ordered: strongly agree / agree / neutral / disagree / strongly disagree
  - Categorical: Econ major = 1, Business = 2, Engineering = 3, etc.
- Count data: y takes only meaningful integer values, e.g. goals scored by England when playing in Scotland (0, 1, 2, ...)
- Censored or truncated data: y is continuous but not always observed, e.g. expenditure on alcohol or tobacco
- Often there is an implicit underlying utility-maximization problem, with the outcome interpreted as a choice.

#### Probit and logit regression

- Moving away from a linear model, we want to have:
  - $0 \le \Pr(y = 1|x) \le 1$ for all values of $x$
  - $\Pr(y = 1|x)$ increases when $x$ increases, if $\beta_1 > 0$
- The general approach is to model

$$\Pr(\text{event } j \text{ occurs}) = \Pr(y = j) = F(x, \beta).$$

- The 2 most commonly used models are the probit and the logit:
  - Probit: $\Pr(y = 1|x_1, x_2) = \Phi(\beta_0 + \beta_1 x_1 + \beta_2 x_2)$, where $\Phi$ is the standard Normal cdf
  - Logit: $\Pr(y = 1|x_1, x_2) = \dfrac{e^{\beta_0 + \beta_1 x_1 + \beta_2 x_2}}{1 + e^{\beta_0 + \beta_1 x_1 + \beta_2 x_2}}$

#### Maximum likelihood estimation of the probit model

- Coin flipping: $\Pr(Y = 1) = \Pr(\text{heads}) = p$. This is a Bernoulli trial; the Bernoulli distribution is

$$\Pr(Y = y) = p^y (1-p)^{1-y}.$$

- Joint probability of 2 independent coin flips:

$$\Pr(Y_1 = y_1, Y_2 = y_2) = p^{y_1}(1-p)^{1-y_1}\, p^{y_2}(1-p)^{1-y_2} = p^{y_1+y_2}(1-p)^{2-y_1-y_2}.$$

- Joint probability of n independent coin flips:

$$\Pr(Y_1 = y_1, \dots, Y_n = y_n) = \prod_{i=1}^n p^{y_i}(1-p)^{1-y_i} = p^{\sum_i y_i}(1-p)^{n-\sum_i y_i}.$$

- Likelihood of the probit model: same idea, but now $p = p(x_i)$:

$$\Pr(Y_1 = y_1, \dots, Y_n = y_n) = \prod_{i=1}^n p(x_i)^{y_i}\,[1-p(x_i)]^{1-y_i},$$

where $p(x_i) = \Phi(\beta_0 + \beta_1 x_{i1} + \dots + \beta_k x_{ik})$ and

$$\Phi(z) = \int_{-\infty}^{z} (2\pi)^{-1/2} \exp(-u^2/2)\, du.$$

*[Figure: Mortgage denial vs. P/I ratio, with fitted probit and logit models.]*

#### Marginal effects

- Linear model: constant marginal effects, $E[y|x] = x'\beta$, so $\partial E[y|x]/\partial x_j = \beta_j$.
- But in the probability model,

$$E[y|x] = 0 \cdot [1 - F(x'\beta)] + 1 \cdot F(x'\beta) = F(x'\beta),$$

so

$$\frac{\partial E[y|x]}{\partial x_j} = f(x'\beta)\,\beta_j,$$

where $f$ is the pdf corresponding to $F$.

#### Latent regression

- Consider the labor supply decision of a married woman: $y = 1$ if working outside the home, $y = 0$ otherwise.
- Utility maximization suggests she'll take the job if doing so makes her and/or her family better off. Let $y^* = U(\text{job}) - U(\text{no job})$.
- $y^*$ is latent (unobserved), but depends on observable characteristics: number & age of kids, husband's income, age, schooling, etc.
- Latent regression model:

$$y^* = x'\beta + \varepsilon, \qquad y = \begin{cases} 1 & \text{if } y^* > 0, \\ 0 & \text{if } y^* \le 0. \end{cases}$$
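To make the probit likelihood above concrete, here is a minimal sketch in Python that simulates data from the latent-variable model and maximizes the probit log-likelihood numerically; the sample size and coefficient values are invented for illustration (this is not the HMDA data).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated data from the latent-variable model: y* = b0 + b1*x + eps, y = 1[y* > 0]
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([-0.5, 1.0])      # invented values for the illustration
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    # Probit log-likelihood: sum of y*ln(Phi(x'b)) + (1-y)*ln(1 - Phi(x'b))
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

res = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
beta_hat = res.x

# Average marginal effect of x: mean of f(x'b)*b1, with f the standard normal pdf
ame = np.mean(norm.pdf(X @ beta_hat)) * beta_hat[1]
```

With n = 5000 the estimates land close to the coefficients used to generate the data, and the average marginal effect is smaller than the raw coefficient, as the marginal-effects formula $f(x'\beta)\beta_j$ implies.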
### Maximum likelihood estimation

ECO 5314, Dr. Peter M. Summers, Texas Tech University, October 20, 2008

#### Readings

- Verbeek, ch. 6
- Greene, ch. 16
- Hayashi notes on my web page
- Buse, A. (1982), "The likelihood ratio, Wald, and Lagrange multiplier tests: an expository note", *The American Statistician* 36, 153-157

#### Joint probability and likelihood functions

- Sample of n observations on a data vector $y = (y_1, y_2, \dots, y_n)'$.
- The probability distribution (or density) of $y$ is indexed by a finite-dimensional parameter vector $\theta$; the pdf of $y$ is $f(y; \theta)$.
- The set of all possible values of $\theta$ is the parameter space $\Theta \subseteq \mathbb{R}^p$.
- $f(y; \theta)$ specifies a model, which is a set of possible distributions of $y$. The model is correctly specified if the parameter space contains the true parameter vector: $\theta_0 \in \Theta$.
- The likelihood function is the joint pdf, viewed as a function of $\theta$:

$$L(\theta) \equiv f(y; \theta).$$

- The maximum likelihood estimator is the value of $\theta$ that maximizes the (log) likelihood function:

$$\hat\theta_{mle} = \arg\max_{\theta \in \Theta} L(\theta) = \arg\max_{\theta \in \Theta} \ln L(\theta).$$

#### Example: Bernoulli trial

We have a large bin with red and yellow balls and want to estimate $p$, the proportion of red balls. We take a random sample of $N$ balls, with replacement. Let $y_i = 1$ if ball $i$ is red, $y_i = 0$ otherwise, and suppose we get $N_1$ red balls.

- The probability of getting $N_1$ red balls out of $N$ total is

$$\Pr(N_1 \text{ red}, N - N_1 \text{ yellow}) \propto p^{N_1}(1-p)^{N-N_1} = L(p),$$

so

$$\ln L(p) = N_1 \ln p + (N - N_1)\ln(1-p).$$

- So $\hat p_{mle} = N_1/N$.

#### Example: Normal linear regression

We have a sample of size n from a linear regression model $y_i = x_i'\beta + \varepsilon_i$ with $\varepsilon_i \sim N(0, \sigma^2)\ \forall i$, so $\theta = (\beta', \sigma^2)'$.

- For the ith observation,

$$f(y_i|x_i) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(y_i - x_i'\beta)^2}{2\sigma^2}\right).$$

- The conditional likelihood function is

$$L(\theta|y, X) = \prod_{i=1}^n f(y_i|x_i).$$

- The log-likelihood function is

$$\ln L(\theta|y, X) = \sum_{i=1}^n \ln f(y_i|x_i) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - x_i'\beta)^2,$$

or, in matrix form,

$$\ln L(\beta, \sigma^2|y, X) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}(y - X\beta)'(y - X\beta) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{SSR(\beta)}{2\sigma^2}.$$

- Maximizing with respect to $\beta$ means minimizing $SSR(\beta)$, so we get

$$\hat\beta_{mle} = \hat\beta_{ols} = (X'X)^{-1}X'y.$$

- Plug $\hat\beta$ into $\ln L$ to get the concentrated log-likelihood

$$\ln L^{conc}(\sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}e'e,$$

where $e$ are the OLS residuals.
- Now maximize with respect to $\sigma^2$ to get

$$\hat\sigma^2_{mle} = \frac{1}{n}e'e = \frac{SSR}{n} = \frac{n-k}{n}\,s^2_{ols}.$$

- The ML estimator of $\sigma^2$ is biased, but consistent.

#### Score, information matrix, and Cramér-Rao lower bound

- The score vector is the vector of partial derivatives of $\ln L(\theta)$:

$$s(\theta) = \frac{\partial \ln L}{\partial \theta}.$$

- The information matrix is the matrix of second moments of the score, evaluated at the true value $\theta_0$:

$$I(\theta_0) = E[s(\theta_0)s(\theta_0)'].$$

- Cramér-Rao lower bound: under certain regularity conditions (basically, being able to interchange expectation/integration and differentiation), the asymptotic variance of a consistent and asymptotically normal estimator of $\theta$ will be at least as large as the inverse of the information matrix:

$$\text{Var}(\hat\theta) \ge I(\theta_0)^{-1}.$$

- In addition, the information matrix equals minus the expected value of the Hessian matrix of second partials of the log-likelihood, evaluated at the true value $\theta_0$:

$$I(\theta_0) = -E\left[\frac{\partial^2 \ln L}{\partial\theta\,\partial\theta'}\right].$$

#### Asymptotic properties of MLE

As long as $\ln L(\theta)$ satisfies certain regularity conditions, $\hat\theta_{mle}$ will be (Greene, Thm. 16.1):

- Consistent: $\text{plim}\,\hat\theta_{mle} = \theta_0$.
- Asymptotically normal: $\hat\theta_{mle} \overset{a}{\sim} N(\theta_0, I(\theta_0)^{-1})$.
- Asymptotically efficient: $\hat\theta_{mle}$ achieves the Cramér-Rao lower bound.
- Invariant: if $g(\theta)$ is a continuous and continuously differentiable function of $\theta$, then the MLE of $g(\theta_0)$ is $g(\hat\theta_{mle})$.
- Regularity: $\ln L(\theta)$ is well approximated by a 2nd-order Taylor series expansion (Greene, definition 16.3).

#### The asymptotic covariance matrix

Statistical inference requires the covariance matrix of $\hat\theta_{mle}$, $I(\theta_0)^{-1}$. There are 3 possibilities:

- Direct computation (usually infeasible):

$$\hat V(\hat\theta) = I(\hat\theta)^{-1} = \left\{-E\left[\frac{\partial^2 \ln L}{\partial\theta\,\partial\theta'}\right]\right\}^{-1}_{\theta = \hat\theta}$$

- Observed Hessian:

$$\hat V(\hat\theta) = \left[-\left.\frac{\partial^2 \ln L}{\partial\theta\,\partial\theta'}\right|_{\theta = \hat\theta}\right]^{-1}$$

- Outer product (BHHH):

$$\hat V(\hat\theta) = \left[\sum_{i=1}^n s_i(\hat\theta)s_i(\hat\theta)'\right]^{-1}$$

#### Hypothesis testing

Three asymptotically equivalent ways of testing $H_0$: $R\theta = q$ ($J$ restrictions):

- Wald test: estimate the unrestricted model, get $\hat\theta$; test whether $R\hat\theta - q \approx 0$.
  - Idea behind t and F tests.
- Likelihood ratio (LR) test: estimate the unrestricted model, get $\hat\theta$; estimate the restricted model (i.e., impose $R\theta = q$), get the restricted mle $\tilde\theta$; test whether $\ln L(\hat\theta) - \ln L(\tilde\theta)$ is different from 0.
- Lagrange multiplier (LM) test: estimate the model under the null, get $\tilde\theta$; test whether $s(\tilde\theta) \approx 0$.
  - Breusch-Pagan, Breusch-Godfrey.

#### Wald test

- Since $\hat\theta \overset{a}{\sim} N(\theta_0, V)$, we have $R\hat\theta \overset{a}{\sim} N(R\theta_0, RVR')$. Under $H_0$, $R\theta_0 = q$, and

$$W = (R\hat\theta - q)'[R\hat VR']^{-1}(R\hat\theta - q) \sim \chi^2(J).$$

#### Likelihood Ratio test

- The restricted model can't yield a higher likelihood:

$$\ln L(\hat\theta) - \ln L(\tilde\theta) \ge 0.$$

- Under $H_0$,

$$LR = 2[\ln L(\hat\theta) - \ln L(\tilde\theta)] \sim \chi^2(J).$$

*[Figure 1: The Likelihood Ratio Test.]*

#### Lagrange Multiplier (score) test

- Consider the constrained optimization problem $\max \ln L(\theta)$ subject to $R\theta = q$.
- Regress a column of 1's on the score contributions $s_i(\tilde\theta)$ and get the $R^2$ from this auxiliary regression:

$$LM = nR^2 \sim \chi^2(J).$$

*[Figure: The Lagrange Multiplier Test.]*
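The normal-regression results above can be checked numerically. The sketch below (simulated data, invented coefficient values) verifies that the conditional MLE of $\beta$ is OLS, that $\hat\sigma^2_{mle} = e'e/n$, and computes the LR statistic $2[\ln L(\hat\theta) - \ln L(\tilde\theta)]$ for the (deliberately false) restriction $\beta_3 = 0$, confirming it equals $n\ln(SSR_r/SSR_u)$.

```python
import numpy as np

# Simulated normal linear regression (invented parameter values)
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def conc_loglik(X, y):
    # OLS = conditional MLE of beta; concentrated log-likelihood at sigma2 = e'e/n
    b = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ b
    s2_ml = e @ e / len(y)
    ll = -0.5 * len(y) * (np.log(2 * np.pi) + np.log(s2_ml) + 1.0)
    return b, s2_ml, ll

b_u, s2_u, ll_u = conc_loglik(X, y)          # unrestricted model
b_r, s2_r, ll_r = conc_loglik(X[:, :2], y)   # restricted model: beta3 = 0

LR = 2.0 * (ll_u - ll_r)          # LR statistic, ~ chi2(1) under H0
LR_alt = n * np.log(s2_r / s2_u)  # equivalent closed form for normal regression
```

Because the restriction is false by construction, LR comes out far above the 5% critical value of a $\chi^2(1)$ (3.84), so the null would be rejected.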
### Generalized Least Squares: Autocorrelation

ECO 5314, Dr. Peter M. Summers, Texas Tech University, October 1, 2008

#### General error covariance matrix

- As with heteroskedasticity, the presence of autocorrelation (serial correlation) means OLS isn't BLUE.
- Also as before, the options are:
  1. Find an estimator that is BLUE, i.e., GLS.
  2. Correct ("robustify") the OLS standard errors.
  3. Investigate other possible sources of mis-specification.

#### First-order autocorrelation

Model:

$$y_t = x_t'\beta + \varepsilon_t, \qquad \varepsilon_t = \rho\varepsilon_{t-1} + v_t,$$

where $|\rho| < 1$ and $v_t \sim \text{iid}(0, \sigma_v^2)$, independent of $\varepsilon_{t-j}\ \forall j$.

First and second moments of $\varepsilon_t$:

$$E\varepsilon_t = \rho E\varepsilon_{t-1} + Ev_t = 0,$$

$$\text{Var}(\varepsilon_t) = \rho^2\,\text{Var}(\varepsilon_{t-1}) + \sigma_v^2 \;\Longrightarrow\; \text{Var}(\varepsilon_t) = \frac{\sigma_v^2}{1-\rho^2}.$$

These results follow from the assumption of stationarity:

- the mean & variance of a stationary series don't depend on time;
- this requires $|\rho| < 1$;
- the covariance doesn't depend on time, but may depend on the distance between observations.

Covariance:

$$\text{Cov}(\varepsilon_t, \varepsilon_{t-1}) = E[\varepsilon_t\varepsilon_{t-1}] = E[(\rho\varepsilon_{t-1} + v_t)\varepsilon_{t-1}] = \rho\,\frac{\sigma_v^2}{1-\rho^2}.$$

Similarly,

$$\text{Cov}(\varepsilon_t, \varepsilon_{t-k}) = \rho^k\,\frac{\sigma_v^2}{1-\rho^2}.$$

So the covariance matrix of the $\varepsilon$'s is

$$\sigma^2\Omega = \frac{\sigma_v^2}{1-\rho^2}
\begin{bmatrix}
1 & \rho & \rho^2 & \cdots & \rho^{T-1} \\
\rho & 1 & \rho & \cdots & \rho^{T-2} \\
\rho^2 & \rho & 1 & \cdots & \rho^{T-3} \\
\vdots & & & \ddots & \vdots \\
\rho^{T-1} & \rho^{T-2} & \rho^{T-3} & \cdots & 1
\end{bmatrix},$$

where T is the sample size.

#### GLS estimator

- As before, we can find a square-root matrix P such that $P'P = \Omega^{-1}$ (homework).
- Or just consider quasi-differencing:

$$y_t - \rho y_{t-1} = (x_t - \rho x_{t-1})'\beta + \varepsilon_t - \rho\varepsilon_{t-1} = (x_t - \rho x_{t-1})'\beta + v_t,$$

i.e., $y_t^* = x_t^{*\prime}\beta + v_t$.
- The error term in the transformed model is spherical, $E[vv'] = \sigma_v^2 I$, so OLS on this model is BLUE.
  - Not exactly BLUE, since we lose the 1st observation.
- Cochrane-Orcutt transformation: use quasi-differences as above; ignore the first observation.
- Prais-Winsten transformation: quasi-difference observations $2, \dots, T$; adjust the first observation by $y_1^* = \sqrt{1-\rho^2}\,y_1$, etc.

#### EGLS: estimation of ρ

$$\hat\rho_{ols} = \frac{\sum_{t=2}^T e_t e_{t-1}}{\sum_{t=2}^T e_{t-1}^2}$$

- Not BLUE (we lose the first observation), but easy.
- Iterative Cochrane-Orcutt:
  1. Estimate $\beta$ by OLS.
  2. Estimate $\hat\rho$ using the residuals.
  3. Re-estimate $\beta$ by EGLS using $\hat\rho$.
  4. Re-estimate $\hat\rho$ using the new residuals.
  5. Stop when the changes are small.

#### Testing for autocorrelation: Durbin-Watson test

$$d = \frac{\sum_{t=2}^T (e_t - e_{t-1})^2}{\sum_{t=1}^T e_t^2} \approx 2 - 2\hat\rho$$

- Valid if X is not stochastic (so no lagged y's) and if X contains an intercept.
- The distribution under $H_0$: $\rho = 0$ depends on T and k.
- Tabulated with upper & lower bounds $d_L$, $d_U$; may be inconclusive.
- Requires the Gauss-Markov assumptions and Normal errors.

#### Testing for autocorrelation: Breusch-Godfrey test

$H_0$: $\rho = 0$ vs. $H_1$: AR(p) or MA(p).

- Regress the residuals on X and p lagged values of the residuals; get $R_e^2$.
- The test statistic is $LM = TR_e^2$.
- If $H_0$ is true, $LM \sim \chi^2(p)$.

#### Testing for autocorrelation: Box-Pierce and Ljung Q tests

$H_0$: $\rho = 0$ vs. correlation of order p.

- Compute the first p sample correlations from the residuals:

$$r_j = \frac{\sum_{t=j+1}^T e_t e_{t-j}}{\sum_{t=1}^T e_t^2}.$$

- The Box-Pierce test statistic is $Q = T\sum_{j=1}^p r_j^2$.
- Ljung's test statistic is $Q' = T(T+2)\sum_{j=1}^p \dfrac{r_j^2}{T-j}$.
- If $H_0$ is true, $Q, Q' \sim \chi^2(p)$.
- $Q'$ gives a better approximation in moderate samples.

#### Testing for autocorrelation: what if $y_{t-1}$ is a regressor?

- Breusch-Godfrey LM
- Ljung / Box-Pierce Q
- Durbin's h:

$$h = \hat\rho\,\sqrt{\frac{T}{1 - T s_c^2}},$$

where $\hat\rho$ is the sample correlation of the residuals and $s_c^2$ is the estimated variance of the coefficient on $y_{t-1}$.
- If $H_0$ is true, $h \overset{a}{\sim} N(0, 1)$.

#### Robust estimation with OLS

- Assume:
  - $E(x_t\varepsilon_t) = 0$
  - $E(\varepsilon_t\varepsilon_s) = 0$ for $|t - s| > H$
- Then OLS is consistent, and we can use the Newey-West standard errors

$$\hat V(\hat\beta) = (X'X)^{-1}\,T\hat S\,(X'X)^{-1},$$

where

$$\hat S = \frac{1}{T}\sum_{t=1}^T e_t^2\, x_t x_t' + \frac{1}{T}\sum_{j=1}^H w_j \sum_{s=j+1}^T e_s e_{s-j}\,(x_s x_{s-j}' + x_{s-j} x_s')$$

and $w_j = 1 - \dfrac{j}{H+1}$.
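The iterative Cochrane-Orcutt procedure above can be sketched in a few lines of Python. The data below are simulated from a regression with AR(1) errors, with ρ and β values invented for illustration; the quasi-differencing step drops the first observation, exactly as in the notes.

```python
import numpy as np

# Simulated regression with AR(1) errors (invented parameter values)
rng = np.random.default_rng(2)
T = 500
rho_true, beta_true = 0.6, np.array([1.0, 2.0])
x = rng.normal(size=T)
X = np.column_stack([np.ones(T), x])
v = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = rho_true * eps[t - 1] + v[t]   # eps_t = rho*eps_{t-1} + v_t
y = X @ beta_true + eps

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

# Iterative Cochrane-Orcutt: alternate between estimating rho from the
# residuals and re-estimating beta on the quasi-differenced data
b = ols(X, y)
rho = 0.0
for _ in range(50):
    e = y - X @ b
    rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])  # rho_hat from residuals
    y_star = y[1:] - rho * y[:-1]               # quasi-differences, first obs dropped
    X_star = X[1:] - rho * X[:-1]
    b_new = ols(X_star, y_star)
    if np.max(np.abs(b_new - b)) < 1e-10:       # stop when changes are small
        b = b_new
        break
    b = b_new
```

Note that after quasi-differencing, the first column of `X_star` is the constant $1 - \rho$, so the coefficient on it is still $\beta_1$ itself.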
### Multiple linear regression

ECO 5314, Dr. Peter M. Summers, Texas Tech University, September 9, 2008

#### Readings

- Verbeek, ch. 2
- Greene, chs. 2-7

#### The k-variable linear model

We're considering linear models of the form

$$y_i = x_i'\beta + \varepsilon_i, \quad \text{or} \quad y = X\beta + \varepsilon.$$

"Linear" here means a linear function of $\beta$, not necessarily a linear relationship:

$$\ln y = \beta_1 + \beta_2 x, \qquad y = \beta_1 + \beta_2 x + \beta_3 x^2.$$

#### The Gauss-Markov assumptions

Apart from linearity and $\text{rank}(X) = k$, we assume the following:

1. $E\varepsilon_i = 0\ \forall i$
2. $\varepsilon$ and $X$ are independent
3. $\text{Var}(\varepsilon_i) = \sigma^2\ \forall i$
4. $\text{Cov}(\varepsilon_i, \varepsilon_j) = 0\ \forall i \ne j$

- Assumptions 3 & 4 imply $\text{Var}(\varepsilon) = \sigma^2 I$, also known as spherical disturbances:
  - $\text{Var}(\varepsilon_i) = \sigma^2\ \forall i$: homoskedasticity
  - $\text{Cov}(\varepsilon_i, \varepsilon_j) = 0\ \forall i \ne j$: no serial correlation
- Assumption 2 means none of the X's has any effect on any of the $\varepsilon$'s:
  - $E(\varepsilon_i|X) = 0$ and $\text{Var}(\varepsilon_i|X) = \sigma^2$
  - The X's can be deterministic or stochastic.

Under assumptions 1-4 (the Gauss-Markov assumptions):

- The OLS estimator $\hat\beta = (X'X)^{-1}X'y$ is unbiased: $E\hat\beta = \beta$.
- $\text{Var}(\hat\beta) = \sigma^2(X'X)^{-1}$.
- Gauss-Markov theorem: $\hat\beta$ has minimum variance among the class of all linear unbiased estimators ($\hat\beta$ is BLUE). If $\tilde\beta$ is any other linear unbiased estimator, then

$$\text{Var}(\tilde\beta) \ge \text{Var}(\hat\beta) = \sigma^2(X'X)^{-1}.$$

#### Small-sample properties of OLS estimators

We don't know $\sigma^2$, so we need an unbiased estimator for it:

$$s^2 = \frac{1}{n-k}\sum_{i=1}^n e_i^2.$$

So $\hat V(\hat\beta) = s^2(X'X)^{-1}$. For the kth coefficient, $\widehat{\text{Var}}(\hat\beta_k) = s^2 c_{kk}$, where $c_{kk}$ is the kth diagonal element of $(X'X)^{-1}$. If we have spherical disturbances, then $se(\hat\beta_k) = s\sqrt{c_{kk}}$ is the standard error of $\hat\beta_k$, needed for statistical inference.

Variation in the X's is good. Suppose

$$y_i = \beta_1 + \beta_2 x_{i2} + \beta_3 x_{i3} + \varepsilon_i.$$

Then

$$\text{Var}(\hat\beta_2) = \frac{\sigma^2}{(1 - r_{23}^2)\sum_{i=1}^n (x_{i2} - \bar x_2)^2},$$

where $r_{23}$ is the correlation between $x_2$ and $x_3$.

Small-sample inference requires an assumption about the distribution of $\varepsilon$. Most common is Normality:

$$\varepsilon_i \sim N(0, \sigma^2)\ \forall i, \quad \text{i.e.,} \quad \varepsilon \sim N(0, \sigma^2 I).$$

In that case,

$$\hat\beta \sim N(\beta, \sigma^2(X'X)^{-1}).$$

- The Gauss-Markov assumptions and Normally distributed errors mean that, for an individual coefficient $\beta_k$,

$$z_k = \frac{\hat\beta_k - \beta_k}{\sigma\sqrt{c_{kk}}}$$

is a standard Normal variable, i.e., $z_k \sim N(0, 1)$.
- Replacing $\sigma$ by $s$ means that this isn't true any more. However,

$$t_k = \frac{\hat\beta_k - \beta_k}{s\sqrt{c_{kk}}}$$

has a Student t distribution with $N - k$ degrees of freedom.

#### Hypothesis testing

General approach to hypothesis testing:

1. State the null and alternative hypotheses.
2. Compute a test statistic whose distribution is of a known form under the null hypothesis (i.e., if the null is true).
3. Specify a significance level for the test.
4. Construct a confidence interval, compare the test statistic to the appropriate critical values, and/or compute the p-value for the test.
5. Draw conclusions about the likelihood of the null hypothesis being true.

#### Hypothesis testing example: wages and gender

$$wage = \beta_1 + \beta_2 \times male + \varepsilon$$

- Null hypothesis $H_0$: $\beta_2 = 0$
- Alternative $H_1$: $\beta_2 \ne 0$ (2-sided)
- Alternative $H_1$: $\beta_2 > 0$ (1-sided)

The test statistic is

$$t_2 = \frac{\hat\beta_2}{se(\hat\beta_2)}.$$

- Under $H_0$, $t_2 \sim t_{N-2}$.
- Significance level $\alpha$; 5% is most commonly used.
- Compare $t_2$ to $t_{N-2;\,\alpha/2}$ for a 2-sided test, or to $t_{N-2;\,\alpha}$ for a 1-sided test.
- For large N, $t_2$ is approximately $N(0, 1)$, so use $z_{0.025} = 1.96$ or $z_{0.05} = 1.645$.
- Reject $H_0$: $\beta_2 = 0$ in favor of $H_1$: $\beta_2 \ne 0$ if $|t_2| > 1.96$.
- Reject $H_0$: $\beta_2 = 0$ in favor of $H_1$: $\beta_2 > 0$ if $t_2 > 1.645$.

Since $t_2 \sim t_{N-k}$ under the null, then with probability $1 - \alpha$,

$$-t_{N-k;\,\alpha/2} \le t_2 \le t_{N-k;\,\alpha/2},$$

or

$$\hat\beta_2 - t_{N-k;\,\alpha/2}\,se(\hat\beta_2) \le \beta_2 \le \hat\beta_2 + t_{N-k;\,\alpha/2}\,se(\hat\beta_2).$$

This is the $(1-\alpha)$ confidence interval.

#### Goodness of fit

- Given that $\hat\beta_2$ is significantly different from 0 (or just "significant"), how well is the variation in wages explained by gender differences?
- Consider $y = X\beta + \varepsilon$. The total variation in y is $\sum_{i=1}^n (y_i - \bar y)^2 = TSS$, the total sum of squares.
- The model predicts y using $\hat y = X\hat\beta$. The variation in $\hat y$ is $\sum_{i=1}^n (\hat y_i - \bar y)^2 = ESS$, the explained sum of squares.
- A measure of goodness of fit is

$$R^2 = \frac{ESS}{TSS}.$$
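The wage-and-gender t-test can be traced through in code. The wages, sample size, and gender gap below are simulated with invented numbers, not real data; the test statistic, rejection rule, and confidence interval follow the formulas in the notes.

```python
import numpy as np

# Simulated wage data: wage = b1 + b2*male + eps, with invented b2 = 2
rng = np.random.default_rng(4)
N = 400
male = rng.integers(0, 2, size=N).astype(float)
wage = 10.0 + 2.0 * male + rng.normal(scale=3.0, size=N)

X = np.column_stack([np.ones(N), male])
k = X.shape[1]
b = np.linalg.solve(X.T @ X, X.T @ wage)
e = wage - X @ b
s2 = e @ e / (N - k)                 # unbiased estimator of sigma^2
c = np.linalg.inv(X.T @ X)
se_b2 = np.sqrt(s2 * c[1, 1])        # standard error of beta2_hat
t2 = b[1] / se_b2                    # test statistic under H0: beta2 = 0

reject_two_sided = abs(t2) > 1.96    # alpha = 5%, large-N critical value
ci = (b[1] - 1.96 * se_b2, b[1] + 1.96 * se_b2)   # 95% confidence interval
```

With a true gap of 2 and this sample size, $t_2$ is several times larger than 1.96, so the two-sided test rejects $H_0$: $\beta_2 = 0$.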
#### Goodness of fit

Recall that OLS splits y into two orthogonal pieces, $\hat y = X\hat\beta$ and $e$. Since these are orthogonal, we can write

$$\text{Var}(y) = \text{Var}(\hat y) + \text{Var}(e),$$

so that

$$R^2 = 1 - \frac{\text{Var}(e)}{\text{Var}(y)} = 1 - \frac{SSR}{TSS},$$

where SSR is the sum of squared residuals.

Interpretation of $R^2$:

- Fraction of the variance of y explained by X.
- Measure of the quality of the linear approximation.
- Relationship between variation in y and variation in $\hat y$:

$$R^2 = \text{corr}(y, \hat y)^2 = \frac{\left[\sum_{i=1}^n (y_i - \bar y)(\hat y_i - \bar y)\right]^2}{\sum_{i=1}^n (y_i - \bar y)^2 \times \sum_{i=1}^n (\hat y_i - \bar y)^2}.$$

Other things equal, higher values of $R^2$ are better. But:

- Sometimes $R^2 = 0.2$ is high; sometimes $R^2 = 0.9$ is low.
- Trying to maximize $R^2$ can be bad practice.
- $R^2$ can't fall when a new variable is added.

Adjusted $R^2$:

$$\bar R^2 = 1 - \frac{\frac{1}{N-k}\sum_{i=1}^N e_i^2}{\frac{1}{N-1}\sum_{i=1}^N (y_i - \bar y)^2} = 1 - \frac{N-1}{N-k}\cdot\frac{SSR}{TSS}$$

- Penalty term for additional variables: $(N-1)/(N-k)$.
- SSR must fall enough ($R^2$ must increase enough) to offset the penalty.
- $\bar R^2$ will increase if and only if the added variable's t-ratio satisfies $|t_k| > 1$.

#### More general hypothesis testing

Suppose we want to test whether some subset of the $\beta$'s, say the last J of them, are all equal to zero:

$$H_0:\ \beta_{k-J+1} = \cdots = \beta_{k-1} = \beta_k = 0,$$

against the alternative that at least one of these is not zero. We can estimate both models and compare the change in SSR:

- The SSR from the larger model will be lower.
- If $H_0$ is true, the change will be small.

Two results from probability theory (see the probability appendix in Verbeek or in Greene):

- Result 1: If z is a $J \times 1$ vector of independent standard Normal random variables, then $x = z'z = \sum_{j=1}^J z_j^2$ has a chi-squared distribution with J degrees of freedom: $x \sim \chi^2(J)$. If the variance of the z's is $\sigma^2$ rather than 1, then $x/\sigma^2 \sim \chi^2(J)$.
- Result 2: If $x_1$ and $x_2$ are independent chi-squared random variables with $n_1$ and $n_2$ degrees of freedom, then

$$f = \frac{x_1/n_1}{x_2/n_2}$$

has an F distribution with $n_1$ and $n_2$ degrees of freedom: $f \sim F(n_1, n_2)$.

Let $S_0$ and $S_1$ be the sums of squared residuals (SSR) from the restricted and full models, so $S_0$ is from the model with the last J coefficients equal to 0. With the Gauss-Markov assumptions and Normal residuals, if $H_0$ is true then

$$\frac{S_0 - S_1}{\sigma^2} \sim \chi^2(J).$$

Also, the variance estimator $s^2$ has the property that

$$\frac{(N-k)s^2}{\sigma^2} = \frac{S_1}{\sigma^2} \sim \chi^2(N-k).$$

Finally, $S_0 - S_1$ is independent of $s^2$. Therefore, if $H_0$ is true,

$$F = \frac{(S_0 - S_1)/J}{S_1/(N-k)} \sim F(J, N-k).$$

Large values of F mean that the unrestricted model fits the data significantly better than the restricted model, i.e., $S_1$ is substantially smaller than $S_0$. This would be evidence against $H_0$. We can also rewrite F in terms of the $R^2$'s from the two models:

$$F = \frac{(R_1^2 - R_0^2)/J}{(1 - R_1^2)/(N-k)}.$$

Suppose now that we have a set of J different linear restrictions on the coefficients $\beta$. We can write these restrictions as

$$R\beta = q,$$

where R is a $J \times k$ matrix and q is a $J \times 1$ vector. For example, restrictions such as $\beta_2 = \beta_3 = \cdots = \beta_k = 0$, or $\beta_2 = \beta_3$, can be put in this form with suitable choices of R and q.

Testing $H_0$: $R\beta = q$ vs. $H_1$: $R\beta \ne q$:

- We could proceed as before: estimate both models and use an F test to compare the $R^2$'s.
- An alternative, requiring only 1 model, is the Wald test.
- Another result: $\hat\beta \sim N(\beta, V(\hat\beta))$, where $V(\hat\beta) = \sigma^2(X'X)^{-1}$.
- Under $H_0$ (i.e., if the restrictions are valid), the quadratic form

$$W = (R\hat\beta - q)'\,[R\,V(\hat\beta)\,R']^{-1}\,(R\hat\beta - q)$$

is distributed $\chi^2(J)$. Using $V(\hat\beta) = \sigma^2(X'X)^{-1}$,

$$W = (R\hat\beta - q)'\,[\sigma^2 R(X'X)^{-1}R']^{-1}\,(R\hat\beta - q) = \frac{(R\hat\beta - q)'\,[R(X'X)^{-1}R']^{-1}\,(R\hat\beta - q)}{\sigma^2}.$$

W is the Wald statistic.

We can't use W directly, because we don't know $\sigma^2$. But consider

$$F = \frac{W/J}{\dfrac{(N-k)s^2/\sigma^2}{N-k}} = \frac{(R\hat\beta - q)'\,[\sigma^2 R(X'X)^{-1}R']^{-1}\,(R\hat\beta - q)/J}{[(N-k)s^2/\sigma^2]/(N-k)}.$$

F is the ratio of two independent $\chi^2$ random variables, each divided by its degrees of freedom. Simplifying a bit, we get

$$F = \frac{(R\hat\beta - q)'\,[s^2 R(X'X)^{-1}R']^{-1}\,(R\hat\beta - q)}{J} \sim F(J, N-k).$$
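The algebraic equivalence between the SSR form of the F statistic and the Wald form (with $\sigma^2$ replaced by $s^2$) can be verified numerically. The data and coefficients below are simulated purely for illustration; R selects the last two coefficients, which are truly zero in the simulated design.

```python
import numpy as np

# Simulated data; the last J = 2 coefficients are truly zero (invented setup)
rng = np.random.default_rng(3)
N, k, J = 300, 4, 2
X = np.column_stack([np.ones(N), rng.normal(size=(N, 3))])
beta_true = np.array([1.0, 0.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=N)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
S1 = e @ e                                  # SSR, full model

Xr = X[:, :k - J]                           # restricted model: last J coefficients = 0
br = np.linalg.solve(Xr.T @ Xr, Xr.T @ y)
er = y - Xr @ br
S0 = er @ er                                # SSR, restricted model

F_ssr = ((S0 - S1) / J) / (S1 / (N - k))    # SSR form of the F statistic

# Wald form: R*beta = q selects the last J coefficients
R = np.zeros((J, k))
R[0, 2], R[1, 3] = 1.0, 1.0
q = np.zeros(J)
s2 = S1 / (N - k)
V = s2 * np.linalg.inv(X.T @ X)             # estimated covariance of beta_hat
W = (R @ b - q) @ np.linalg.solve(R @ V @ R.T, R @ b - q)
F_wald = W / J
```

The two forms agree to machine precision, which is the algebraic identity behind moving between the SSR comparison and the Wald quadratic form.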
