Actuarial Statistics MATH 3621
This 12-page set of class notes was uploaded by Mary Veum on Thursday, September 17, 2015. The notes belong to MATH 3621 at the University of Connecticut, taught by Emiliano Valdez in Fall. Since their upload, the notes have received 60 views. For similar materials see /class/205812/math-3621-university-of-connecticut in Mathematics (M) at University of Connecticut.
Multiple Regression Model and Estimation
Math 3621 Applied Actuarial Statistics, Fall 2009 semester
E.A. Valdez, University of Connecticut, Storrs
Lecture Weeks 6-7

The observable data

- Assume our observed data set consists of (X_i0, X_i1, ..., X_ik, Y_i), for i = 1, 2, ..., n.
- n is the total number of observations.
- X_i0 is associated with the intercept term and is usually 1.
- k is the number of explanatory variables.
- Define the vector of responses Y and the matrix of explanatory variables X as

          [ Y_1 ]            [ X_10  X_11  X_12  ...  X_1k ]
      Y = [ Y_2 ]   and  X = [ X_20  X_21  X_22  ...  X_2k ]
          [ ... ]            [  ...   ...   ...  ...   ... ]
          [ Y_n ]            [ X_n0  X_n1  X_n2  ...  X_nk ]

The regression model in matrix form

- Define the vector of regression coefficients and the vector of errors as

      β = (β_0, β_1, ..., β_k)'   and   ε = (ε_1, ε_2, ..., ε_n)'.

- In matrix form, the regression model partitions the response into a systematic component Xβ and a random component ε, as follows:

      Y = Xβ + ε,   where E(ε) = 0 and Var(ε) = σ²I.

- As a consequence, we have E(Y) = Xβ and Var(Y) = σ²I.
- Note that I refers to the identity matrix of suitable dimension.

A specific individual observation

- For a specific observation i, define the row vector of observed explanatory variables by

      X_i = (X_i0, X_i1, ..., X_ik).

- Thus we see that the regression model for this specific observation can be written as

      Y_i = X_i β + ε_i.
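The matrix form of the model above can be made concrete with a short simulation. This is a minimal sketch, not part of the lecture: the sample size, coefficient values, and error standard deviation below are all illustrative.

```python
import numpy as np

# Simulate data from Y = X beta + eps, with E(eps) = 0 and Var(eps) = sigma^2 I.
# All numeric values here are hypothetical, chosen only for illustration.
rng = np.random.default_rng(0)

n, k = 50, 2                          # n observations, k explanatory variables
beta = np.array([1.0, 2.0, -0.5])    # (beta_0, beta_1, ..., beta_k)'
sigma = 0.5                          # error standard deviation

# Design matrix X: the first column is the intercept column, X_i0 = 1
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])

# Response vector Y
eps = rng.normal(scale=sigma, size=n)
Y = X @ beta + eps

print(X.shape, Y.shape)   # (50, 3) (50,)
```

Note that X has k + 1 columns, matching the k + 1 parameters (the intercept plus one coefficient per explanatory variable).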
Least squares estimates

- The least squares estimate of β, denoted by b, minimizes the sum of squares

      SS(β) = (Y − Xβ)'(Y − Xβ).

- Note that there are k + 1 parameters to estimate, including the intercept.
- Differentiating and then setting to zero, we have the normal equations

      X'Xb = X'Y,

  where b is the least squares vector.
- Provided X'X is invertible, we have

      b = (X'X)⁻¹X'Y.

The hat, or projection, matrix

- Define the hat matrix

      H = X(X'X)⁻¹X',

  which gives the orthogonal projection of Y onto the space spanned by X. See Figure 2.1 of Faraway (2005).
- This matrix is useful for theoretical manipulations, such as:
  - Fitted values: Ŷ = HY = Xb
  - Residuals: e = Y − Ŷ = (I − H)Y
  - Residual sum of squares: RSS = e'e = Y'(I − H)Y
- Note that the RSS is also called the error sum of squares (Error SS).

Properties of the parameter estimates

- Unbiased estimates: E(b) = β.
- Variance-covariance matrix: Var(b) = σ²(X'X)⁻¹.
- Estimate for σ²:

      s² = Error SS / (n − k − 1) = Error MS.

- Standard error for a particular component of b:

      se(b_j) = s · √[j-th diagonal element of (X'X)⁻¹],   for j = 1, 2, ..., k + 1.

Gauss–Markov theorem

- There are some reasons why the least squares estimates b are good estimates of β:
  - Geometrically, they make sense because they result from an orthogonal projection onto the linear space spanned by X.
  - The least squares estimates are equivalent to maximum likelihood estimates in the case where the errors are i.i.d. normally distributed.
  - According to the Gauss–Markov theorem, the least squares estimates are Best Linear Unbiased Estimates (BLUE).
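The formulas above (normal equations, hat matrix, s², standard errors) can be checked numerically. This sketch uses simulated data with hypothetical values; in practice `np.linalg.lstsq` is preferred over forming (X'X)⁻¹ explicitly, which is done here only to mirror the notation.

```python
import numpy as np

# Simulated data (illustrative values, not from the lecture)
rng = np.random.default_rng(0)
n, k = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
Y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.5, size=n)

# Normal equations X'X b = X'Y  =>  b = (X'X)^(-1) X'Y
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ Y

# Hat matrix H = X (X'X)^(-1) X' and the quantities it produces
H = X @ XtX_inv @ X.T
Y_hat = H @ Y               # fitted values, identical to X @ b
e = Y - Y_hat               # residuals, (I - H) Y
RSS = e @ e                 # residual (error) sum of squares

# s^2 = Error SS / (n - k - 1) estimates sigma^2; standard errors follow
s2 = RSS / (n - k - 1)
se = np.sqrt(s2 * np.diag(XtX_inv))

assert np.allclose(Y_hat, X @ b)   # H Y equals X b
assert np.allclose(H @ H, H)       # H is idempotent, i.e. a projection
```

The two assertions at the end verify the projection interpretation: applying H twice is the same as applying it once, and HY lands exactly on the fitted values Xb.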
- Details of the proof of the Gauss–Markov theorem will be provided in lectures.

The ANOVA table

- Very similarly to what was done in the simple linear regression, we can decompose the total variability into variability due to the regression and variability due to the error.
- This decomposition allows us to account for these variabilities in an ANOVA table:

      Source       Sum of squares   df           Mean square     F-ratio
      regression   Regression SS    k            Regression MS   Regression MS / Error MS
      error        Error SS         n − k − 1    Error MS
      total        Total SS         n − 1

Some goodness of fit measures

- The proportion of variability explained by the regression model (still just like the simple linear regression model) is

      R² = Regression SS / Total SS = Σᵢ(Ŷᵢ − Ȳ)² / Σᵢ(Yᵢ − Ȳ)².

- This is also called the coefficient of determination.
- When an explanatory variable is added to the regression model, unfortunately, this R² never decreases.
- The adjusted R², defined by

      R²ₐ = 1 − [Error SS / (n − k − 1)] / [Total SS / (n − 1)] = 1 − s² / s²_Y,

  provides the proportion of the variation explained by the regression, but adjusted for the number of predictor variables (or degrees of freedom).

Morton Lane's study of catastrophic bonds

- Published in the ASTIN Bulletin, Vol. 30, Year 2000, pp. 259–293.
- Lane fitted regression models to help explain the pricing of risk transfer in the catastrophic bond market.
- CAT bonds refer to securities that provide for coupon payments and principal based on the aggregate losses of a portfolio of insurance contracts.
- CAT bonds are meant to provide insurance companies a way to manage catastrophic insurance risks and, at the same time, to give investors the opportunity to profit from the transfer of insurance risks.
- Lane considered 16 catastrophic bond issues made in 1999.
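The ANOVA decomposition and the goodness-of-fit measures above, the kind of summary a regression study such as Lane's would report for each fitted model, can be sketched as follows. The data are simulated and all values are illustrative.

```python
import numpy as np

# ANOVA decomposition, R^2, adjusted R^2, and F-ratio for a fitted model,
# sketched on simulated data (hypothetical values, not Lane's CAT bond data).
rng = np.random.default_rng(0)
n, k = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
Y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.5, size=n)

b, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ b
Y_bar = Y.mean()

total_SS = np.sum((Y - Y_bar) ** 2)
regression_SS = np.sum((Y_hat - Y_bar) ** 2)
error_SS = np.sum((Y - Y_hat) ** 2)

# With an intercept in the model, Total SS = Regression SS + Error SS
assert np.isclose(total_SS, regression_SS + error_SS)

r2 = regression_SS / total_SS        # coefficient of determination
r2_adj = 1 - (error_SS / (n - k - 1)) / (total_SS / (n - 1))
f_ratio = (regression_SS / k) / (error_SS / (n - k - 1))
```

The assertion checks the ANOVA identity, which holds whenever the model includes an intercept; the adjusted R² is never larger than R², since it penalizes the extra degrees of freedom used by the predictors.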