# TIME SERIES ANALYSIS STA 6857

These 109 pages of class notes were uploaded by Golden Bernhard on Friday, September 18, 2015. The notes belong to STA 6857 at the University of Florida, taught by Arthur Berg in the fall. Since upload, they have received 19 views. For similar materials, see /class/206568/sta-6857-university-of-florida in Statistics at the University of Florida.

## STA 6857 — Regression with Autocorrelated Errors (§5.5)

*Background: Cochrane and Orcutt (1949), "Application of Least Squares Regression to Relationships Containing Auto-Correlated Error Terms," point out that autocorrelated error terms require modification of the usual methods of estimation and prediction. Residuals in most current formulations of economic relations are highly positively autocorrelated; when estimates of the autoregressive properties of the errors are used, the lost efficiency in estimation and prediction can be regained.*

### The Problem

Consider the typical regression model
$$y_t = \beta' z_t + x_t,$$
where $x_t$ is a process with covariance function $\gamma(s,t)$. The matrix formulation is
$$y = Z\beta + x,$$
where $x = (x_1,\dots,x_n)'$, $y = (y_1,\dots,y_n)'$, and $Z = (z_1,\dots,z_n)'$. When the residual process $x_t$ is uncorrelated, standard theory applies, producing the optimal estimator of $\beta$:
$$\hat\beta = (Z'Z)^{-1}Z'y.$$

### Correlated Errors

However, if the process $x_t$ has correlation, i.e. when $\Gamma = \{\gamma(s,t)\}$ is not a constant multiple of the identity matrix, then the usual estimator $\hat\beta$ is no longer optimal. In this case the optimal estimator is the generalized least squares solution. Given the covariance matrix $\Gamma$, it is possible to find a transformation matrix $A$ such that $A\Gamma A' = I$, where $I$ denotes the identity matrix; in particular, $A'A = \Gamma^{-1}$.

### Generalized Least Squares

Multiplying the matrix equation $y = Z\beta + x$ by $A$ yields
$$Ay = AZ\beta + Ax, \qquad \text{i.e.} \qquad u = U\beta + w,$$
where $u = Ay$, $U = AZ$, and $w$ is a white noise vector with covariance matrix $I$. Applying the usual estimator of $\beta$ in the transformed equation gives
$$\hat\beta_{GLS} = (U'U)^{-1}U'u = (Z'A'AZ)^{-1}Z'A'Ay = (Z'\Gamma^{-1}Z)^{-1}Z'\Gamma^{-1}y.$$

### Two Approaches

Without knowing $\Gamma = \{\gamma(s-t)\}$ we are at a loss. Two approaches to estimating $\Gamma$ are provided, and both assume stationarity of the process $x_t$.

**Nonparametric approach.** Here $\Gamma$ is estimated by $\hat\Gamma = \{\hat\gamma(s-t)\}$. However, $\hat\gamma(h)$ is poorly estimated when $h$ is large, and modifications are required.
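The GLS identity above (direct formula versus ordinary least squares on the $A$-transformed system) can be checked numerically. A minimal pure-Python sketch, not from the notes: the data, the single predictor, and the diagonal covariance $\Gamma$ are all hypothetical, chosen so every matrix product collapses to a scalar sum.

```python
import math

# Hypothetical single-predictor data y_i = beta*z_i + x_i with a known
# *diagonal* error covariance Gamma = diag(g_i) (heteroscedastic errors),
# so all the matrix algebra reduces to scalar sums.
z = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.7]
g = [1.0, 4.0, 1.0, 4.0, 1.0]          # diagonal of Gamma

# GLS directly: beta = (Z' Gamma^-1 Z)^-1 Z' Gamma^-1 y
beta_gls = sum(zi * yi / gi for zi, yi, gi in zip(z, y, g)) / \
           sum(zi * zi / gi for zi, gi in zip(z, g))

# Same thing via the transformation A = Gamma^(-1/2): u = Ay, U = AZ,
# then ordinary least squares on the transformed (white-noise) system.
u = [yi / math.sqrt(gi) for yi, gi in zip(y, g)]
U = [zi / math.sqrt(gi) for zi, gi in zip(z, g)]
beta_ols_transformed = sum(Ui * ui for Ui, ui in zip(U, u)) / sum(Ui * Ui for Ui in U)

assert abs(beta_gls - beta_ols_transformed) < 1e-12
```

The two routes agree to machine precision, which is exactly the algebraic point of the slide: $A'A = \Gamma^{-1}$ makes OLS on the transformed system equal to GLS on the original one.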
**Parametric approach.** This approach is due to Cochrane and Orcutt (1949) and involves fitting an ARMA model to the residuals, then regressing a transformed model. Suppose $\phi(B)x_t = \theta(B)w_t$; then the transformation
$$u_t = \frac{\phi(B)}{\theta(B)}\,y_t, \qquad v_t = \frac{\phi(B)}{\theta(B)}\,z_t$$
produces a least squares model with uncorrelated errors, i.e. the model $u_t = \beta' v_t + w_t$, where $w_t$ is white noise.

### Cochrane–Orcutt (1949) Procedure

1. Obtain the residuals $\hat x_t = y_t - \hat\beta' z_t$ via the usual least squares routine.
2. Fit an ARMA model to the residuals $\hat x_t$, say $\hat\phi(B)\hat x_t = \hat\theta(B)\hat w_t$.
3. Apply the ARMA transformation to both sides of the linear model, i.e. compute $u_t = \frac{\hat\phi(B)}{\hat\theta(B)}y_t$ and $v_t = \frac{\hat\phi(B)}{\hat\theta(B)}z_t$.
4. Run ordinary least squares regression on the transformed regression model, i.e. compute $\hat\beta = (V'V)^{-1}V'u$.

[Figure: scatterplot matrix of mortality, temperature, and particulates — `pairs(cbind(mort, temp, part))`.]

### Four Models

Data from 1970 to 1979 in LA:

- $M_t$ represents the cardiac mortality
- $T_t$ represents the temperature
- $P_t$ represents the particulate pollution

Four proposed models:

1. $M_t = \beta_0 + \beta_1 t + w_t$
2. $M_t = \beta_0 + \beta_1 t + \beta_2(T_t - \bar T) + w_t$
3. $M_t = \beta_0 + \beta_1 t + \beta_2(T_t - \bar T) + \beta_3(T_t - \bar T)^2 + w_t$
4. $M_t = \beta_0 + \beta_1 t + \beta_2(T_t - \bar T) + \beta_3(T_t - \bar T)^2 + \beta_4 P_t + w_t$

```r
temp  <- temp - mean(temp)
temp2 <- temp^2
trend <- time(mort)
fit   <- lm(mort ~ trend + temp + temp2 + part, na.action = NULL)
summary(fit)
```

[R output, abridged: all four predictors are highly significant (every p-value below 1e-14); residuals range from about $-19.08$ to $29.24$; residual standard error 6.385 on 503 degrees of freedom; multiple R-squared 0.5954, adjusted R-squared 0.5922; F-statistic 185 on 4 and 503 DF, p-value < 2e-16.]
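The four-step Cochrane–Orcutt procedure above can be sketched end to end. A pure-Python illustration on synthetic data, not the LA mortality data from the notes: the data-generating values (`b0`, `b1`, `phi`) are invented, and the residual model is simplified to an AR(1) so step 3 is a plain quasi-difference.

```python
import random

# Synthetic regression with AR(1) errors:
#   y_t = b0 + b1*z_t + x_t,   x_t = phi*x_{t-1} + w_t.
random.seed(42)
n, b0, b1, phi = 4000, 2.0, 3.0, 0.7
z, y, x = [], [], 0.0
for _ in range(n):
    zt = random.gauss(0, 1)
    x = phi * x + random.gauss(0, 1)
    z.append(zt)
    y.append(b0 + b1 * zt + x)

def ols(zs, ys):
    """Simple linear regression; returns (intercept, slope)."""
    m = len(zs)
    zbar, ybar = sum(zs) / m, sum(ys) / m
    sxy = sum((a - zbar) * (b - ybar) for a, b in zip(zs, ys))
    sxx = sum((a - zbar) ** 2 for a in zs)
    slope = sxy / sxx
    return ybar - slope * zbar, slope

# Step 1: usual least squares; keep the residuals.
a0, a1 = ols(z, y)
resid = [yt - a0 - a1 * zt for zt, yt in zip(z, y)]

# Step 2: fit an AR(1) to the residuals (lag-1 regression through the origin).
phi_hat = sum(resid[t] * resid[t - 1] for t in range(1, n)) / \
          sum(r * r for r in resid[:-1])

# Step 3: quasi-difference both sides: u_t = y_t - phi_hat*y_{t-1}, etc.
u = [y[t] - phi_hat * y[t - 1] for t in range(1, n)]
v = [z[t] - phi_hat * z[t - 1] for t in range(1, n)]

# Step 4: OLS on the transformed model; its intercept is b0*(1 - phi_hat).
c0, b1_hat = ols(v, u)
b0_hat = c0 / (1 - phi_hat)
```

With this sample size, `phi_hat`, `b1_hat`, and `b0_hat` land close to the generating values 0.7, 3.0, and 2.0, and the transformed residuals are approximately white.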
## STA 6857 — Regression with Autocorrelated Errors & Transfer Function Modeling (§5.5–5.6)

Presentation times: (1) Regression with Autocorrelated Errors, (2) Transfer Function Modeling.

### ACF and PACF of Residuals

```r
acf(fit$resid)
pacf(fit$resid)
```

[Figure: ACF and PACF of the mortality-regression residuals, suggesting AR(2) structure.]

```r
fit2 <- ar.ols(fit$resid, aic = F, order.max = 2)
fit2
```

```
Coefficients:
     1       2
0.2205  0.3625

Intercept: -0.002895 (0.2472)

Order selected 2  sigma^2 estimated as  30.92
```

### Summary of the Old and New Fits

The fitted AR(2) is used to transform each variable, e.g. $u_t = (1 - \hat\phi_1 B - \hat\phi_2 B^2)\,y_t$, via `filter()`:

```r
Mort  <- filter(mort,  c(1, -.2205, -.3625), sides = 1)
Trend <- filter(trend, c(1, -.2205, -.3625), sides = 1)
Temp  <- filter(temp,  c(1, -.2205, -.3625), sides = 1)
Temp2 <- filter(temp2, c(1, -.2205, -.3625), sides = 1)
Part  <- filter(part,  c(1, -.2205, -.3625), sides = 1)
fit3  <- lm(Mort ~ Trend + Temp + Temp2 + Part)
summary(fit3)
```

[R output, abridged: all predictors remain highly significant in the transformed regression, with residual standard error 5.221 on 501 degrees of freedom, compared with 6.385 on 503 for the untransformed fit.]

### ACF and PACF of Residuals (Transformed Fit)

```r
acf(fit3$resid, lwd = 3)
pacf(fit3$resid, lwd = 3)
```

[Figure: ACF and PACF of the transformed-fit residuals, now consistent with white noise.]

### Alternative Method: gls

```r
library(nlme)
fit.gls <- gls(mort ~ trend + temp + temp2 + part,
               correlation = corARMA(p = 2))
```
```
Generalized least squares fit by maximum likelihood
  Model: mort ~ trend + temp + temp2 + part
  Log-likelihood: -1541.9037

Correlation Structure: ARMA(2,0)
 Parameter estimate(s):
     Phi1      Phi2
0.3848530 0.4326282
Degrees of freedom: 508 total; 503 residual
Residual standard error: 7.699336
```

The ACF plot of the residuals still showed strong correlation, however, so the fit may need some tweaking.

### Transfer Function Modeling

This is a time domain solution to the lagged regression problem we considered earlier using spectral analysis. It is often easier to fit a transfer function model in the spectral domain, as before, than in the time domain.

Recall the problem: we wish to find the filter $\alpha(B)$ such that
$$y_t = \sum_{j=0}^{\infty}\alpha_j x_{t-j} + \eta_t = \alpha(B)x_t + \eta_t.$$
Box and Jenkins considered the following model for $\alpha(B)$:
$$\alpha(B) = \frac{\delta(B)B^d}{\omega(B)},$$
where $\omega(B) = 1 - \omega_1 B - \omega_2 B^2 - \cdots - \omega_r B^r$ and $\delta(B) = \delta_0 + \delta_1 B + \cdots + \delta_s B^s$. Estimation of the parameters is somewhat involved, and you are referred to the text for the details.

## STA 6857 — Estimation (§3.6)

Outline: (1) Yule–Walker, (2) Least Squares, (3) Maximum Likelihood.

### Yule–Walker

The Yule–Walker equations give us a means of estimating the coefficients $\phi_1, \phi_2, \dots, \phi_p$ in an AR($p$) model.

George Udny Yule (1871–1951) was a Scottish statistician who taught at Cambridge University. The Yule distribution (a discrete power law) is named after him.

> "Every cell phone call solves the Yule–Walker equations every ten microseconds." — Thierry Dutoit

Sir Gilbert Thomas Walker (1868–1958) was a British physicist and statistician known for his description of the Southern Oscillation.

### Deriving the Equations

Start with the mean-zero AR($p$) model
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t. \tag{$\ast$}$$
Multiply both sides of $(\ast)$ by $x_{t-h}$ for $h = 1, \dots, p$:
$$x_t x_{t-h} = \phi_1 x_{t-1}x_{t-h} + \phi_2 x_{t-2}x_{t-h} + \cdots + \phi_p x_{t-p}x_{t-h} + w_t x_{t-h}.$$
Take expectations throughout:
$$\gamma(h) = \phi_1\gamma(h-1) + \phi_2\gamma(h-2) + \cdots + \phi_p\gamma(h-p), \qquad h = 1, \dots, p. \tag{1}$$
Now multiply $(\ast)$ by $x_t$ (the case $h = 0$) and take expectations:
$$\gamma(0) = \phi_1\gamma(1) + \phi_2\gamma(2) + \cdots + \phi_p\gamma(p) + \sigma_w^2.$$
Rearranging gives
$$\sigma_w^2 = \gamma(0) - \phi_1\gamma(1) - \cdots - \phi_p\gamma(p). \tag{2}$$
Equations (1) and (2) are the Yule–Walker equations.
### YW Equations in Matrix Form

From the recurrence
$$\gamma(h) = \phi_1\gamma(h-1) + \phi_2\gamma(h-2) + \cdots + \phi_p\gamma(h-p), \tag{1}$$
extract the $p$ equations
$$\begin{aligned}
\gamma(1) &= \phi_1\gamma(0) + \phi_2\gamma(1) + \cdots + \phi_p\gamma(p-1)\\
\gamma(2) &= \phi_1\gamma(1) + \phi_2\gamma(0) + \cdots + \phi_p\gamma(p-2)\\
&\ \,\vdots\\
\gamma(p) &= \phi_1\gamma(p-1) + \phi_2\gamma(p-2) + \cdots + \phi_p\gamma(0).
\end{aligned}$$
Hence $\Gamma_p\phi = \gamma_p$, where $\Gamma_p = \{\gamma(k-j)\}_{j,k=1}^{p}$, $\phi = (\phi_1,\dots,\phi_p)'$, and $\gamma_p = (\gamma(1),\dots,\gamma(p))'$.

### Estimation Using the YW Equations

Under the method of moments approach, we estimate $\phi_1, \dots, \phi_p$ with
$$\hat\phi = \hat\Gamma_p^{-1}\hat\gamma_p,$$
and subsequently
$$\hat\sigma_w^2 = \hat\gamma(0) - \hat\phi'\hat\gamma_p.$$

### Other Estimation Methods

- Burg's method
- Method of moments (Example 3.27, p. 125)
- Least squares
- Conditional maximum likelihood
- Maximum likelihood

### Least Squares: Treating Autoregression as Regression

For the mean-zero AR($p$) model
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t,$$
we can regress $x_t$ on $(x_{t-1}, x_{t-2}, \dots, x_{t-p})$ by treating each $x_t$, for $t = p+1, \dots, n$, as a response with covariate vector $z_t = (x_{t-1}, x_{t-2}, \dots, x_{t-p})'$. Now we can perform least squares regression as usual.

### Maximum Likelihood: Simple MLE Example

Let $x_1, x_2, \dots, x_n \sim \text{iid } N(\mu, 1)$.

- MLE: maximum likelihood estimator
- UMVUE: uniformly minimum variance unbiased estimator
- admissible: can't be beaten uniformly

Maximize $L(\mu) = L(\mu \mid x) = f_\mu(x)$, where
$$f_\mu(x) = \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(x_i-\mu)^2} = (2\pi)^{-n/2}e^{-\frac{1}{2}\sum_{i=1}^{n}(x_i-\mu)^2}.$$
Maximizing $f_\mu(x)$ with respect to $\mu$ amounts to minimizing
$$g(\mu) = \sum_{i=1}^{n}(x_i-\mu)^2.$$
To minimize $g(\mu)$, set its derivative equal to zero, i.e.
$$g'(\mu) = -2\sum_{i=1}^{n}(x_i-\mu) = 0.$$
Therefore $\mu = \bar x$ minimizes $g(\mu)$, and thus maximizes the likelihood $L(\mu)$. Hence the maximum likelihood estimator of $\mu$ is $\bar x$. The key assumption in this derivation: independence.

### MLE for AR(1)

For the AR(1) process $x_t = \phi x_{t-1} + w_t$, where $w_t \sim \text{iid } N(0, \sigma_w^2)$, we have the likelihood
$$L = \prod_{t=1}^{n} f(x_t \mid x_{t-1}, x_{t-2}, \dots, x_1) = f(x_1)\,f(x_2 \mid x_1)\cdots f(x_n \mid x_{n-1}).$$
Note that $x_t \mid x_{t-1} \sim N(\phi x_{t-1}, \sigma_w^2)$, so that $f(x_t \mid x_{t-1}) = f_w(x_t - \phi x_{t-1})$, where $f_w$ is the pdf of $N(0, \sigma_w^2)$. Hence
$$L(\phi, \sigma_w^2) = f(x_1)\prod_{t=2}^{n} f_w(x_t - \phi x_{t-1}).$$

### MLE for AR(1), cont.

From the causal representation of $x_1$,
$$x_1 = \sum_{j=0}^{\infty}\phi^j w_{1-j},$$
we see that $x_1$ is normally distributed with mean zero and variance
$$\operatorname{var}(x_1) = \operatorname{var}\Big(\sum_{j=0}^{\infty}\phi^j w_{1-j}\Big) = \sigma_w^2\sum_{j=0}^{\infty}\phi^{2j} = \frac{\sigma_w^2}{1-\phi^2}.$$
Therefore the likelihood can be written as
$$L(\phi, \sigma_w^2) = (2\pi\sigma_w^2)^{-n/2}(1-\phi^2)^{1/2}\exp\left(-\frac{S(\phi)}{2\sigma_w^2}\right),$$
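The Yule–Walker system above is easy to exercise numerically. A pure-Python illustrative sketch, not from the notes: the AR(2) parameters and sample size are invented, and the $2\times 2$ system $\hat\Gamma_2\hat\phi = \hat\gamma_2$ is solved by Cramer's rule.

```python
import random

# Simulate a causal AR(2): x_t = 0.5 x_{t-1} + 0.25 x_{t-2} + w_t, then
# recover (phi1, phi2, sigma_w^2) by the method of moments (Yule-Walker).
random.seed(7)
phi1, phi2, n = 0.5, 0.25, 20000
burn = [0.0, 0.0]
for _ in range(n + 200):                      # 200 burn-in steps
    burn.append(phi1 * burn[-1] + phi2 * burn[-2] + random.gauss(0, 1))
x = burn[202:]                                # keep the last n values

xbar = sum(x) / n
def gamma_hat(h):
    """Sample autocovariance at lag h (divisor n, as in the notes)."""
    return sum((x[t + h] - xbar) * (x[t] - xbar) for t in range(n - h)) / n

g0, g1, g2 = gamma_hat(0), gamma_hat(1), gamma_hat(2)

# Solve [[g0, g1], [g1, g0]] (phi1, phi2)' = (g1, g2)' by Cramer's rule.
det = g0 * g0 - g1 * g1
phi1_hat = (g1 * g0 - g2 * g1) / det
phi2_hat = (g2 * g0 - g1 * g1) / det
sigw2_hat = g0 - phi1_hat * g1 - phi2_hat * g2   # equation (2)
```

The estimates land close to the generating values $(0.5, 0.25, 1)$; for a general $p$ one would solve the full $p\times p$ system $\hat\Gamma_p\hat\phi = \hat\gamma_p$ instead.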
where
$$S(\phi) = (1-\phi^2)x_1^2 + \sum_{t=2}^{n}(x_t - \phi x_{t-1})^2.$$
$S(\phi)$ is called the *unconditional sum of squares*.

### MLE for AR(1), cont.

Take the partial derivative of $\log L(\phi, \sigma_w^2)$ with respect to $\sigma_w^2$ to see that $S(\phi)/n$ maximizes the likelihood. Therefore
$$\hat\sigma^2_{ML} = \frac{S(\hat\phi_{ML})}{n}.$$
After taking logs, we see that the MLE of $\phi$ is the minimizer of
$$\ell(\phi) = \log\big(S(\phi)/n\big) - n^{-1}\log(1-\phi^2).$$
Numerical optimization must be performed here; i.e., no closed-form solution for $\hat\phi_{ML}$ exists.

### Conditional MLE for AR(1)

The problem becomes much simpler if we condition on $x_1$; namely, the likelihood becomes
$$L(\phi, \sigma_w^2 \mid x_1) = f(x_2 \mid x_1)\cdots f(x_n \mid x_{n-1}) = (2\pi\sigma_w^2)^{-(n-1)/2}\exp\left(-\frac{S_c(\phi)}{2\sigma_w^2}\right),$$
where
$$S_c(\phi) = \sum_{t=2}^{n}(x_t - \phi x_{t-1})^2.$$
Minimizing $S_c(\phi)$ over $\phi$ yields the conditional MLE, which is also the least squares estimate.

### Next Time

- Asymptotics
- Bootstrap
- ARIMA introduction

## STA 6857 — Signal Extraction, Optimal Filtering, and Long Memory

[Figure: cover of *Long-Memory Time Series* by Wilfredo Palma (Wiley Series in Probability and Statistics), and a sample path of simulated fractional Gaussian noise with its slowly decaying sample ACF.]

Outline: (1) Signal Extraction and Optimal Filtering, (2) Long Memory ARMA and Fractional Differencing.

### Signal Extraction

Consider again the lagged regression model
$$y_t = \sum_{r=-\infty}^{\infty}\beta_r x_{t-r} + v_t.$$
But this time we *know* the coefficients $\beta_r$, and we want to estimate the input signal $x_t$; i.e., we seek the filter $a_r$ which optimizes the estimate
$$\hat x_t = \sum_{r=-\infty}^{\infty} a_r y_{t-r}.$$
We shall assume $x_t$ and $y_t$ are jointly stationary. We wish to minimize the mean square error
$$E\left[\left(x_t - \sum_{r=-\infty}^{\infty} a_r y_{t-r}\right)^2\right].$$

### Orthogonality Principle Revisited

Again we start with the normal equations
$$E\left[\left(x_t - \sum_{r=-\infty}^{\infty} a_r y_{t-r}\right)y_{t-s}\right] = 0, \qquad s = 0, \pm 1, \pm 2, \dots,$$
which lead to
$$\sum_{r=-\infty}^{\infty} a_r\,\gamma_y(s-r) = \gamma_{xy}(s).$$
Proceeding just as before, we deduce
$$A(\omega)\,f_y(\omega) = f_{xy}(\omega),$$
where $A(\omega)$ and $a_r$ are Fourier transform pairs. However, we don't know $x_t$, hence we don't know $\gamma_{xy}$.

### Signal-to-Noise Ratio (SNR)

Using the additional observation (exercise) that $f_{xy}(\omega) = f_x(\omega)$ leads to another formulation of $A(\omega)$:
$$A(\omega) = \frac{f_x(\omega)}{f_x(\omega) + f_v(\omega)}.$$
Although we don't know $f_x(\omega)$, the quantity $\mathrm{SNR}(\omega) = f_x(\omega)/f_v(\omega)$ is referred to as the signal-to-noise ratio, which can be estimated (cf. Chapter 7).
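Looping back to the conditional MLE for the AR(1) above: minimizing $S_c(\phi)$ gives the closed form $\hat\phi = \sum_t x_t x_{t-1} / \sum_t x_{t-1}^2$, which is easy to verify on simulated data. A pure-Python sketch, not from the notes; the generating values are invented.

```python
import random

# Conditional MLE / least squares for an AR(1):
#   phi_hat = sum x_t x_{t-1} / sum x_{t-1}^2,
# checked on a simulated series with phi = 0.8 and sigma_w^2 = 1.
random.seed(3)
phi, n = 0.8, 10000
x = [0.0]
for _ in range(n + 100):
    x.append(phi * x[-1] + random.gauss(0, 1))
x = x[101:]                                   # drop burn-in; len(x) == n

phi_hat = sum(x[t] * x[t - 1] for t in range(1, n)) / \
          sum(x[t - 1] ** 2 for t in range(1, n))

# Plug phi_hat back into S_c to estimate the innovation variance.
sigw2_hat = sum((x[t] - phi_hat * x[t - 1]) ** 2 for t in range(1, n)) / (n - 1)
```

Both estimates come out close to the true $(0.8, 1)$; the point of the slide is that no numerical optimizer is needed once $x_1$ is conditioned on.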
Therefore, given the SNR, the optimal filter $a_r$ can be estimated from the inverse Fourier transform of $A(\omega)$, where
$$A(\omega) = \frac{\mathrm{SNR}(\omega)}{1 + \mathrm{SNR}(\omega)}.$$

### Long Memory ARMA and Fractional Differencing

Long memory time series provide an intermediate compromise between the ARMA models and the nonstationary ARIMA models. Long memory processes occur in:

- hydrology (e.g., long-term storage capacity of reservoirs — Hurst, 1951)
- environmental series (e.g., varve data)
- econometrics (e.g., certain squared returns and interest rates)

### Why ARMA Won't Do

The causal ARMA($p,q$) model $\phi(B)x_t = \theta(B)w_t$, with the representation
$$x_t = \sum_{j=0}^{\infty}\psi_j w_{t-j},$$
has coefficients $\psi_j$ that decay exponentially fast. Hence the ACF $\rho(h)$ decays exponentially fast. However, there are many stationary time series that have slowly (i.e., non-exponentially) decaying autocovariance functions. Ordinarily, when we see this kind of ACF, we would first difference the time series. Not this time!

### Fractional Differencing

Levels of differencing:

- No differencing: $(1-B)^0 x_t$
- Full differencing: $(1-B)^1 x_t$
- Fractional differencing: $(1-B)^d x_t$, where $d$ may be between 0 and 1

Here $(1-B)^d$ is interpreted through its Taylor series expansion,
$$(1-B)^d = 1 - dB + \frac{d(d-1)}{2!}B^2 - \frac{d(d-1)(d-2)}{3!}B^3 + \cdots$$

### Stationary Fractionally Differenced Series

The fractionally differenced series $(1-B)^d x_t = w_t$ is stationary only when $|d| < 1/2$. For $d$ in this range, we write the Taylor series expansion of $w_t$ as
$$w_t = (1-B)^d x_t = \sum_{j=0}^{\infty}\pi_j B^j x_t = \sum_{j=0}^{\infty}\pi_j x_{t-j}, \qquad \pi_j = \frac{\Gamma(j-d)}{\Gamma(j+1)\,\Gamma(-d)},$$
and also
$$x_t = (1-B)^{-d} w_t = \sum_{j=0}^{\infty}\psi_j B^j w_t = \sum_{j=0}^{\infty}\psi_j w_{t-j}, \qquad \psi_j = \frac{\Gamma(j+d)}{\Gamma(j+1)\,\Gamma(d)}.$$

### ACF of a Fractionally Differenced Series

Using the causal representation of $x_t$, we have
$$\rho(h) = \frac{\Gamma(h+d)\,\Gamma(1-d)}{\Gamma(h-d+1)\,\Gamma(d)} \sim h^{2d-1} \quad \text{as } h \to \infty.$$
Therefore, for $0 < d < .5$,
$$\sum_{h=-\infty}^{\infty}|\rho(h)| = \infty.$$
This indicates $x_t$ has a long memory.

### Using Fractional Differencing

Consider the random walk model $x_t = x_{t-1} + w_t$, which is nonstationary but stationary after differencing. But maybe the time series on hand *looks like* a random walk, yet is actually stationary with a long memory. Then incorporating fractional differencing may be in order.
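The $\pi$-weights above have a convenient ratio form, so they can be computed without evaluating any Gamma functions directly. A pure-Python sketch, not from the notes; the value $d = 0.3$ is an arbitrary choice in the stationary range.

```python
import math

# pi-weights of the fractional difference (1 - B)^d:
#   pi_j = Gamma(j - d) / (Gamma(j + 1) Gamma(-d)),
# computed stably via the recursion pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j.
d = 0.3
pi_w = [1.0]
for j in range(1, 51):
    pi_w.append(pi_w[-1] * (j - 1 - d) / j)

# Sanity check against the Gamma formula for a few small j.
for j in range(1, 6):
    direct = math.gamma(j - d) / (math.gamma(j + 1) * math.gamma(-d))
    assert abs(pi_w[j] - direct) < 1e-12

# pi_1 = -d, and the weights decay hyperbolically (~ j^(-d-1)), not
# exponentially -- the signature of long memory.
assert abs(pi_w[1] + d) < 1e-12
```

The same recursion with $d$ replaced by $-d$ produces the $\psi$-weights of $(1-B)^{-d}$.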
### ARFIMA Models

**Definition (ARFIMA model).** The model
$$\phi(B)\,\nabla^d(x_t - \mu) = \theta(B)\,w_t,$$
where $d$ may be any real number (integer or not), defines the ARFIMA model, which is stationary for $-1/2 < d < 1/2$.

Two useful packages in R:

- `longmemo`
- `fracdiff`

Here's how you can simulate data from an ARFIMA(1, d, 1) model:

```r
library(longmemo)
arima.sim(100, model = list(ar = .9, ma = -.1),
          innov = simARMA0(150, H = ...),  # H value illegible in the source
          n.start = 50)
```

Here's a time series plot of a simple fractional Gaussian noise:

```r
plot(simFGN0(2000, .8), lwd = 3)
```

[Figure: the simulated fractional Gaussian noise and its slowly decaying sample ACF.]

Here's an example estimation of $d$ using the `fracdiff` package:

```r
library(fracdiff)
varve <- scan("mydata/varve.dat")
# Read 634 items
lvarve <- log(varve) - mean(log(varve))
varve.fd <- fracdiff(lvarve, nar = 0, nma = 0, M = 30)
varve.fd$d
# [1] 0.3841688
varve.fd$stderror.dpq
# [1] 4.589514e-06
```

## STA 6857 — State-Space Models and the Kalman Filter

Rudolf E. Kálmán is a control theorist, known worldwide for his linear filtering technique — the Kalman filter — which revolutionized the field of real-time estimation and is used in a huge range of applications. Honors include the IEEE Medal of Honor (1974), the Kyoto Prize (1985), and the Bellman Prize (1997).

### Basic Model

We can observe the signal vector $y_t$, but we want $x_t$ in the model
$$y_t = A_t x_t + v_t,$$
where the observation matrix $A_t$ is known (and may vary with time $t$), and $v_t$ is multivariate Gaussian white noise with covariance matrix $R$. Additionally, we assume the state vector $x_t$ dynamically changes in time via the VAR(1) equation
$$x_t = \Phi x_{t-1} + w_t,$$
where $w_t$ is multivariate Gaussian with covariance matrix $Q$.

### Simple Example

Suppose we observe $y_t = \mu_t + v_t$, where $v_t \sim N(0,1)$, and the state process $\mu_t$ is a random walk given by
$$\mu_t = \mu_{t-1} + w_t,$$
with $w_t \sim N(0,1)$ independent of $v_t$. We wish to determine the values of $\mu_t$ given $y_t$.

### Three Different Viewpoints

- **Prediction:** the one-step-ahead predictor $\mu_t^{t-1} = E[\mu_t \mid y_{1:t-1}]$
- **Filtering:** decoding the present state of the signal, $\mu_t^{t}$
- **Smoothing:** utilizing past *and future* observations to uncover the signal at a given time, $\mu_t^{n}$

[Figure: a simulated state with its prediction, filter, and smoother estimates.]
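The filtering recursions for the Simple Example above ($Q = R = 1$) fit in a few lines. A pure-Python sketch, not from the notes; a pleasant side check is that for this model the Kalman gain converges to $(\sqrt 5 - 1)/2 \approx 0.618$, the reciprocal golden ratio.

```python
import random, math

# Scalar Kalman filter for the local-level model of the Simple Example:
#   y_t = mu_t + v_t,   mu_t = mu_{t-1} + w_t,   Q = R = 1.
random.seed(11)
n, Q, R = 200, 1.0, 1.0
mu, y = [0.0], []
for _ in range(n):
    mu.append(mu[-1] + random.gauss(0, math.sqrt(Q)))
    y.append(mu[-1] + random.gauss(0, math.sqrt(R)))

m, P = 0.0, 10.0          # filtered mean and variance (diffuse-ish start)
gains, filt = [], []
for yt in y:
    m_pred, P_pred = m, P + Q                 # prediction step
    K = P_pred / (P_pred + R)                 # Kalman gain
    m = m_pred + K * (yt - m_pred)            # filtering (update) step
    P = (1 - K) * P_pred
    gains.append(K)
    filt.append(m)

# With Q = R = 1 the steady-state gain solves K = (P+1)/(P+2) with P^2 = P+1,
# i.e. K -> (sqrt(5) - 1) / 2.
assert abs(gains[-1] - (math.sqrt(5) - 1) / 2) < 1e-9
```

Smoothing would add a backward pass over `filt`; prediction is just the `m_pred` sequence already computed inside the loop.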
### Slightly More General

Consider the model $y_t = x_t + v_t$, where $v_t \sim \text{iid } N(0, \sigma_v^2)$, and $x_t = \phi x_{t-1} + w_t$ with $w_t \sim \text{iid } N(0, \sigma_w^2)$ and $\phi$ unknown.

```
       estimate     stderr
phi   0.8137623 0.08060636
sigw  0.8507863 0.17528895
sigv  0.8743968 0.14293192
```

### Global Temperatures

We have two different sets of measurements of the global temperatures: land-based observations $y_{t1}$ and marine-based observations $y_{t2}$. If we assume the global temperature follows a random walk, then we have the following state-space model:
$$\begin{pmatrix} y_{t1}\\ y_{t2}\end{pmatrix} = \begin{pmatrix}1\\ 1\end{pmatrix}x_t + \begin{pmatrix} v_{t1}\\ v_{t2}\end{pmatrix}, \qquad x_t = x_{t-1} + w_t,$$
with covariance matrices $Q = \operatorname{var}(w_t)$ and $R = \operatorname{var}(v_t) = \{r_{ij}\}_{i,j=1}^{2}$.

```
       estimate     stderr
sigw 0.04991771 0.01772117
cR11 0.13834951 0.01468864
cR22 0.05793300 0.01009874
cR12 0.04611089 0.01324303
```

[Figure: the two global temperature series with the filtered random-walk state.]

### Structural Model

Trend and seasonal components are incorporated into the state-space model under the class of structural models. Consider the model for the Johnson & Johnson quarterly earnings,
$$y_t = T_t + S_t + v_t,$$
where $T_t$ is the trend and $S_t$ is the seasonal component. Allowing $T_t$ to increase exponentially suggests the model
$$T_t = \phi T_{t-1} + w_{t1}$$
for some coefficient $\phi > 1$, and $S_t$ can be modeled additively as
$$S_t + S_{t-1} + S_{t-2} + S_{t-3} = w_{t2}.$$

### Results

```
        estimate     stderr
phi    1.0363707 0.02116015
sigw1  0.0766130 0.40961114
sigw2  0.1229859 0.41235861
sigv   0.7212800 0.63531136
```

[Figures: the estimated trend component, and the data with the fitted trend-plus-season line, 1960–1980.]

## STA 6857 — Spectral Analysis

[Figure: example series and their periodograms — earthquake records (Northridge, Denali, Peru, Mariana Islands, Sumatra) and a noise series.]

Outline: (1) Spectral Density, (2) Periodogram.

### Spectral Approximation

Any stationary time series has the approximation
$$x_t \approx \sum_{k=1}^{q}\big[U_k\cos(2\pi\omega_k t) + V_k\sin(2\pi\omega_k t)\big],$$
where $U_k$ and $V_k$ are independent zero-mean random variables with variances $\sigma_k^2$ at distinct frequencies $\omega_k$. The autocovariance function of $x_t$ is (exercise)
$$\gamma(h) = \sum_{k=1}^{q}\sigma_k^2\cos(2\pi\omega_k h).$$
In particular,
$$\gamma(0) = \sum_{k=1}^{q}\sigma_k^2 = \operatorname{var}(x_t),$$
a sum of variances. Note that $\gamma(h)$ is *not* absolutely summable, i.e.
$$\sum_{h=-\infty}^{\infty}|\gamma(h)| = \infty.$$

### Motivating Example

Let's consider the stationary time series
$$x_t = U\cos(2\pi\omega_0 t) + V\sin(2\pi\omega_0 t),$$
where $U, V$ are independent zero-mean random variables with equal variances $\sigma^2$.
We shall assume $\omega_0 > 0$, and from the Nyquist rate we can further assume $\omega_0 < 1/2$. The "period" of $x_t$ is $1/\omega_0$; i.e., the process makes $\omega_0$ cycles going from $x_t$ to $x_{t+1}$. From the formula
$$\cos(\alpha) = \frac{e^{i\alpha} + e^{-i\alpha}}{2},$$
we have
$$\gamma(h) = \sigma^2\cos(2\pi\omega_0 h) = \frac{\sigma^2}{2}e^{-2\pi i\omega_0 h} + \frac{\sigma^2}{2}e^{2\pi i\omega_0 h}.$$

### Riemann–Stieltjes Integration

Riemann–Stieltjes integration enables one to integrate with respect to a general nondecreasing function.

- Usual integration: $\int g(x)\,dx$
- General situation: $\int g(x)\,dF(x)$

Case of interest: $F(x)$ is a step function, i.e. a function with a finite number of jump discontinuities. Such an $F(x)$ has the representation
$$F(x) = \sum_{i=1}^{n}\alpha_i\,\mathbf{1}(x_i \le x),$$
so $F(x)$ has jumps at the $x_i$ with heights $\alpha_i$. Given such an $F(x)$,
$$\int g(x)\,dF(x) = \sum_{i=1}^{n}\alpha_i\,g(x_i).$$
The simplest step function is the Heaviside step function, given by $H(x) = \mathbf{1}(x > 0)$.

### Back to Our Motivating Example

We can now write the autocovariance function of $x_t = U\cos(2\pi\omega_0 t) + V\sin(2\pi\omega_0 t)$ as
$$\gamma(h) = \frac{\sigma^2}{2}e^{-2\pi i\omega_0 h} + \frac{\sigma^2}{2}e^{2\pi i\omega_0 h} = \int_{-1/2}^{1/2} e^{2\pi i\omega h}\,dF(\omega),$$
where
$$F(\omega) = \begin{cases}0, & \omega < -\omega_0\\ \sigma^2/2, & -\omega_0 \le \omega < \omega_0\\ \sigma^2, & \omega \ge \omega_0.\end{cases}$$
$F(\omega)$ jumps by $\sigma^2/2$ at $-\omega_0$ and at $+\omega_0$.

### General Theorem

The autocovariance function for any stationary time series has the representation
$$\gamma(h) = \int_{-1/2}^{1/2} e^{2\pi i\omega h}\,dF(\omega)$$
for a unique monotonically increasing function $F(\omega)$ such that
$$F(-\infty) = F(-1/2) = 0 \qquad\text{and}\qquad F(\infty) = F(1/2) = \gamma(0) = \sigma_x^2.$$
The function $F(\omega)$ behaves like a CDF for a discrete random variable, except that $F(\infty) = \gamma(0)$ rather than 1. More on this later.

### Approximating Step Functions

Any step function can easily be approximated by an absolutely continuous function. For an absolutely continuous spectral distribution function, a spectral density exists.

### Spectral Density of an ARMA(p, q) Process

Let $x_t$ be an ARMA($p,q$) process (not necessarily causal or invertible) satisfying $\phi(B)x_t = \theta(B)w_t$, where $w_t \sim \mathrm{WN}(0, \sigma_w^2)$, $\theta$ and $\phi$ have no common zeros, and $\phi$ has no zeros on the unit circle. Then $x_t$ has the spectral density
$$f(\omega) = \sigma_w^2\,\frac{\big|\theta(e^{-2\pi i\omega})\big|^2}{\big|\phi(e^{-2\pi i\omega})\big|^2}, \qquad -1/2 \le \omega \le 1/2.$$

- Many people define the spectral density on $-\pi$ to $\pi$, but this is just a matter of scaling.
- The spectral density of white noise is constant, equal to $\sigma_w^2$.
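The ARMA spectral density formula can be sanity-checked against the spectral representation: integrating $f(\omega)$ over $[-1/2, 1/2]$ must return $\gamma(0)$. A pure-Python sketch, not from the notes, for a causal AR(1) (parameters invented).

```python
import cmath

# Spectral density of a causal AR(1): f(w) = sigma^2 / |1 - phi e^{-2 pi i w}|^2.
# Integrating f over [-1/2, 1/2] should recover gamma(0) = sigma^2 / (1 - phi^2).
phi, sig2 = 0.5, 1.0

def f(w):
    return sig2 / abs(1 - phi * cmath.exp(-2j * cmath.pi * w)) ** 2

# Midpoint-rule integral over [-1/2, 1/2] (very accurate for this smooth,
# periodic integrand).
N = 100000
integral = sum(f(-0.5 + (k + 0.5) / N) for k in range(N)) / N

gamma0 = sig2 / (1 - phi ** 2)          # = 4/3 here
assert abs(integral - gamma0) < 1e-6
```

The same check works for any ARMA density: replace `f` by $\sigma_w^2|\theta(e^{-2\pi i\omega})|^2/|\phi(e^{-2\pi i\omega})|^2$ and compare against the corresponding $\gamma(0)$.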
### Existence of the Spectral Density

Suppose the autocovariance function of a stationary process satisfies
$$\sum_{h=-\infty}^{\infty}|\gamma(h)| < \infty.$$
Then the autocovariance function has the representation
$$\gamma(h) = \int_{-1/2}^{1/2} e^{2\pi i\omega h} f(\omega)\,d\omega, \qquad h = 0, \pm 1, \pm 2, \dots,$$
where $f(\omega)$ is the spectral density. Similarly,
$$f(\omega) = \sum_{h=-\infty}^{\infty}\gamma(h)\,e^{-2\pi i\omega h}, \qquad -1/2 \le \omega \le 1/2.$$
Note: absolute summability of $\gamma$ is sufficient, but not necessary, to guarantee a spectral density. The functions $\gamma(h)$ and $f(\omega)$ form a Fourier transform pair.

[Figure: example spectral densities of an MA(2) and an AR(2) process.]

### The Periodogram: Discrete Fourier Transform

**Definition.** Given data $x_1, \dots, x_n$, the discrete Fourier transform (DFT) is defined to be
$$d(\omega_j) = n^{-1/2}\sum_{t=1}^{n} x_t\,e^{-2\pi i\omega_j t}, \qquad j = 0, 1, \dots, n-1,$$
where the $\omega_j = j/n$ are the *Fourier frequencies*.

**Definition.** The periodogram is defined at the Fourier frequencies to be $I(\omega_j)$, where
$$I(\omega_j) = |d(\omega_j)|^2.$$
Amazingly, via some mathematical calculations, we have the identity (for $j \ne 0$)
$$I(\omega_j) = \sum_{h=-(n-1)}^{n-1}\hat\gamma(h)\,e^{-2\pi i\omega_j h}.$$
The discrete Fourier transform is a one-to-one transform of the data; the original data can be recovered from
$$x_t = n^{-1/2}\sum_{j=0}^{n-1} d(\omega_j)\,e^{2\pi i\omega_j t}.$$

### Periodogram as an Analysis of Variance

The cosine and sine transforms are defined as
$$d_c(\omega_j) = n^{-1/2}\sum_{t=1}^{n} x_t\cos(2\pi\omega_j t), \qquad d_s(\omega_j) = n^{-1/2}\sum_{t=1}^{n} x_t\sin(2\pi\omega_j t).$$
Then the total variation can be written (for $n$ odd) as
$$\sum_{t=1}^{n}(x_t - \bar x)^2 = 2\sum_{j=1}^{(n-1)/2}\big[d_c^2(\omega_j) + d_s^2(\omega_j)\big] = 2\sum_{j=1}^{(n-1)/2} I(\omega_j),$$
so each Fourier frequency accounts for a two-degree-of-freedom component of the variance.

### Calculation in R

```r
x  <- c(1, 2, 3, 2, 1)
c1 <- cos(2*pi*1:5*1/5); s1 <- sin(2*pi*1:5*1/5)
c2 <- cos(2*pi*1:5*2/5); s2 <- sin(2*pi*1:5*2/5)
omega1 <- cbind(c1, s1); omega2 <- cbind(c2, s2)
anova(lm(x ~ omega1 + omega2))   # ANOVA table
abs(fft(x))^2/5                  # the periodogram (as a check)
```

```
Analysis of Variance Table

Response: x
          Df  Sum Sq Mean Sq
omega1     2 2.74164 1.37082
omega2     2 0.05836 0.02918
Residuals  0 0.00000

> abs(fft(x))^2/5
[1] 16.20000000  1.37082039  0.02917961  0.02917961  1.37082039
```
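The "amazing" identity above can be verified directly on the same toy series used in the R example. A pure-Python sketch, not from the notes, comparing the periodogram computed from the DFT with the Fourier transform of the sample autocovariances.

```python
import cmath

# Check: for j != 0, I(w_j) = |d(w_j)|^2 equals
#   sum_{|h| < n} gamma_hat(h) e^{-2 pi i w_j h}.
x = [1.0, 2.0, 3.0, 2.0, 1.0]
n = len(x)
xbar = sum(x) / n

def d(j):
    """DFT at the Fourier frequency w_j = j/n, scaled by n^(-1/2)."""
    return sum(xt * cmath.exp(-2j * cmath.pi * j * (t + 1) / n)
               for t, xt in enumerate(x)) / n ** 0.5

def gamma_hat(h):
    h = abs(h)
    return sum((x[t + h] - xbar) * (x[t] - xbar) for t in range(n - h)) / n

for j in (1, 2):
    I_dft = abs(d(j)) ** 2
    I_acf = sum(gamma_hat(h) * cmath.exp(-2j * cmath.pi * j * h / n)
                for h in range(-(n - 1), n)).real
    assert abs(I_dft - I_acf) < 1e-10

print(round(abs(d(1)) ** 2, 8))   # 1.37082039, matching the R output
```

The agreement is exact (up to rounding) because, for $j \ne 0$, subtracting $\bar x$ does not change $d(\omega_j)$, and expanding $|d(\omega_j)|^2$ regroups the cross-products into the lagged sums that define $\hat\gamma(h)$.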
### Limitations of the Periodogram

From its structure,
$$I(\omega_j) = \sum_{h=-(n-1)}^{n-1}\hat\gamma(h)\,e^{-2\pi i\omega_j h},$$
the periodogram looks like a potential estimate of the spectral density
$$f(\omega) = \sum_{h=-\infty}^{\infty}\gamma(h)\,e^{-2\pi i\omega h}.$$
But there's a problem:
$$\frac{2\,I(\omega_j)}{f(\omega_j)} \xrightarrow{d} \chi_2^2.$$
What we would like is a limit to a constant on the right-hand side of the above equation — i.e., the periodogram is not a consistent estimator. Pros, cons, and fixes to the periodogram will be discussed next time.

## STA 6857 — Nonparametric Spectral Estimation (§4.5)

[Table, reproduced from the *Journal of the American Statistical Association*: "A Comparison of Lag Window Generators" — Bartlett, Fejér, Daniell, Parzen (1–4), Tukey "hamming", Tukey "hanning", Tukey–Parzen, normal, and Bohman windows.]

Outline: (1) From Last Time, (2) Nonparametric Spectral Estimation, (3) Sunspot Numbers.

### From Last Time: The Periodogram

Given data $x_1, \dots, x_n$, the discrete Fourier transform is
$$d(\omega_j) = n^{-1/2}\sum_{t=1}^{n} x_t\,e^{-2\pi i\omega_j t}, \qquad j = 0, 1, \dots, n-1,$$
where the $\omega_j = j/n$ are the Fourier frequencies; the periodogram is $I(\omega_j) = |d(\omega_j)|^2$, with the identity $I(\omega_j) = \sum_{|h|<n}\hat\gamma(h)\,e^{-2\pi i\omega_j h}$ for $j \ne 0$.

### Log Periodogram

- The log function is the variance-stabilizing function of the periodogram: the variance of the periodogram at $\omega$ is proportional to $f^2(\omega)$.
- So the log periodogram is frequently presented. (Potential project: a log-periodogram regression estimate of a long-memory parameter.)

[Figures: raw periodograms of the SOI and recruitment series, on both raw and log scales.]
### Asymptotic Properties of the Periodogram

Under general conditions on the time series:

- **Bias:** $\operatorname{bias}(I(\omega_j)) = E[I(\omega_j)] - f(\omega_j) \to 0$ — the bias is very small.
- **Variance:** $\operatorname{var}(I(\omega_j)) \approx f^2(\omega_j)$, which does not shrink with $n$ — the variance is very large.

Let's strike a compromise: increase the bias in exchange for a decrease in the variance.

### Averaging the Periodogram

The periodogram estimates at different Fourier frequencies are approximately independent. So averaging neighboring estimates is the key to improving the estimate of the spectral density.

### A Closer Look at the Periodogram

This is where we are:
$$I(\omega_j) = \sum_{h=-(n-1)}^{n-1}\hat\gamma(h)\,e^{-2\pi i\omega_j h}.$$
This is where we want to be:
$$f(\omega) = \sum_{h=-\infty}^{\infty}\gamma(h)\,e^{-2\pi i\omega h}.$$
One way of looking at the problem: $\hat\gamma(h)$ is no good for values of $h$ close to $n$.

### The Solution

Reduce the influence of $\hat\gamma(h)$ at extreme values of $h$. Consider the following estimator:
$$\hat f(\omega) = \sum_{h=-(n-1)}^{n-1}\lambda(h)\,\hat\gamma(h)\,e^{-2\pi i\omega h},$$
where the lag window $\lambda(h)$ starts out at 1 when $h \approx 0$ but then decreases as $|h|$ increases.

[Table: examples of lag windows — Bartlett, Fejér, Daniell, Parzen, Tukey "hamming"/"hanning", Tukey–Parzen, normal, and Bohman.]
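The lag-window estimator above is a one-liner once the sample autocovariances are in hand. A pure-Python sketch, not from the notes, using the Bartlett window $\lambda(h) = 1 - |h|/r$ on simulated white noise, whose true spectral density is the constant $\sigma^2 = 1$.

```python
import random, math

# Lag-window spectral estimate with a Bartlett window:
#   f_hat(w) = sum_{|h| <= r} (1 - |h|/r) gamma_hat(h) e^{-2 pi i w h}.
# For white noise the estimate should hover near sigma^2 = 1 at every w.
random.seed(5)
n, r = 4000, 20
x = [random.gauss(0, 1) for _ in range(n)]
xbar = sum(x) / n

def gamma_hat(h):
    h = abs(h)
    return sum((x[t + h] - xbar) * (x[t] - xbar) for t in range(n - h)) / n

def f_hat(w):
    total = 0.0
    for h in range(-r, r + 1):
        lam = 1 - abs(h) / r                 # Bartlett lag window
        total += lam * gamma_hat(h) * math.cos(2 * math.pi * w * h)
    return total          # sine terms cancel because gamma_hat is even

estimates = [f_hat(w) for w in (0.1, 0.25, 0.4)]
```

Raising `r` reduces the bias of the estimate but increases its variance; the raw periodogram is the extreme case $\lambda(h) \equiv 1$.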
### Sunspot Numbers

Sir Franz Arthur Friedrich Schuster, FRS (1851–1934), is credited with the formulation of the periodogram.

- Arthur Schuster, "On Lunar and Solar Periodicities of Earthquakes," *Proceedings of the Royal Society of London*, Vol. 61 (1897), pp. 455–465. (Available online.)

[Figure: observed yearly mean Wolfer sunspot numbers, 1700 onward.]

### Smoothing the Sunspot Periodogram

> "Mr. A. Schuster of Owens College has ingeniously pointed out that the periods of good vintage in Western Europe have occurred at intervals somewhat approximating to eleven years, the average length of the principal sunspot cycle." — William Stanley Jevons
## STA 6857 — Multivariate Time Series Modeling

Outline: (1) Multivariate Time Series Modeling, (2) VAR, (3) VARIMA/VARMAX.

### Examples

We are considering multiple observations taken simultaneously, e.g.:

- Meteorology: temperature, air pressure, rainfall
- Economics: retail price index, gross domestic product, unemployment level

### Advantages and Disadvantages of Multivariate Modeling

Advantages:

1. more choices of models
2. better understanding of the system

Disadvantages:

1. harder to find the right model
2. doesn't necessarily provide better forecasts (parameter estimation, misidentification error)

Parsimonious model vs. complete model.

### Difficulty in Economic Model Building

1. Feedback is not well-controlled like in certain physical systems (such as a chemical reactor), so experiments cannot be conducted to test the system.
2. The economy has a complex nonlinear structure, which can change in time and suffer from limited data sets.
3. Using the wrong model is more serious than poor parameter estimation.

[Figure: open- and closed-loop systems, inputs vs. outputs.]

### Why VAR?

- Allows for the more dynamic closed-system model.
- A single input/output model is not always general enough.
- Two or more variables may arise on an equal footing.

### Vector Autoregressive Time Series

We start with the vector time series with the given data structure
$$x_t = (x_{1t}, x_{2t}, \dots, x_{mt})'.$$
For simplicity we will focus on the case $m = 2$. The VAR(1) model of dimension $m = 2$ is described by the following system of linear equations:
$$\begin{aligned}
x_{1t} &= \phi_{11}x_{1,t-1} + \phi_{12}x_{2,t-1} + \varepsilon_{1t}\\
x_{2t} &= \phi_{21}x_{1,t-1} + \phi_{22}x_{2,t-1} + \varepsilon_{2t}.
\end{aligned}$$
For $m = 2$,

$$\Phi_1 = \begin{pmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{pmatrix}.$$

More generally, we can write down an $m$-dimensional VAR(p) very compactly as

$$\Phi(B)x_t = \varepsilon_t,$$

where $\Phi(B) = I - \Phi_1 B - \Phi_2 B^2 - \cdots - \Phi_p B^p$ and the $\Phi_j$ are $m \times m$ matrices of parameters.

The multivariate white noise process $\varepsilon_t$ is assumed to satisfy

$$\Gamma(0) = \operatorname{Cov}(\varepsilon_t, \varepsilon_t) = \Gamma_0, \qquad \operatorname{Cov}(\varepsilon_t, \varepsilon_{t+j}) = 0_m \quad (j \neq 0),$$

where $\Gamma_0$ is any covariance matrix and $0_m$ denotes the $m \times m$ matrix of zeros. This assumption allows for covariance between $\varepsilon_{it}$ and $\varepsilon_{jt}$, but $\varepsilon_{it}$ must be uncorrelated with $\varepsilon_{j,t+k}$ for $k \neq 0$.

### Stationarity of VAR Models

Similar to the AR(p) model, the VAR(p) model is stationary if the roots of $\det \Phi(z) = 0$ all lie outside the unit disk.

## VARIMA/VARMAX

### VARIMA

The natural generalization of the ARIMA model is the VARIMA model, compactly written as

$$\Phi(B)(1 - B)^d x_t = \Theta(B)\varepsilon_t.$$

One should consider cointegration of the time series before applying multiple differences.

### VARMAX

Additionally, exogenous variables $u_t$ may be added to the ARMA model, producing an ARMAX model with representation

$$x_t = \Gamma u_t + \sum_{j=1}^{p} \Phi_j x_{t-j} + \sum_{k=1}^{q} \Theta_k w_{t-k} + w_t,$$

where $\Gamma$ is an $m \times r$ parameter matrix.
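The stationarity condition above — all roots of $\det \Phi(z) = 0$ outside the unit disk — is equivalent to every eigenvalue of the $mp \times mp$ companion matrix lying strictly inside the unit circle, which is easy to check numerically. A sketch in Python; the helper name and the example coefficient matrices are made up for illustration:

```python
import numpy as np

def is_stationary_var(phi_mats):
    """Check VAR(p) stationarity via companion-matrix eigenvalues.

    phi_mats is the list [Phi_1, ..., Phi_p] of m x m coefficient
    matrices.  The roots of det(Phi(z)) = 0 all lie outside the unit
    disk exactly when every eigenvalue of the mp x mp companion
    matrix lies strictly inside the unit circle.
    """
    p = len(phi_mats)
    m = phi_mats[0].shape[0]
    companion = np.zeros((m * p, m * p))
    companion[:m, :] = np.hstack(phi_mats)        # top block row: Phi_1 ... Phi_p
    if p > 1:
        companion[m:, :-m] = np.eye(m * (p - 1))  # identity shift blocks
    return bool(np.all(np.abs(np.linalg.eigvals(companion)) < 1))

# A stationary VAR(1): eigenvalues of Phi_1 are 0.6 and 0.3.
print(is_stationary_var([np.array([[0.5, 0.1],
                                   [0.2, 0.4]])]))  # True

# A non-stationary case: Phi_1 = I is a bivariate random walk.
print(is_stationary_var([np.eye(2)]))               # False
```

The same check covers higher orders: for a VAR(2), pass `[Phi1, Phi2]` and the function stacks them into the standard companion form of the VAR(1) representation of the process.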
