New User Special Price Expires in

Let's log you in.

Sign in with Facebook


Don't have a StudySoup account? Create one here!


Create a StudySoup account

Be part of our community, it's free to join!

Sign up with Facebook


Create your account
By creating an account you agree to StudySoup's terms and conditions and privacy policy

Already have a StudySoup account? Login here

Macro Theory

by: Madie Schinner

Macro Theory ECN 200E

Madie Schinner
GPA 3.57

Kevin Salyer

Almost Ready


These notes were just uploaded, and will be ready to view shortly.

Purchase these notes here, or revisit this page.

Either way, we'll remind you when they're ready :)

Preview These Notes for FREE

Get a free preview of these Notes, just enter your email below.

Unlock Preview
Unlock Preview

Preview these materials now for free

Why put in your email? Get access to more of this material and other relevant free materials for your school

View Preview

About this Document

Kevin Salyer
Class Notes
25 ?




Popular in Course

Popular in Economcs

This 35 page Class Notes was uploaded by Madie Schinner on Tuesday September 8, 2015. The Class Notes belongs to ECN 200E at University of California - Davis taught by Kevin Salyer in Fall. Since its upload, it has received 60 views. For similar materials see /class/191858/ecn-200e-university-of-california-davis in Economcs at University of California - Davis.


Reviews for Macro Theory


Report this Material


What is Karma?


Karma is the currency of StudySoup.

You can buy or earn more Karma at anytime and redeem it for class notes, study guides, flashcards, and more!

Date Created: 09/08/15
Professor Salyer Economics 200E Spring 2002 Convergence of the value function an analytical solution We start with the maximization problem defined by the optimal growth problem with 100 depreciation That is max B Zlnct st k c km 20 Expressing this as a dynamic programming problem where the current capital stock is the state variable results in and letting 1 denote the Lagrange multiplier associated with the resource constraint vkt maxln C Bvk 21 1kza ct k21 To underscore the fact that the current period is irrelevant that is the only real decision is consumption today vs saving for tomorrow let the current state variable be denoted k and the choice variables ie policy variables be denoted c for consumption and y for end of period capital stock Then the basic dp problem becomes P vk lr maxln c Bvy Mk c y As was discussed in class we typically do not solve for the value function Instead we rely on the envelope theorem in order to express the derivative of the value function in terms of the derivatives of the functions that described tastes and technology However with these functional forms it is possible to solve for the value function analytically This is done by choosing an initial guess for the value function and then using the operator on de ned by P to generate a new value function This iteration is continued until the value function converges In this analytic framework the fixed point is a function consequently as we generate the sequence of value functions we will have to apply our insight in order to determine the functional form that the value function is converging to Step 1 With the initial value function defined to be zero ie v0 0 the d p problem is v1k l maxlnc Bvo maxlnc st k cy The solution is c k Substituting this into the RHS generates the new value function v1k 06 In k Step 2 The new value function is defined by v2 k lr maxlnc Bv1 maxln c 043 In y subject to the resource constraint Solving this yields Professor Salyer Economics 200E Spring 1999 1 a c ka 39y 39Bka 1 a 1 a As before substituting these optimal choices into the RHS of the dp problem generates the new value function 1 n a n v2 k ln 1a a l 1a a1 05ml k k V v lnk That is the new value function is linear in In k The trick is determining what the two parameters of the function are Both are determined as the limit of geometric sequences This can be seen by continuing with the process or more compactly V2 Step 3 v3k maXlnc 4ln a lf39i ala lny subject to the resource constraint The solution is 1 a a 1a Wk y W Using these in the RHS of the dp problem yields 1 2 a 1 v3 k ln 1 068 a ln mj ln mj a 1a lnala a2 21nk v3k v v311nk It 1s fairly clear that the coefflclent on the log of cap1tal 1s converg1ng to The constant 06 term is not as transparent To see where it is going another iteration produces the following value function Professor Salyer Economics 200E Spring 1999 Step 4 1 2 1 V4kmn1a az 2 ln 1a j 2 2 a 1oc ocz 2 aB1a a lnW 1 J ln 1ocBocZ 2oc3 3 B06B106Bln a398la398 1oc ocz 2 2 2 2 3 3 a ln1a ala a 05 B1nk The constant term contains the two sequences a and bf defined as j1 l a 2x and x ln Ell 1a a2 zocf 1quot H 172 17272 jrzrz b Z yt and y063106B06 HB Hlna 1a quot39a 398 J 0 1 a af quot 06 Note that lim x2 lnl 06B and lim y 1 ln 063 Then the limits ofthe a1 and 1 jaw a bf sequences are 1 1 a 7 a lima M and lim b l 1 39B In 063 Putting this all together produces the jam 1 B jam 1 053 limiting value function 71 n a39B n a n vk 1 I 1 a 1a l a 1a l k Lecture Notes on Dynamic Programming Economics 200E Professor Bergin Spring 1998 Adapted from lecture notes of Kevin Salyer and from Stokey Lucas and Prescott 1989 Outline 1 A Typical Problem 2 A Deterministic Finite Horizon Problem 21 Finding necessary conditions 22 A special case 23 Recursive solution 3 A Deterministic Infinite Horizon Problem 31 Recursive formulation 32 Envelope theorem 33 A special case 34 An analytical solution 35 Solution by conjecture 36 Solution by iteration 4 A Stochastic Problem 41 Introducing uncertainty 42 Our special case again 43 Finding distributions 1 A Typical Problem Consider the problem of optimal growth CassKoopmans Model Recall that in the Solow model the saving rate is imposed and there is no representation of preferences The optimal growth model adds preferences for households and derives an optimal saving rate Utility is maximized for the representative agent given the technology that they re faced with The social planner s problem may be described as follows Preferences are summarized in the utility function Uc t 0 700 Utility is assumed to be timeseparable that is marginal utility of consumption today depends on today s consumption only When households evaluate utility in the future they discount it by a constant factor 5 lt 17 assuming consumption in the future is not valued as much as consumption today The objective is to maximize the present discounted value of future utility 00 2 EU 0 20 Consider the technology Output is produced using capital as the input yr fUCt We could also include labor as a factor Note that we will use k to represent capital available at the beginning of period it this is capital that was accumulated in the previous period t 7 1 Capital accumulation takes place through investment and saving The law of motion for the capital stock is 91 k 1 i 6 6 Where i is the amount of investment expenditure in a period toward building up the capital stock for the following period Depreciation is represented by 5 and is the faction of existing capital stock that decays away each period We will initially assume there is 100 depreciation Investment expenditure and consumption expenditure must both come from current produc tion so the resource constraint for the social planner is C it f kt Combining this with the law of motion we will be using as our budget constraint the following 0 km f kt With complete depreciation the capital stock available in a period is derived completely from saving in the previous period So it is convenient to regard to the variable curl simply as the level of saving in period t The social planner problem may be written U or 1 max quotquotk39 39 1K 20 St 0 10 f kt The social planner chooses consumption and saving in each period to maximize utility of the representative household The solution is a sequence of variables 0 and 91 for all time periods t 07 00 Finding an infinite sequence of variables is a big task But fortunately this problem has a recursive structure the problem facing the social planner in each period is the same as that he faced last period or next period We will characterize the solution by a function called a policy rule which tells what the optimal choice is as a function of the current state of the economy In this case we will find a rule for choosing c and curl as a function of 16 which applies in each and every period 2 A Deterministic Finite Horizon Problem 21 Finding necessary conditions To develop some intuition for the recursive nature of the problem it is useful rst to consider a version of the problem for a finite horizon Assume you die in a terminal period T We will then consider using as a solution for the infinite horizon problem the solution we found for the finite horizon problem when we take a limiting case as T gt 00 The problem now may be written where we substitute the constraint into the objective function and thereby eliminate the consumption as a variable to choose 7 klfralgxmwo tUifUCt km Look at a section of the sum which pertains to a generic period 5 Uf kt kt1 HHUU km km This includes all the appearances for 19 1 Take a derivative with respect to 19 B U f kt k171 HU39 f km 7 1m f km 0 fort lt T or cancelling extra 3 U f kt kud 5Ufkt1 k2 f 16 fort lt T This is a necessary rst order condition that applies to all periods prior to the terminal pe riod It describes the nature of the intertemporal decision I am making about whether I should consume or save It says that I will raise consumption until the point where if I raise con sumption one more unit today the gain in utility no longer exceeds the loss in utility tomorrow because there is less capital and hence less output and consumption tomorrow But this condition does not apply to the terminal period In the last period of your life you will not save for the future but instead consume all that you produce k39r1 0 This is regarded as a boundary condition for the problem 22 A special case This problem can be solved analytically for a specific case with particular functional forms Consider a log utility U 0 1n 0 And a CobbDouglas production function f kt k The first order condition above then becomes 1 1 aka71 k km kl 02 m This is a secondorder difference equation which is difficult to solve We need to make it into a fistorder difference equation by using a change in variable kfl a k Define 2 savings rate at time t This turns out to be useful here because the utility function here implies a constant saving rate regardless of the level of income The first order condition then can be written 1 1 1 aka71 k lt1 72 km 1 Zw 1 which may be simplified z 065 1 1 i Z 1 Zt1 Zt1 1 065 Z This is a relation between tomorrow s savings rate and today s savings rate It is not a solution in itself because it is a relation between optimal choices in different periods not a rule telling use the optimal choice as a function of the current state of the economy Graph this relationship as a convex curve on a set of axes marked 2 and zur 1 Graph also a 45degree line showing where 2 2H 1 There are two points where the lines intersect If the economy finds its way to one of these points it will stay there One such point is where the saving rate is 10 This is implausible since it says all output is saved and none is consumed The other point where the saving rate is 045 a fraction of income less than one Verify that z 045 satisfies the condition above Recall that the problem is constrained by the boundary condition that saving is zero in the terminal period In the graph this means we start at the point 27 0 and work our way backward in time to find out what the solution is for the current period The graph suggests we will converge to the point where 27 045 Provided that the terminal period when you die is far enough in the future this saving rate will be solution to the optimal saving problem for the current period 23 Solving recursively We can perform this recursive operation explicitly Start at the boundary point 27 0 Now solve for saving in the previous period zrlHusing the first order condition above zrr 0 1 045 i 065 Z721 so 2 065 71 1 045 Now plug this back into the first order condition for the previous period 045 04 H7 1 7 z 1 1 045 Jr 063 2772 which produces 2 M 72 10451045 If we keep moving backwards ma a3 quot 4 a 3 117045 lim 2 lim 045 r OO r OO Since this is the solution for the current period for when the terminal period is infinitely far away it would be a natural conjecture that this is also the solution to an infinite horizon problem in which there is no terminal period We will investigate this conjecture below 3 A Deterministic In nite Horizon Problem 31 Recursive Formulation Lets consider again the infinite horizon problem 00 039 EU 0 problem l 525 c curl f k In general we can t just find the solution to the infinite horizon problem by taking the limit of the finitehorizon solution as T a 00 This is because we cannot assume we can interchange the limit and the max operators 7 7 max U 0 1120 max U 0 So we will take a di erent approach taking advantage of the problem s recursive nature called dynamic programming Although we stated the problem as choosing an infinite se quences for consumption and saving the problem that faces the household in period t 0 can be Viewed simply as a matter of choosing today s consumption and tomorrows beginning of period capital The rest can wait until tomorrow Recall that the solution we found for the finitehorizon problem suggested that the desired level of saving is a function of the current capital stock That is the rule specifies the choice as a function of the current state of the economy km 9 W Our goal in general will be to solve for such a function 9 called a policy function Define a function Mk called the value function It is the maximized value of the objective function the discounted sum of all future utilities given an initial level of capital in period t 0 for k0 v k0 max 2 U 0 7I kra1l i u30 Then 12161 is the value of utility that can be obtained with a beginning level of capital in periodt 1 of k and 512061 would be this discounted back to period t 0 So rewrite problem 1 above as 7 max UCo Z Ulctl z a Ii 1 k M I Um mam problem 2 100 k1 f 0 If we knew what the true value function was we could plug it into problem 2 above and do the optimization over it and solve for the policy function for the original problem 1 But we do not know the true value function For convenience rewrite with constraint substituted into objective function M160 max lUCf k0 k151k1 This is called Bellman s equation We can regard this as an equation where the argument is the function 1 a functional equation It involves two types of variables First state variables are a complete description of the current position of the system In this case the capital stock going into the current period kt is the state variable Second control variables are the variables that must be chosen in the current period here this is the amount of saving k If consumption 00 had not been substituted out in the equation above it too would be a control variable The first order condition for the equation above is U if 7 kll 5 Mil This equates the marginal utility of consuming current output to the marginal utility of allocat ing it to capital and enjoying augmented consumption next period 32 Envelope theorem We would like to get rid of the term 1 in the necessary condition Assume a solution for the problem exists and it is just a function of the state variable Ci 9 0 So M160 mafoCf k0 k1 5 1 becomes where we can drop the max operator U 0 Uf k0 9 7 5 9 7 Now totally differentiate where everything is a function of kn 1 0 U 1 9 60 fl 0 i U 1 9 Wongk0 51 9 160 9 k0 1 0 U f 0 9 7 fl 0 i U f 0 i 9 7 i 51 9 kog M The FOC says the second term equals zero so 1 0 U 1 k1 fko Update one period 7061 U fk1k2f k1 Or in a more compact form easier to remember the envelope condition here is 1 Url1fl k We can use this to get rid of term in first order condition The F OC then becomes U ifk0 k1 5Ufk1 k2 fl W This tells us that the marginal value of current capital in terms of total discounted utility is 10 given by the marginal utility of using the capital in current production and allocating its return to current consumption 33 Apply to our special case Lets use again f 16 kj and u 0 1n 0 This time let s solve by a Lagrangian instead of substituting the constraint into the objective function The problem is stated 1 16 niax In 0 51 k1 7 1 l I l 515 c 16 k Rewrite this in Bellman form U kt fax 1n 0 5 kt1 A k 0t km 7 r 1 Differentiate to derive the first order conditions or combining them 1 I 1 C Q 5 lt a Let s derive the envelope condition in this case In general you can use a short cut but let s do it the longer way here Write the solution to all variables as functions of the state variable 0 c 16 16 k 16 Then we may write the Bellman equation with all arguments as a function of 16 71 kt 1n 0 762 571 16062 M10 k i 0 kt k 1 Now differentiate with respect to 16 c 1 kt F 5 kt1kii A W C kt kml A 04 1 Cl kiwi I Eliminate the term that equals zero because of the constraint and regroup 1 ail U kt 0 g i A km 5 kt1 A AraC I The first two terms here are zero also because of the first order conditions Substitute for A and update one period 1 ail U kl 0 kr1 01 This is the envelope condition here A simpler way to get the envelope condition would be just to take the derivative of the original problem U kt 110 5Ukr1 f f kt 0r 70H with respect to 16 This gives you the same result since all the other terms drop out in the end Shifted up one period this also would give us via At1flkt1 And when combined with the FOC A U 7 we get the same envelope condition 14 Utl1fl k Now substitute the envelope condition into FOC 1 1 065 16 0 CHM So the two necessary conditions for this problem are the equation above and the budget con straint 0 10 k 34 Solution by iterative substitution in our special case In this special case we can solve explicitly for a solution Rewrite the FOC and budget con straint k k 1 2 1 a k0 Ct1 0 1 7 1 C 0 Substitute the FOC into the constraint to get a consolidated condition k k 1 065i 0 Ct1 Update one period the consolidated condition above and substitute it into itself kf kf J lo 1 045 7 Ct2 Do this recursively Note that it is a geometric progression 1 065 oz3 oz3V L c 1 7 045 So the policy function is 0 1 i 065 k Note that this is the answer we guessed earlier based on the finite horizon problem 35 Other solution methods Solution by conjecture In general the functional forms will not allow us to get an analytical solution this way There are several other standard solution methods 1 solution by conjecture 2 solution by iteration 3 others we will talk about later Suppose we suspect because of the form of the utility function that the amount that the household saves should be a constant fraction of their income but we don t know what this fraction 6 is 16 6k or equivalently c 1 7 6 k Divide the two equations above 01 9 c 1 7 6 and substitute this into the FOC 01 kl 0 a Ct1 065 ktl 6 CHI 1 Substitute the consumption function for our kt 1 6 17 H 045 6 We again reach the same solution as before 0 1 i 065 k 36 Solution by value function iteration Another solution method is based on iteration of the value function The value function actu ally will be diiTerent in each period just as we earlier found the function g was diiTerent depending on how close we were to the terminal period But it can be shown but we do not show this here that as we iterate through time the value function converges just as g con verged in our earlier example as we iterated back further away from the terminal period This suggests that if we iterate on an initial guess for the value function even a guess we know is incorrect the iterations eventually will converge to the true function Suppose we start with a guess for some period T 1 U0 0141 0 This is similar to assuming a terminal period in which I die Our guess for the value function implies that the discounted value of all future utility is zero which implies that I consume all wealth and save nothing in this period or kf and lo 1 0 So in previous period 121 In 10 We put a subscript on the value function because this is changing over time as it converges to some function In the period before that the problem is 122167 4 maXlncrr1 5121 15 07 67 16 or 122 krlH maxln cpl 3 1n Off 01 kgil Do the optimization L lncrr1 ln 10 A 16171 7 0771 7 k7 FOCs are A 1 C I il and 1 A 045 CT 1 045 0121 701 Substitute into budget constraint and get 065 k I 1045 I k7l and 1 0 kf I l 1 065 71 Now plug these solutions into the value function 122 1 I 42 1401 In map 5m Gd 55143171 1 04 045 In 045 7 1 045 In 1 045 1 045 Inkyl Then write for previous period U3 0122 max In 0722 5 k39IHH Orig k39r1 and so on It can be shown that this sequence of functions converges to vk maxlnc5lt1i5il ln170451n045 lnku Once we know the true value function we can solve for the policy function The FOC still holds that says U C 5100 Now we can say that 04 1 k U H1 1 i 065 91 So 1 045 1 C 1 i 045 kt1 or c 1 7 045 01 065 Combine with the resource constraint 0 k 7 1 01 01 to get 1 7 043 k i 065 10 and so 91 065k which is same solution as before Although this solution method is very cumbersome and computationally demanding be cause it relies on explicit iteration of the value function it has the advantage that it will always work It is common to set up a computer routine to perform the iteration for you 4 A stochastic Problem 41 Introducing uncertainty One benefit of analyzing problems using dynamic programming is that the method extends very easily to a stochastic setting in which there is uncertainty We will introduce uncertainty as affecting the production technology only by including a random term in the production function y zf 16 where z is iid Here 2 is a sequence of independently and identically distributed random variables This technology shock varies over time but it s deviations in di erent periods are uncorrelated These shocks may be interpreted as technological innovations crop failures etc We assume that households maximize the expected utility over time Assume utility takes the same additively separable form as in the deterministic case but now future consumption is uncertain The social planner problem now is 00 Clifllfl m E1 5 U 0 St 0 10 th k Where the expectations operator EU indicates the expected value with respect to the probability distribution of the random variables 0 CH 2 over all 25 based on information available in period t 0 Assume the timing of information and action is as follows at the beginning of the period t the current value of z the exogenous shock is realized So the value of technology and hence total output is known when consumption takes place and when the endofperiod capital 01 is accumulated The state variables are now k and z The control variables are c and kw As before we can think about a social planner in the initial period choosing a whole se 18 quence of consumption saving pairs 07 16 But now in the stochastic case this is not a sequence of numbers but rather a sequence of contingency plans one for each period The choice of consumption and saving in each period is contingent on the realization of the tech nology and capital stock in that period This information is available during each period when consumption and saving are executed for that period but it is not available in the initial period when the initial decision is being made for all future periods A solution is a rule that shows how the control variables are determined as a function of these state variables both the capital stock and the exogenous technology shock 0 C kt 7 Z 01 k kn Z Note that all the variables involved here are random variables now because they are functions of the exogenous shock which is a random variable The setup here is very similar to the deterministic case except that the variables involved are random and the decision involves expectations over these We again can write the problem recursively in Bellman form 1 16 2 niax U 0 Ew k1z1 4 r H 525 c 16 zf 16 Note that the expectations operator may be thought of as integrating over the probability dis tribution of the shock 0073 max 1105Ukt17221hzt1dzt1 m k n where h 2r 1 is the distribution of the shock Write this as a Lagrangian U 0721 mkax U C 5E1 kfl7zfl A 21f kt C 91 7 l H Take first order conditions 0 U c 7A 0 01 1 53 kl7zfl t 0 U C EWig H kt17 Zt1 Envelope condition Pic 7973 NZth kt U 0 zf 16 So the necessary condition becomes U 0 5E U CHM Zt1fl kl This has the same interpretation as always you equate the marginal bene t of extra consump tion today to the marginal cost in terms of lost production and hence consumption tomorrow 42 Our special case again Again we can demonstrate a solution analytically for the special case of log utility and Cobb Douglas technology f kt Ztk and u 0 Inc The necessary condition then becomes 1 1 E z 046071 0 5 2L 1 1 20 In the deterministic version we found a solution was a constant saving rate of 045 7 so that 01 065k and 0 1 7 045 k Let s conjecture a similar solution for the stochastic problem of some constant saving rate 6 where we take into consideration that income varies with the exogenous shock 2 91 6216 and 0 1 i 6 ak Plug these into the necessary condition above to see if it is satisfied 1 1 E 071 remc 0 5 1 9Zt1k 121 1 a 5 176 km 045 1 1 7 6 6216 Which holds if we choose the constant saving rate to be 6 045 Note the expectations operator was dropped because the variable k1is known in period t 43 Finding distributions Because these solutions are for random variables we would like to be able to characterize their distributions To do this we need to assume a convenient distribution for the underlying stochastic shock We assume that it follows a log normal distribution with mean 5 and variance 0 lnzN102 Because 2 is distributed log normal and the saving level cur is a function of it saving must 21 also be distributed log normal We can characterize the distribution in terms of its mean and variance which will be functions of u and a First let s find the mean of the saving variable Take logs of the solution found above for saving 01 065215 Ink lnoa lnzoalnk Now exploit the recursive nature of the problem lnk lnoa lnz1 oalnk1 lnoa lnz1 04 lnoa lnz2 aln 192 1 a 042 04 lnoa lnz1 oalnz2 0711an 06 1n k0 As we move in time the initial value of saving disappears from the expression and the other expressions become a pair of geometric series Take the limit as t a 0 then take the expected value to find the mean of its distribution 1 1 limo E In 16 lnoa u This is the mean of distribution for saving It is a function of the mean of the underlying shock Now let s find the variance of the distribution of saving var In 16 E ln k i E In 22 E 2 ll lO bll lZtil oalnkl 7 lna lt1ia 2 lnoa lnz1oalnoa lnz2oalnk2 7 lnoa u E 1704 lt lt 0 M 71 4 2 E 0717i In 045 1n 2 7 Z 0717i In 045 M 71 2 E 0717i 1n z 7 M 2 21 multiplying this out and taking limit lim E m k i E In 192 71 lim E 042l 71 ilE lnzi 7 mg E 204l 71 i04 1 3E 1n z 7 u lnzj 7 u 2 20 4 71 20714 A Ilia 04 var 2 mt U2 1 7 042 The second set of terms in the first line above is zero since the shocks were assumed to have a zero covariance across periods Again the variance of the saving variable is a function of the variance of the underlying technology shock The steps in the analysis above will be applied throughout the course to other situations Dynamic programming will be used to nd the equilibrium policy functions giving the control variables as functions of current state variables Then by assuming a convenient distribution for the exogenous state variables we can use the policy function to describe the distribution of the control variables 23 Interpreting the Eigenvalues in a Symmetric Stochastic Matrix Kevin D Salyer April 11 2003 Consider the following nestate Markov process for the random variable 1 act 1 The oneeperiod transition probability matrix with the entry in the ith row and jth column denoting the conditional probability of going from state i to state j is 177T 17w 17w 1 n7 n71 5171 i i n71 7T n71 H 177r 177r 2 T71 I ilil J J 3171 7quot n71 77r n71 77 There will be 71 eigenvalues associated with the stochastic matrix but since the columns and rows sum to one we know that one of the eigenvalues will be equal to l The proof is easy H 1 1 The unconditional probabilities pl 171pr pn are given by the solution to HTp p Since a matrix and its transpose have the same eigenvalues we see that the eigenvector associated with the eigenvalue of unity is the vector of unconditional probabilities he remaining 7171 eigenvalues are not distinct Let A denote this eigenvalue and since the sum of the eigenvalues equal the trace of a matrix we have n71m39ril Or 1 717139 7 3 n 7 1 Let the vector of realizations for act be symmetric around zero so that p x 0 That is the unconditional mean of acEac 0 1 next show that the eigenvalue A is the rsteorder autocorrelation of am That is A Corr 1 n1 Without proof7 I rst state that the vector x is an eigenvector associated with A It can be shown that x can be expressed as a linear combination of the n 7 1 eigenvectors associated with the nonedistinct eigenvalue A Hence we have H x Ax The leftehand side is simply the vector of conditional expectations of n1 Write this as E1 MM 361 E2 t1 2 4 En M44 951 t De ne the diagonal matrix D as 1 0 0 0 352 0 D 0 0 0 xn Then multiplying both sides of eq 4 by D produces E11tt1 90 E2 2tt1 x2 A 2quot 5 En Magma 90 Mulitplying both sides by p and using the fact that E 35 07 we have C01 1 n1 AVar 35 Which establishes that A Corr 1 n1 Professor Salyer Economics 200E A brief introduction to discrete state Markov processes Suppose the random variable x can take on n possible values denoted x1 139 123 n A Markov process for x is de ned by the property PrxM x x xwxH xkxH xh PrxM lext x1 l That is the current realization provides all the information needed for making forecasts For expositional purposes we will use primarily a twostate Markov process Possible realizations for x are x x where x1 ltx2 The transition probabilities are denoted lj Prxt 1 x l x2 x1 and are given in matrix notation 1 n 11 712 721 722 Bold print denotes either a vector or matrix Note that each row of the transition probability matrix is a conditional probability distribution hence the elements of the rows must sum to one The conditional probability of being in state j in period tk denoted k 71 can be constructed from the onestep transition probability matrix For instance for k 2 we have 2711 711711 i27t21 7K 7 713 713 713 2 12 11 12 12 22 2 2 nnn 2721 721711 227 Y21 2722 722722 2i7t12 That is the k step transition probability matrix is calculated by raising the onestep transition probability matrix to the power of k 3 k u k 12jl Ik k 21 k 22 Professor Salyer Economics 200E The quotquot 39 l 39 39 quotquot39 ie the quotquot 39 l 39 39 quotquot ofbeing in state i denoted pl can be calculated the taking the following limit 4 13 liml39Ik p2 k w In the limit the columns of the 2x2 matrix Hk become identical implying that the initial state is irrelevant This limiting distribution is the unconditional or ergodic distribution for x2 Intuitively one can interpret the limiting probabilities as the fraction of realizations of xi observed over an infmite horizon An alternative characterization of the limiting distribution is in terms of eigenvalues and eigenvectors First note that the quotquot 39 and quotquot 39 l 39 39 quotquot39 must satisfy the following equations p1 pi u p27t21 p2 pJ JZ p27t22 5 Or expressing this in matrix form 6 p1 BTW 3p HTP p2 p2 where T denotes the transpose Rearranging the terms yields 7 HTilp0 1 denotes the identity matrix To interpret this expression recall the general definitions of the eigenvalues and eigenvectors of a matrix A In general if A is an n x n ie square matrix then 8 q2gtIA7211 is an nth order polynomial in it denotes the determinant The n roots of qt are called the eigenvalues ofA For each 139 1n IA 7 2 III 0 so A 7 2 II is singular Hence there exists a nonzero vector 11 satisfying 9 Au 2 In Any vector satisfying the above equation is called an eigenvector of A for the eigenvalue 2 I Professor Salyer Economics 200E With these de nitions it is clear that eq 6 implies that p is the eigenvector of H T for the eigenvalue of one Hence one of the eigenvalues of the transition probability matrix is one in fact this must be the case since the columns of H T sum to one As an example consider the following case in which the transition probability is symmetric 10 n 7 177 177 7 The limiting probabilities must satisfy eq 7 7t 7 1 1 7 7t 1 7 7t 7t 7 1 p 2 Multiplying the first row and using the fact that the unconditional probabilities must sum to one yields the following equation 1 12 7K1p117717p10 3 101 This implies that for the symmetric 2 x 2 transition probability matrix the limiting probabilities are the same This result generalizes to the nstate case ie pl 1 n the ergodic distribution is uniform for a symmetric transition probability matrix Since the matrix H T is a 2 x 2 matrix we know that there is another eigenvalue One way to find its value is to solve the quadratic equation implied by eq 8 A much simpler way is to use the fact that the trace of a matrix the sum of the diagonal elements is equal to the sum of the eigenvalues Since we have shown that one of the eigenvalues is l the other eigenvalue must be 13 Itu7t2212t23227tu7t2271 This eigenvalue is important in that it determines the serial correlation properties of the random variable x2 What is the eigenvector associated with this eigenvalue As an illustration consider the following twostate process x1 7x 7 1 7 7 x2 H x2 x 1 7 7t 7 Since the transition probability matrix is symmetric we know that p71 p2 12 This information can be used to calculate the following unconditional moments Ext Varxt Covxt x2 1 The first two are straightforward Professor Salyer Economics 200E 1 1 14 Ex 7 p1x1 m2 7 37x 3x 0 Since the mean is zero the variance is 2 2 2 1 2 1 2 2 15 Varx2 Ext pjxj p2x2 37x 3x x The covariance term is not as straightforward its value can be found by using the relationship that the unconditional mean of a random variable is equal to the unconditional mean of the conditional means Speci cally 16 COVx2xz1 Ex2x21 p1E1x1x21 p2E2x2x21 1 1 37tx12 17 7tx1x2 37tx22 1 7 7tx2x1 IKXZ 7177tx2 7tx2 7177tx2 47tx2 72x2 2 2 71 So as stated above the other eigenvalue determines in fact is the autocorrelation of xi This condition is intuitive if 7 gt 12 then the probability of staying in either state is greater than going to the other states like states follow like states For a general 2state model see eq 13 the Corrx2xmgt 0 as It gt 1 7 722 or If gt 721 the expression can also be expressed in terms of state 2 Hence positive serial correlation is implied if the probability of the previous state being the same as the current state is greater than the probability of the previous state being the other state A useful nstate Markov process A symmetric transition probability matrix is convenient in that all the elements of the matrix are determined by the value for 7 in addition this parameter is pinned down by the autocorrelation properties of the series being studied eg money growth However the drawback of this characterization is that the ergodic distribution is uniform The Markov process described below has a transition probability matrix whose elements are also determined by a single parameter 7 which in turn is characterized by the serial correlation properties of xi as in the symmetric transition probability matrix case Corrxt x2 1 27 7 1 The advantage of this process is that as the number of states increases the ergodic distribution approaches a normal distribution Professor Salyer Economics 200E Let the possible values for x be evenly distributed between x and x n denotes the number of states and H be the onestep transition probability matrix For n 2 it is assumed that the transition probability matrix is symmetric 17 n2 7 17 17 7 Then H can be constructed recursively as follows First compute the matrix M n 2 3 de ned as Uni 0 0 Hm OT 0 0 UT US Mn 7 jaml Tj ruizl j 0 0 0 0 DH 0 0 HM Where 0 is an n 1x1 vector of zeros This matrix can not be a transition probability matrix since all rows except the top and bottom do not sum to one This is due to the rotating of the original transition probability matrix the interior elements are overrepresented note that the weights on the matrices sum to 2 Therefore to construct H it is necessary to divide all rows except the top and bottom by 2 An numerical example is presented below Suppose 7 075 then M 3 is constructed as 19 075 025 0 0 075 025 0 0 0 0 0 0 M3 075 025 075 0 025 0 025 075 025 075 025 0 075 0 075 025 0 0 0 0 0 0 025 075 0 0 025 075 This results in 05625 0375 00625 20 M3 0375 125 0375 00625 0375 05625 Dividing the middle row by two gives the 3state transition probability matrix 05 625 03 75 00625 21 13913 01875 0625 01875 00625 03 75 05625 To obtain the ergodic distribution exponentiate the above matrix to a high power I used 50 This resulted in Professor Salyer Economics 200E 025 050 025 22 P3 025 050 025 025 050 025 Iterating once more to obtain a 4state transition probability matrix we first compute M 4 05 625 03 75 00625 0 0 05 625 03 75 00625 01875 0625 01875 0 0 01875 0625 01875 M 4 075 025 00625 0375 05625 0 0 00625 0375 05625 0 0 0 0 0 0 0 0 23 0 0 0 0 0 0 0 0 05625 03 75 00625 0 0 05625 03 75 00625 025 075 01875 0625 01875 0 0 01875 0625 01875 00625 03 75 05 625 0 0 00625 03 75 05 625 This yields 0421875 0421875 0140625 0015625 0281 25 1031 25 0593 75 0093 75 0093 75 0593 75 1031 25 0281 25 0015625 0140625 0421875 0421875 24 M4 Dividing the middle rows by two produces H 4 0421875 0421875 0140625 0015625 0140625 0515625 0296875 0046875 0046875 0296875 0515625 0140625 0015625 0140625 0421875 0421875 25 n4 Again exponentiating this matrix to the power of 50 produces the ergodic distribution Professor Salyer Economics 200E 0125 0125 0125 0125 25 P4 0375 0375 0375 0375 03 75 03 75 03 75 03 75 0125 0125 0125 0125 Without presenting the details one more iteration produced the following 5state transition probability matrix 031 6 01 05 26 H5 0035 001 2 0004 The implied ergodic distribution is 27 0422 0422 0234 0110 0047 0211 0351 0461 0351 0211 00625 025 03 75 025 00625 004 7 01 1 0 023 4 0422 0422 0004 0012 0035 01 05 031 6


Buy Material

Are you sure you want to buy this material for

25 Karma

Buy Material

BOOM! Enjoy Your Free Notes!

We've added these Notes to your profile, click here to view them now.


You're already Subscribed!

Looks like you've already subscribed to StudySoup, you won't need to purchase another subscription to get this material. To access this material simply click 'View Full Document'

Why people love StudySoup

Bentley McCaw University of Florida

"I was shooting for a perfect 4.0 GPA this semester. Having StudySoup as a study aid was critical to helping me achieve my goal...and I nailed it!"

Amaris Trozzo George Washington University

"I made $350 in just two days after posting my first study guide."

Jim McGreen Ohio University

"Knowing I can count on the Elite Notetaker in my class allows me to focus on what the professor is saying instead of just scribbling notes the whole time and falling behind."

Parker Thompson 500 Startups

"It's a great way for students to improve their educational experience and it seemed like a product that everybody wants, so all the people participating are winning."

Become an Elite Notetaker and start selling your notes online!

Refund Policy


All subscriptions to StudySoup are paid in full at the time of subscribing. To change your credit card information or to cancel your subscription, go to "Edit Settings". All credit card information will be available there. If you should decide to cancel your subscription, it will continue to be valid until the next payment period, as all payments for the current period were made in advance. For special circumstances, please email


StudySoup has more than 1 million course-specific study resources to help students study smarter. If you’re having trouble finding what you’re looking for, our customer support team can help you find what you need! Feel free to contact them here:

Recurring Subscriptions: If you have canceled your recurring subscription on the day of renewal and have not downloaded any documents, you may request a refund by submitting an email to

Satisfaction Guarantee: If you’re not satisfied with your subscription, you can contact us for further help. Contact must be made within 3 business days of your subscription purchase and your refund request will be subject for review.

Please Note: Refunds can never be provided more than 30 days after the initial purchase date regardless of your activity on the site.