# Lecture Notes on Financial Markets

M. Li Calzi, *Lecture Notes on Financial Markets*. Copyright © 2002 by Marco Li Calzi.

## Introduction

In 2000, when Bocconi University launched its Master in Quantitative Finance, I was asked to develop a course touching upon the many aspects of decision theory, financial economics, and microstructure that could not otherwise fit in the tight schedule of the program. Reflecting this heterogeneity, the course was dubbed "Topics in Economics" and I was given a fair amount of leeway in its development. My only constraint was that I had to choose what I thought best and then compress it into 15 lectures.

These notes detail my choices after two years of teaching "Topics in Economics" at the Master in Quantitative Finance of Bocconi University and a similar class more aptly named "Microeconomics of financial markets" at the Master of Economics and Finance of the Venice International University.

The material is arranged into 15 units, upon whose contents I make no claim of originality. Each unit corresponds to a 90-minute session. Some units (most notably, Unit 5) contain too much stuff, reflecting either the accretion of different choices or my desire to offer a more complete view. Unit 7 requires less time than the standard session: I usually take advantage of the time left to begin exploring Unit 8. I have constantly kept revising my choices (and my notes), and I will most likely do so in the future as well, posting updates on my website at http://helios.unive.it/˜licalzi.

## Contents

1. Expected utility and stochastic dominance: 1.1 Introduction; 1.2 Decisions under risk; 1.3 Decisions under uncertainty
2. Irreversible investments and flexibility: 2.1 Introduction; 2.2 Price uncertainty; 2.3 Real options; 2.4 Assessing your real option
3. Optimal growth and repeated investments: 3.1 Introduction; 3.2 An example; 3.3 The log-optimal growth strategy; 3.4 Applications; 3.5 Excursions
4. Risk aversion and mean-variance preferences: 4.1 Risk attitude; 4.2 Risk attitude and expected utility; 4.3 Mean-variance preferences; 4.4 Risk attitude and wealth; 4.5 Risk bearing over contingent outcomes
5. Information structures and no-trade theorems: 5.1 Introduction; 5.2 The Dutch book; 5.3 The red hats puzzle; 5.4 Different degrees of knowledge; 5.5 Can we agree to disagree?; 5.6 No trade under heterogeneous priors
6. Herding and informational cascades: 6.1 Introduction; 6.2 Some terminology; 6.3 Public offerings and informational cascades; 6.4 Excursions
7. Normal-CARA markets: 7.1 Introduction; 7.2 Updating normal beliefs; 7.3 CARA preferences in a normal world; 7.4 Demand for a risky asset
8. Transmission of information and rational expectations: 8.1 Introduction; 8.2 An example; 8.3 Computing a rational expectations equilibrium; 8.4 An assessment of the rational expectations model
9. Market microstructure: Kyle's model: 9.1 Introduction; 9.2 The model; 9.3 Lessons learned; 9.4 Excursions
10. Market microstructure: Glosten and Milgrom's model: 10.1 Introduction; 10.2 The model; 10.3 An example; 10.4 Comments on the model; 10.5 Lessons learned
11. Market microstructure: market viability: 11.1 Introduction; 11.2 An example; 11.3 Competitive versus monopolistic market making; 11.4 The basic steps in the model; 11.5 Lessons learned
12. Noise trading: limits to arbitrage: 12.1 Introduction; 12.2 A model with noise trading; 12.3 Relative returns; 12.4 An appraisal; 12.5 Excursions
13. Noise trading: simulations: 13.1 Introduction; 13.2 A simple dynamic model; 13.3 An artificial stock market
14. Behavioral finance: evidence from psychology: 14.1 Introduction; 14.2 Judgement biases; 14.3 Distortions in deriving preferences; 14.4 Framing effects
15. Behavioral finance: asset pricing: 15.1 Introduction; 15.2 Myopic loss aversion; 15.3 A partial equilibrium model; 15.4 An equilibrium pricing model

## 1. Expected utility and stochastic dominance

### 1.1 Introduction

Most decisions in finance are taken under a cloud of uncertainty.
When you plan to invest your money in a long-term portfolio, you do not know how much its price will be at the time of disinvesting it. Therefore, you face a problem in choosing the "right" portfolio mix. Decision theory is the branch of economic theory that works on models to help you sort out this kind of decision. There are two basic sorts of models: the first class is concerned with what is known as decisions under risk, and the second class with decisions under uncertainty.

### 1.2 Decisions under risk

Here is a typical decision under risk. Your investment horizon is one year. There is a family of investment funds. You must invest all of your wealth in a single fund. The return on each fund is not known with certainty, but you know its distribution of past returns. For lack of better information, you have decided to use this distribution as a proxy for the probability distribution of future returns.¹

Let us model this situation. There is a set C of consequences, typified by the one-year returns you will be able to attain. There is a set A of alternatives (i.e., the funds) out of which you must choose one. Each alternative in A is associated with a probability distribution over the consequences. For instance, assuming there are only three funds, your choice problem may be summarized by the following table.

| Fund α return | prob. | Fund β return | prob. | Fund γ return | prob. |
|---|---|---|---|---|---|
| −1% | 20% | −3% | 55% | 2.5% | 100% |
| +2% | 40% | +10% | 45% | | |
| +5% | 40% | | | | |

Having described the problem, the next step is to develop a systematic way to make a choice.

Def. 1.1 [Expected utility under risk] Define a real-valued utility function u over consequences. Compute the expected value of utility for each alternative. Choose an alternative which maximizes the expected utility.

¹ The law requires an investment fund to warn you that past returns are not guaranteed. Trusting the distribution of past returns is a choice you make at your own peril.

How would this work in practice?
Suppose that your utility function over a return of r% in the previous example is u(r) = r. The expected utility of Fund α is U(α) = −1 · 0.2 + 2 · 0.4 + 5 · 0.4 = 2.6. Similarly, the expected utilities of Funds β and γ are respectively U(β) = 2.85 and U(γ) = 2.5. According to the expected utility criterion, you should go for Fund β and rank α and γ respectively second and third.

If you had a different utility function, the ranking and your final choice might change. For instance, if u(r) = √(r + 3), we find U(α) ≈ 2.31, U(β) ≈ 1.62 and U(γ) ≈ 2.35. The best choice is now γ, which however was third under the previous utility function.

All of this sounds fine in class, but let us look a bit more into it. Before you can get her to use this, there are a few questions that your CEO would certainly like you to answer.

**Is expected utility the "right" way to decide?** Thank God (or free will), nobody can pretend to answer this. Each one of us is free to develop his own way to reach a decision. However, if you want to consider what expected utility has in it, mathematicians have developed a partial answer. Using expected utility is equivalent to taking decisions that satisfy three criteria: 1) consistency; 2) continuity; 3) independence.

Consistency means that your choices do not contradict each other. If you pick α over β and β over γ, then you will pick α over γ as well. If you pick α over β, you do not pick β over α.

Continuity means that your preferences do not change abruptly if you slightly change the probabilities affecting your decision. If you pick α over β, it must be possible to generate a third alternative α′ by perturbing slightly the probabilities of α and still like α′ better than β.

Independence is the most demanding criterion. Let α and β be two alternatives. Choose a third alternative γ. Consider two lotteries: α′ gets you α or γ with equal probability, while β′ gets you β or γ with equal probability. If you'd pick α over β, then you should also pick α′ over β′.
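The two rankings computed above are easy to reproduce mechanically. A minimal sketch (names and code structure are mine, not from the notes; the fund data is the table in Section 1.2):

```python
import math

# Funds from the table in Section 1.2: lists of (return in %, probability).
funds = {
    "alpha": [(-1.0, 0.20), (2.0, 0.40), (5.0, 0.40)],
    "beta":  [(-3.0, 0.55), (10.0, 0.45)],
    "gamma": [(2.5, 1.00)],
}

def expected_utility(lottery, u):
    """Expected value of the utility u over a discrete lottery."""
    return sum(p * u(r) for r, p in lottery)

# Linear utility u(r) = r ranks beta first ...
linear = {f: expected_utility(lot, lambda r: r) for f, lot in funds.items()}
# ... while the concave utility u(r) = sqrt(r + 3) ranks gamma first.
concave = {f: expected_utility(lot, lambda r: math.sqrt(r + 3)) for f, lot in funds.items()}
```

Swapping the utility function flips the optimal fund, exactly as in the text.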
If you are willing to subscribe to these three criteria simultaneously, using expected utility guarantees that you will fulfill them. On the other hand, if you adopt expected utility as your decision-making tool, you will be (knowingly or not) obeying these criteria. The answer I'd offer to your CEO is: "if you wish consistency, continuity and independence, expected utility is right".

Caveat emptor! There are plenty of examples where very reasonable people do not want to fulfill one of the three criteria above. The most famous example originated with Allais who, among other things, got the Nobel prize in Economics in 1988. Suppose the consequences are given as payoffs in millions of Euro. Between the two alternatives

| α payoff | prob. | β payoff | prob. |
|---|---|---|---|
| 0 | 1% | 1 | 100% |
| 1 | 89% | | |
| 5 | 10% | | |

Allais would have picked β. Between the two alternatives

| γ payoff | prob. | δ payoff | prob. |
|---|---|---|---|
| 0 | 90% | 0 | 89% |
| 5 | 10% | 1 | 11% |

he would have picked γ. You can easily check (yes, do it!) that these two choices cannot simultaneously be made by someone who is willing to use the expected utility criterion.

Economists and financial economists, untroubled by this, assume that all agents abide by expected utility. This is partly for the theoretical reasons sketched above, but mostly for convenience. To describe the choices of an expected utility maximizer, an economist needs only to know the consequences, the probability distribution over consequences for each alternative, how to compute the expected value, and the utility function over the consequences. When theorizing, we'll do as economists do: we assume knowledge of consequences, alternatives and utility functions and we compute the expected utility maximizing choice. For the moment, however, let us go back to your CEO waiting for your hard-earned wisdom to enlighten her.

**What is the "right" utility function?** The utility function embeds the agent's preferences under risk.
In the example above, when the utility function was u(r) = r, the optimal choice was Fund β, which looks a lot like a risky stock fund. When the utility function was u(r) = √(r + 3), the optimal choice was Fund γ, not much different from a standard 12-month Treasury bill. It is the utility function which makes you prefer one over another. Picking the right utility function is a matter of describing how comfortable we feel about taking (or leaving) risks. This is a tricky issue, but I'll say more about it in Lecture 4.

Sometimes, we are lucky enough that we can make our choice without even knowing exactly what our utility function is. Suppose that consequences are monetary payoffs and assume (as is reasonable) that the utility function is increasing. Are there pairs of alternatives α and β such that α is (at least, weakly) preferred by all sorts of expected utility maximizers? In mathematical terms, let F and G be the cumulative probability distributions respectively for α and β. What is a sufficient condition such that

∫ u(x) dF(x) ≥ ∫ u(x) dG(x)

for all increasing utility functions u?

Def. 1.2 [Stochastic dominance] Given two random variables α and β with respective cumulative probability distributions F and G, we say that α stochastically dominates β if F(x) ≤ G(x) for all x.

Stochastic dominance of α over β means that F(x) = P(α ≤ x) ≤ P(β ≤ x) = G(x) for all x. That is, α is less likely than β to be smaller than x. In this sense, α is less likely to be small. If you happen to compare alternatives such that one stochastically dominates the other ones and you believe in expected utility, you can safely pick the dominating one without even worrying to find out what your "right" utility function should be. This may not happen often, but let us try not to overlook checking for this clearcut comparison.

**Isn't this "expected utility business" too artificial?** Well, it might be. But we are not asking you to use expected utility to take your decisions.
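For discrete lotteries, the condition in Def. 1.2 above can be tested mechanically. A small sketch (the helper names are mine); it also confirms that neither Fund α nor Fund β from Section 1.2 dominates the other, which is exactly why the choice between them genuinely depends on the utility function:

```python
def cdf(lottery, x):
    """F(x) = P(outcome <= x) for a discrete lottery of (value, prob) pairs."""
    return sum(p for v, p in lottery if v <= x)

def dominates(a, b):
    """First-order stochastic dominance: F_a(x) <= F_b(x) at every jump point."""
    points = sorted({v for v, _ in a} | {v for v, _ in b})
    return all(cdf(a, x) <= cdf(b, x) for x in points)

alpha = [(-1.0, 0.20), (2.0, 0.40), (5.0, 0.40)]
beta = [(-3.0, 0.55), (10.0, 0.45)]
# Neither alpha nor beta dominates the other: their CDFs cross.
```

It suffices to check the inequality at the outcomes of either lottery, since both CDFs are constant in between.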
Expected utility is what economists use to model your behavior under risk. If you happen to use a different route which fulfills the three criteria of consistency, continuity and independence, an economist will be able to fit your past choices to an expected utility model and guess your future choices perfectly.

We can put it down to a matter of decision procedures. Expected utility is one: it is simple to apply, but it requires you to swallow the idea of a utility function. There are other procedures which lead you to choices that are compatible with expected utility maximization in a possibly more natural way.²

Here is an example of an alternative procedure. Suppose that you are a fund manager and that your compensation depends on a benchmark. Your alternatives are the different investing strategies you may follow. Each strategy will lead to a payoff at the end of the year which is to be compared against the benchmark. If you beat the benchmark, you'll get a fixed bonus; otherwise, you will not. The performance of the benchmark is a random variable B. Using past returns, you estimate its cumulative probability distribution H. Moreover, since you are only one of many fund managers, you assume that the performance of the benchmark is independent of which investing strategy you follow.

Your best bet is to maximize the probability of getting the bonus. If your investing strategy leads to a random performance α with c.d.f. F, the probability of getting the bonus is simply

P(α ≥ B) = ∫ P(B ≤ x) dF(x) = ∫ H(x) dF(x).

While (naturally) trying to maximize your chances of getting your bonus, you will be behaving as if (artificially) trying to maximize a utility function u(x) = H(x).

**What is the "right" probability distribution for an alternative?** Ah, that's a good question. You might have read it already but, thank God (or free will), nobody can answer this. Each one of us is free to develop his own way to assess the probabilities.
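The identity P(α ≥ B) = ∫ H(x) dF(x) used in the fund-manager example above can be verified directly in the discrete case. A sketch with made-up numbers (benchmark and strategy are illustrative, not from the notes), assuming α and B independent:

```python
# Benchmark B and strategy alpha as discrete (value, prob) lotteries; made-up data.
bench = [(0.0, 0.25), (2.0, 0.50), (4.0, 0.25)]
strat = [(1.0, 0.30), (3.0, 0.40), (5.0, 0.30)]

def H(x):
    """c.d.f. of the benchmark."""
    return sum(p for v, p in bench if v <= x)

# Direct computation of P(alpha >= B), using independence.
direct = sum(pa * pb for a, pa in strat for b, pb in bench if a >= b)

# Target-based form: the expected "utility" E[H(alpha)] = sum_x f(x) H(x).
target = sum(pa * H(a) for a, pa in strat)

# The two numbers coincide: beating a benchmark is expected utility
# maximization with u = H.
```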
In the example above, I mischievously assumed that you were willing to use the distribution of past returns, but this may sometimes be plainly wrong. Economists have long recognized the importance of this question. Their first-cut answer is to isolate the problem by assuming that the probabilities have already been estimated by someone else and are known to the agent. Whenever this assumption holds, we speak of decision making under risk. Whenever we do not assume that probabilities are already known, we enter the realm of decisions under uncertainty.

² This is drawn from research developed at Bocconi and elsewhere. See Bordley and Li Calzi (2000) for the most recent summary.

### 1.3 Decisions under uncertainty

Here is a typical decision under uncertainty. Your investment horizon is one year. There is a family of investment funds. You must invest all of your wealth in a single fund. The return on each fund is not known with certainty and you do not think that the distribution of past returns is a good proxy. However, there is a list of scenarios upon which the return on your investment depends.

Let us model this situation. There is a set C of consequences, typified by the one-year returns you will be able to attain. There is a list S of possible future scenarios. There is a set A of alternatives (i.e., the funds) out of which you must choose one. Each alternative α in A is a function which tells you which consequence c you will be able to attain under scenario s: that is, α(s) = c. Assuming there are only three funds, your choice problem may be summarized by the following table.

| scenario | Fund α return | Fund β return | Fund γ return |
|---|---|---|---|
| s1 | −1% | −3% | 2.5% |
| s2 | +2% | −3% | 2.5% |
| s3 | +2% | −3% | 2.5% |
| s4 | +2% | +10% | 2.5% |
| s5 | +5% | +10% | 2.5% |

Def. 1.3 [Expected utility under uncertainty] Assess probabilities for each scenario. Define a utility function over consequences. Compute the expected value of utility for each alternative over the scenarios.
Choose an alternative which maximizes the expected utility.

After you assess probabilities for each scenario, you fall back to the case of decision under risk. For instance, assessing P(s1) = 20%, P(s2) = 30%, P(s3) = P(s4) = 5%, and P(s5) = 40% gets you back to the case studied above. If your utility function were u(r) = r, the optimal choice would again be β. However, staying with the same utility function, if you happened to assess P(s1) = P(s2) = P(s4) = P(s5) = 5% and P(s3) = 80%, the optimal choice would be γ.

Under uncertainty, the analysis is more refined. What matters is not only your attitude to risk (as embedded in your choice of u), but your beliefs as well (as embedded in your probability assessment).

Making the scenarios explicit may matter in a surprising way, as was noted in Castagnoli (1984). Suppose the consequences are given as payoffs in millions of Euro. Consider the following decision problem under uncertainty.

| scenario | Fund α payoff | Fund β payoff |
|---|---|---|
| s1 | 0 | 4 |
| s2 | 1 | 0 |
| s3 | 2 | 1 |
| s4 | 3 | 2 |
| s5 | 4 | 3 |

Suppose that you assess probabilities P(s1) = 1/3 and P(s2) = P(s3) = P(s4) = P(s5) = 1/6. Then β would stochastically dominate α even though the probability that α beats β is P(α ≥ β) = 2/3. Any expected utility maximizer (if using an increasing utility function) would pick β over α. However, if you are interested only in choosing whichever alternative pays more between the two, you should go for α.

References

[1] R. Bordley and M. Li Calzi (2000), "Decision analysis using targets instead of utility functions", *Decisions in Economics and Finance* 23, 53–74.
[2] E. Castagnoli (1984), "Some remarks on stochastic dominance", *Rivista di matematica per le Scienze Economiche e Sociali* 7, 15–28.

## 2. Irreversible investments and flexibility

### 2.1 Introduction

Under no uncertainty, NPV is the common way to assess an investment. (In spite of contrary advice from most academics, consultants and hence practitioners use the payback time and the IRR as well.)
If you have to decide whether to undertake an investment, do so only if its NPV is positive. If you have to pick one among many possible investments, pick the one with the greatest NPV. When uncertainty enters the picture, the easy way out is to keep doing NPV calculations using expected payoffs instead of the actual payoffs, which are not known for sure. This might work as a first rough cut, but it could easily lead you astray. The aim of this lecture is to alert you about what you could be missing. Most of the material is drawn from Chapter 2 in Dixit and Pindyck (1994).

### 2.2 Price uncertainty

Consider a firm that must decide whether to invest in a widget factory. The investment is irreversible: the factory can only be used to produce widgets and, if the market for widgets should close down, the factory could not be scrapped and sold to someone else. The factory can be built at a cost of c = 1600 and will produce one widget per year forever, with zero operating cost. The current price of a widget is P0 = 200, but next year this will rise to P1 = 300 with probability q = 1/2 or drop to P1 = 100 with probability 1 − q = 1/2. After this, the price will not change anymore. The risk over the future price of widgets is fully diversifiable and therefore we use the risk-free rate of interest r = 10%.

Presented with this problem, a naive CFO would compute the expected price P = 200 from next year on. Using the expected price, the NPV of the project is

NPV = −1600 + Σ_{t=0}^{∞} 200/(1.1)^t = 600.

Since the NPV is positive, the project gets the green light and the firm invests right away.

A clever CFO would also consider the possibility of waiting one year. At the cost of giving up a profit of 200 in year t = 0, one gains the option to invest if the price rises and not to invest otherwise. The NPV for this investment policy is

NPV = (1/2) [ −1600/1.1 + Σ_{t=1}^{∞} 300/(1.1)^t ] ≈ 773.

This is higher than 600, and therefore it is better to wait than to invest right away.
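The two NPV figures above are easy to reproduce numerically. A sketch (the perpetuities are truncated at a horizon long enough for the shown accuracy):

```python
R = 0.10  # risk-free rate

def pv(cashflow, t):
    """Present value of a cash flow received at time t."""
    return cashflow / (1 + R) ** t

# Invest today: pay 1600 at t = 0, earn the expected price 200 forever from t = 0.
npv_now = -1600 + sum(pv(200, t) for t in range(500))

# Wait one year: invest only if the price rises to 300 (probability 1/2),
# paying the cost 1600 at t = 1 in that case.
npv_wait = 0.5 * (pv(-1600, 1) + sum(pv(300, t) for t in range(1, 500)))

value_of_waiting = npv_wait - npv_now  # the value of flexibility, about 173
```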
The value of the flexibility to postpone the investment is 773 − 600 = 173.

Ex. 2.1 For a different way to assess the value of flexibility, check that the opportunity to build a widget factory now and only now at a cost of c = 1600 yields the same NPV as the opportunity of building a widget factory now or next year at a cost of (about) 1980.

Ex. 2.2 Suppose that there exists a futures market for widgets, with the futures price for delivery one year from now equal to the expected future spot price of 200. Would this make us anticipate the investment decision? The answer is no. To see why, check that you could hedge away price risk by selling short futures for 11 widgets, ending up with a sure NPV of 2200. Subtract a cost of 1600 for building the factory, and you are left exactly with an NPV of 600 as before. The futures market allows the firm to get rid of the risk but does not improve the NPV of investing now.

### 2.3 Real options

We can view the decision to invest now or next year as the analog of an American option. An American option gives the right to buy a security any time before expiration and receive a random payoff. Here, we have the right to make an investment expenditure now or next year and receive a random NPV. Our investment option begins "in the money" (if it were exercised today, it would yield a positive NPV), but waiting is better than exercising now. This sort of situation, where the underlying security is a real investment, is known as a real option. The use of real options is getting increasingly popular in the assessment of projects under uncertainty.

Let us compute the value of our investment opportunity using the real options approach. Denote by F0 the value of the option today, and by F1 its value next year. Then F1 is a random variable, which can take value

Σ_{t=0}^{∞} 300/(1.1)^t − 1600 = 1700

with probability 1/2 (if the widget price goes up to 300) and value 0 with probability 1/2 (if it goes down). We want to find out what F0 is.
Using a standard trick in arbitrage theory, consider a portfolio in which one holds the investment opportunity and sells short n widgets at a price of P0. The value of this portfolio today is Π0 = F0 − nP0 = F0 − 200n. The value of the portfolio next year is Π1 = F1 − nP1. Since P1 = 300 or 100, the possible values of Π1 are 1700 − 300n or −100n. We can choose n and make the portfolio risk-free by solving 1700 − 300n = −100n, which gives n = 8.5. This number of widgets gives a sure value Π1 = −850 for the portfolio.

The return from holding this portfolio is the capital gain Π1 − Π0 minus the cost of shorting the widgets; that is, Π1 − Π0 − 170 = −850 − (F0 − 1700) − 170 = 680 − F0. Since this portfolio is risk-free, it must earn the risk-free rate of r = 10%; that is, 680 − F0 = (0.1)Π0 = 0.1(F0 − 1700), which gives F0 = 773. This is of course the same value we have already found.

### 2.4 Assessing your real option

Once we view an investment opportunity as a real option, we can compute its dependence on various parameters and get a better understanding. In particular, let us determine how the value of the option — and the decision to invest — depend on the cost c of the investment, on the initial price P0 of the widgets, on the magnitudes of the up and down movements in price next period, and on the probability q that the price will rise next period.

a) Cost of the investment. Using the arbitrage argument, we find (please, do it) that the short position on widgets needed to obtain a risk-free portfolio is n = 16.5 − 0.005c. Hence, Π1 = −100n = 0.5c − 1650 and Π0 = F0 − 3300 + c. Imposing a risk-free rate of r = 10% yields

F0 = 1500 − 0.455c,   (1)

which gives the value of the investment opportunity as a function of the cost c of the investment. We can use this relationship to find out for what values of c investing today is better than investing next year.
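The replication argument in a) can be scripted with the cost c as a free parameter. A sketch following the hedging steps above (the function name is mine):

```python
R = 0.10   # risk-free rate
P0 = 200   # current widget price; next year it moves to 300 or 100

def option_value(c):
    """Value today of the option to invest at cost c, by risk-free replication."""
    f1_up = 300 * (1 + R) / R - c       # NPV of investing after the price rises (= 3300 - c)
    # Choose the short position n so both states pay the same:
    # f1_up - 300n = 0 - 100n  =>  n = f1_up / 200
    n = f1_up / 200
    pi1 = -100 * n                       # sure portfolio value next year
    # Riskless-return condition: pi1 - pi0 - R*P0*n = R*pi0, with pi0 = f0 - P0*n.
    # Solving for f0 gives f0 = (pi1 + P0*n) / (1 + R).
    return (pi1 + P0 * n) / (1 + R)
```

With c = 1600 this returns about 773, matching Section 2.3, and the linear dependence on c reproduces equation (1).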
Investing today is better as long as the value V0 from investing is greater than the direct cost c plus the opportunity cost F0. Since the NPV of the payoffs from investing today is 2200, we should invest today if 2200 > c + F0. Substituting from (1), we should invest as long as c < 1284. In the terminology of financial options, for low values of c the option is "deep in the money" and immediate exercise is preferable, because the cost of waiting (the sacrifice of the immediate profit) outweighs the benefit of waiting (the ability to decide optimally after observing whether the price has gone up or down).

b) Initial price. Fix again c = 1600 and let us now vary P0. Assume that with equal probability the price will rise or fall by 50% next year (that is, P1 = 1.5P0 or P1 = 0.5P0) and remain at that level ever after. Suppose that we want to invest when the price goes up and we do not want to invest if it goes down (we will consider other options momentarily). Set up the usual portfolio and check (yes, do it) that its value is Π1 = 16.5P0 − 1600 − 1.5nP0 if the price goes up and Π1 = −0.5nP0 if the price goes down. Equating these two values, we find that n = 16.5 − (1600/P0) is the number of widgets that we need to short to make the portfolio risk-free, in which case Π1 = 800 − 8.25P0 whether the price goes up or down.

Recall that the short position requires a payment of 0.1nP0 = 1.65P0 − 160 and compute the return on this portfolio. Imposing a risk-free rate of r = 10%, we have Π1 − Π0 − [1.65P0 − 160] = 0.1Π0, which yields

F0 = 7.5P0 − 727.   (2)

This value of the option to invest has been calculated assuming that we would only want to invest if the price goes up next year. However, if P0 is low enough we might never want to invest, and if P0 is high enough it might be better to invest now rather than wait.

Let us find for which price we would never invest. From (2), we see that F0 = 0 when P0 ≈ 97. Below this level, there is no way to recoup the cost of the investment even if the price rises by 50% next year.
Analogously, let us compute for which price we would always invest today. We should invest now if the NPV of current payoffs (i.e., 11P0) exceeds the total cost 1600 + F0 of investing now. The critical price P0 satisfies 11P0 − 1600 = F0 which, after substituting from (2), gives P0 ≈ 249. Summarizing, the investment rule is:

| Price region | Option value | Investment rule |
|---|---|---|
| P0 ≤ 97 | F0 = 0 | you never invest |
| 97 < P0 ≤ 249 | F0 = 7.5P0 − 727 | you invest next year if the price goes up |
| P0 > 249 | F0 = 11P0 − 1600 | you invest today |

c) Probabilities. Fix an arbitrary P0 and let us vary q. In our standard portfolio, the number of widgets needed to construct a risk-free position is the same n = 16.5 − (1600/P0) found above, and it is independent of q (yes, check it). The expected price of widgets next year is E(P1) = q(1.5P0) + (1 − q)(0.5P0) = (q + 0.5)P0; therefore the expected capital gain on widgets is [E(P1) − P0]/P0 = q − 0.5. Since the long owner of a widget demands a riskless return of r = 10% but already gets a capital gain of q − 0.5, she will ask a payment of [0.1 − (q − 0.5)]P0 = (0.6 − q)P0 per widget. Setting Π1 − Π0 − (0.6 − q)nP0 = 0.1Π0, we find (for P0 > 97) that the value of the option is

F0 = (15P0 − 1455)q.   (3)

For P0 ≤ 97 we would never invest and F0 = 0.

What about the decision to invest? It is better to wait than to invest today as long as F0 > V0 − 1600. Since V0 = P0 + Σ_{t=1}^{∞} (q + 0.5)P0/(1.1)^t = (6 + 10q)P0, it is better to wait as long as P0 < P̂0 = (1600 − 1455q)/(6 − 5q) — yes, check this. Note that P̂0 decreases as q increases: a higher probability of a price increase makes the firm more willing to invest today. Why?

d) Magnitudes. Fix q = 0.5 and let us change the magnitude of the variation in price from 50% to 75%. This leaves E(P1) = P0 but increases the variance of P1. As usual, we construct a risk-free portfolio by shorting n widgets.
The two possible values for Π1 are 19.25 P0 − 1600 − 1.75 nP0 if the price goes up and −0.25 nP0 if the price goes down — yes, check this. Equating these two values and solving for n gives n = 12.83 − (1067/P0), which makes Π1 = 267 − 3.21 P0 irrespective of P1. Imposing a risk-free rate of r = 10% (please fill in the missing steps) yields

F0 = 8.75 P0 − 727.   (4)

At a price P0 = 200, this gives a value F0 = 1023 for the option to invest, significantly higher than the 773 we found earlier. Why does an increase in uncertainty increase the value of this option?

Ex. 2.3 Show that the critical initial price sufficient to warrant investing now rather than waiting is P̂0 ≈ 388, much larger than the 249 found before. Can you explain why?

References

[1] A.K. Dixit and R.S. Pindyck (1994), Investment under Uncertainty, Princeton (NJ): Princeton University Press.

3. Optimal growth and repeated investments

3.1 Introduction

Standard portfolio theory treats investments as a single-period problem. You choose your investment horizon, evaluate consequences and probabilities for each investment opportunity, and pick a portfolio which, for a given level of risk, maximizes the expected return over the investment horizon. The basic lesson from this static approach is that volatility is "bad" and diversification is "good". The implicit assumptions are that you know your investment horizon and that you plan to make your investment choice once and for all.

However, when we begin working over multiperiod investment problems, some of the lessons of the static approach take a whole new flavour. The aim of this lecture is to alert you to some of the subtleties involved. Most of the material is drawn from Chapter 15 in Luenberger (1998).

3.2 An example

At each period, you are offered three investment opportunities. The following table reports their payoffs to an investment of Euro 100.
An identical but independently distributed selection is offered each period, so that the payoffs to each investment are correlated within each period, but not across time.

             α         β         γ
scenario     payoff    payoff    payoff    prob.ty
s1           300       0         0         1/2
s2           0         200       0         1/3
s3           0         0         600       1/6

You start with Euro 100 and can invest part or all of your money repeatedly, reinvesting your winnings in later periods. You are not allowed to go short, but you can apportion your investment over different opportunities. What should you do?

Consider the static choice over a single period. There is an obvious trade-off between pursuing the growth of the capital and avoiding the risk of losing it all. More precisely, for an investment of 100, the first lottery has an expected value of 150 and a 50% probability of losing the capital. The second lottery has an expected value of (about) 67 and a 66.6% probability of losing the capital. The third lottery has an expected value of 100 and an 83.3% probability of losing the capital. Comparing β against γ, lottery β minimizes the risk of being ruined while γ offers the higher expected return. However, note that α dominates β and γ in both respects. If you want to maximize your expected gain, investing 100 in α is the best choice.

This intuition does not carry over to the case of a multiperiod investment. If you always invest all the current capital in α, sooner or later this investment will yield 0 and therefore you are guaranteed to lose all of your money. Instead of maximizing your return, repeatedly betting the whole capital on α guarantees your ruin.

Let us consider instead the policy of reinvesting your capital each period in a fixed-proportion portfolio (α1, α2, α3), with αi ≥ 0 for i = 1, 2, 3 and α1 + α2 + α3 ≤ 1. Each of these portfolios leads to a series of (random) multiplicative factors that govern the growth of capital. For instance, suppose that you invest Euro 100 using the (1/2, 0, 0) portfolio.
With probability 50%, you obtain a favorable outcome and double your capital; with probability 50%, you obtain an unfavorable outcome and your capital is halved. Therefore, the multiplicative factors for one period are 2 and 1/2, each with probability 50%. Over a long series of investments following this strategy, the initial capital will be multiplied by a product of the form

(2)(1/2)(1/2)(2)(2)(1/2) ... (2)(1/2)

with about an equal number of 2's and (1/2)'s. The overall factor is likely to be about 1. This means that over time the capital will tend to fluctuate up and down, but is unlikely to grow appreciably.

Suppose now that you invest using the (1/4, 0, 0) portfolio. In the case of a favorable outcome, the capital grows by a multiplicative factor 3/2; in the case of an unfavorable outcome, the multiplicative factor is 3/4. Since the two outcomes are equally likely, the average multiplicative factor over two periods is (3/2)(3/4) = 9/8. Therefore, the average multiplicative factor over one period is √(9/8) ≈ 1.06066. With this strategy, your money will grow, on average, by over 6% per period.

Ex. 3.4 Prove that this is the highest rate of growth that you can attain using a (k, 0, 0) portfolio with k in [0,1].

Ex. 3.5 Prove that a fixed-proportions strategy investing in a portfolio (α1, α2, α3) with min_i αi > 0 and max_i αi < 1 guarantees that ruin cannot occur in finite time.

3.3 The log-optimal growth strategy

The example is representative of a large class of investment situations where a given strategy leads to a random growth process. For each period t = 1, 2, ..., let Xt denote the capital at period t. The capital evolves according to the equation

Xt = Rt Xt−1,   (5)

where Rt is the random return on the capital. We assume that the random returns Rt are independent and identically distributed. In the general capital growth process, the capital at the end of n trials is Xn = (Rn Rn−1 ... R2 R1) X0. After a bit of manipulation, this gives

log (Xn/X0)^(1/n) = (1/n) Σ_{t=1}^n log Rt.

Let m = E(log R1). Since all Rt's are independent and identically distributed, the law of large numbers states that the right-hand side of this expression converges to m as n → +∞ and therefore

log (Xn/X0)^(1/n) → m

as well. That is, for large values of n, Xn is asymptotic to X0 e^(mn). Roughly speaking, the capital tends to grow exponentially at rate m.

It is easy to check (please, do it) that m + log X0 = E(log X1). Thus, if we choose the utility function U(x) = log x, the problem of maximizing the growth rate m is equivalent to finding the strategy that maximizes the expected value E U(X1) and applying this same strategy in every trial. Using the logarithm as a utility function, we can treat the problem as if it were a single-period problem, and this single-step view guarantees the maximum growth rate in the long run.

3.4 Applications

a) The Kelly criterion. Suppose that you have the opportunity to invest in a prospect that will either double your investment or return nothing. The probability of the favorable outcome is p > 1/2. Suppose that you have an initial capital of X0 and that you can repeat this investment many times. How much should you invest each time to maximize the rate of growth of the capital?

Let α be the proportion of capital invested in each period. If the outcome is favorable, the capital grows by a factor 1 + α; if it is unfavorable, the factor is 1 − α. In order to maximize the growth rate of your capital, you just need to maximize m = p log(1 + α) + (1 − p) log(1 − α) to find the log-optimal value α* = 2p − 1.

This situation resembles the game of blackjack, where a player who mentally keeps track of the cards played can adjust his strategy to ensure (on average) a 50.75% chance of winning a hand. With p = .5075, α* = 1.5% and thus e^m ≈ 1.0001125, which gives an expected gain of about 0.011% each round.

b) Volatility pumping. Suppose that there are only two assets available for investment.
One is a stock that in each period doubles or halves your capital with equal probability. The other is a risk-free bond that just retains its value — like putting money under the mattress. Neither of these investments is very exciting. An investment left in the stock will have a value that fluctuates a lot but has no overall growth rate. The bond clearly has no growth rate. Nevertheless, by using these two investments in combination, growth can be achieved!

Suppose that we invest α of our capital in the stock and (1 − α) in the bond, with α in [0,1]. The expected growth rate of this strategy is

m = (1/2) log(1 + α) + (1/2) log(1 − α/2),

which is maximized at α* = 1/2. For this choice of α, e^m ≈ 1.0607 and the growth rate of the portfolio is about 6% per period, which significantly outperforms the paltry 0% average growth of each of the two assets.

The gain is achieved by using the volatility of the stock in a pumping action. Remember that the strategy says that no more (and no less) than 50% of the capital should go in the stock in each period. When the stock goes up in a certain period, some of its capital gains are reinvested in the bond; when it goes down, additional capital is shifted from the bond to the stock. Capital is pumped back and forth between the two assets in order to achieve a growth greater than either could provide alone. Note that this strategy automatically follows the dictum "buy low and sell high" through the process of rebalancing the investment in each period.

Ex. 3.6 Suppose that the two assets are stocks that in each period double or halve the capital with equal probability. Assume that each asset moves independently of the other. Prove that the optimal portfolio has α = 50% and that the growth rate per period is about 11.8%.

c) Optimal growth portfolio. Let us go back to the example of Section 3.2. Consider all portfolio strategies (α1, α2, α3), with αi ≥ 0 for i = 1, 2, 3 and α1 + α2 + α3 ≤ 1. What is the portfolio which achieves the maximum growth?
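Before deriving the answer, it can be located numerically. The sketch below (a brute-force grid search of my own, not a method used in the text) maximizes the expected log-growth over fixed-proportion portfolios, using only the payoff table of Section 3.2:

```python
from math import exp, log
from itertools import product

# Probabilities and gross payoffs per euro bet, from the table in Section 3.2.
probs = (1/2, 1/3, 1/6)
payoffs = (3.0, 2.0, 6.0)  # alpha, beta, gamma

def growth_rate(a):
    """Expected log-growth m of the fixed-proportion portfolio a = (a1, a2, a3)."""
    m = 0.0
    for s, p in enumerate(probs):
        r = 1.0 - sum(a) + payoffs[s] * a[s]  # gross return in scenario s
        if r <= 0.0:
            return float("-inf")  # ruin is possible: infinitely bad
        m += p * log(r)
    return m

step = 1 / 36
grid = [i * step for i in range(37)]
best = max((a for a in product(grid, repeat=3) if sum(a) <= 1 + 1e-9),
           key=growth_rate)
print(best, round(exp(growth_rate(best)), 4))  # growth factor per period ≈ 1.0699
```

The search returns a growth factor of about 1.0699 per period; as Ex. 3.7 below shows, the maximizing portfolio is not unique, so which maximizer the search reports depends on the grid.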
Under scenario s1, the return is R = 1 + 2α1 − α2 − α3. Under scenario s2, the return is R = 1 − α1 + α2 − α3. Under scenario s3, the return is R = 1 − α1 − α2 + 5α3. To find an optimal portfolio, it suffices to maximize

m = (1/2) log(1 + 2α1 − α2 − α3) + (1/3) log(1 − α1 + α2 − α3) + (1/6) log(1 − α1 − α2 + 5α3).

Ex. 3.7 Show that (1/2, 1/3, 1/6) is an optimal portfolio. Prove that it is not unique, by checking that (5/18, 0, 1/18) is also optimal. Any optimal portfolio has a growth rate of about 6.99%, higher than what was found above.

d) Continuous lotteries. There is one stock that can be purchased at a price of Euro 100 per share. Its anticipated price in one year is uniformly distributed over the interval [30, 200]. What is the log-optimal strategy over this period? The log-optimal strategy is found by maximizing

m = (10/17) ∫_{0.3}^{2} log(1 − α + αr) dr.

We find that the log-optimal investment in the stock is α* ≈ .63, which gives a growth rate of about 4.82% per period.

3.5 Excursions

An extensive list of several properties of the log-optimal strategy is given in MacLean et alii (1992). This paper and Li (1993) discuss the trade-offs between growth and security in multiperiod investment analysis.

References

[1] Y. Li (1993), "Growth-security investment strategy for long and short runs", Management Science 39, 915–924.

[2] D.G. Luenberger (1998), Investment Science, New York: Oxford University Press.

[3] L.C. MacLean, W.T. Ziemba and G. Blazenko (1992), "Growth versus security in dynamic investment analysis", Management Science 38, 1562–1585.

[4] L.M. Rotando and E.O. Thorp (1992), "The Kelly criterion and the stock market", American Mathematical Monthly 99, 922–931.

4. Risk aversion and mean-variance preferences

4.1 Risk attitude

Consider a choice problem among the following three lotteries, whose expected values are written by their names.
     α (480)             β (525)             γ (500)
payoff   prob.ty    payoff   prob.ty    payoff   prob.ty
480      100%       850      50%        1000     50%
                    200      50%        0        50%

If we were to base our choices on the expected value, β would be our preferred choice. However, there are people who would rather pick α (which on average pays less) on the ground that it is less "risky", or maybe γ on the ground that it is an even riskier choice. What can we say about the elusive notion of "risk"?

While it is hard to define what exactly "risk" means, certainly a sure outcome like α should be deemed riskless. (And avoiding risk justifies its choice in the example above.) By contrast, let us call risky any lottery which does not yield a sure outcome. Although this is a crude distinction, it recognizes that risk lies in the unpredictability of the resulting outcomes.

However, avoiding risk cannot be the only reason driving a choice. If we use the expected value of a lottery as a benchmark to measure its "return", we should argue that β is a more "profitable" choice (on average). An agent who chooses α is avoiding risk at the cost of accepting a lower "return". On the contrary, an agent who always picks the lottery with the highest expected value is not affected by risk considerations.

Def. 4.4 An agent is risk neutral if he evaluates lotteries by their expected value.

If an agent is not risk neutral, he may be repelled by or attracted to risk: think of people buying insurance or, respectively, playing lotto. How do we tell which case is which? There is a simple question we may ask the agent: suppose you possess lottery β; how much sure money would you ask for selling it? This "price" c(β) named by the agent is called the certainty equivalent of the lottery. If the agent does not care about risk, he should be willing to exchange β for a sum equal to its expected value. That is, we should have c(β) = E(β). We can deviate from equality in two directions.
If c(β) < E(β), the agent values β less than its expected value: the risk in β reduces the value to him of holding it — we say that he is risk averse. Vice versa, if c(β) > E(β), he is risk seeking.

For a different way to say exactly the same thing, define the risk premium r(β) of a lottery β as the difference between its expected value and its certainty equivalent. A risk averse agent is willing to forfeit the risk premium r(β) in order to replace the "risky" lottery β with the sure outcome c(β) < E(β). That is, he is willing to accept for sure a payment which is less than the average payoff of β in order to get rid of the risk associated with β.

4.2 Risk attitude and expected utility

All of this holds in general, even if the agent is not an expected utility maximizer. However, in the special case of expected utility maximizers, there exists a simple criterion to recognize whether an agent is risk averse, neutral or seeking.

Thm. 4.5 An expected utility maximizer is risk neutral (resp., averse or seeking) if his utility function is linear (resp., concave or convex).

Thus, while the increasing monotonicity of the utility function speaks about the greediness of the agent, its curvature tells us something about his attitude to risk.

Ex. 4.8 Check that expected utility can rationalize any of the three choices in the example above using different utility functions. If an expected utility maximizer has a utility function u1(x) = x he prefers β; if it is u2(x) = √x he prefers α; and if it is u3(x) = x^2 he prefers γ. This is evidence of the flexibility of the expected utility model.

Here is a simple application. There are two assets. One is a riskless bond that just retains its value and pays 1 per euro invested. The other is a risky stock that has a random return of R per euro invested; we assume that E(R) > 1 so that on average the stock is more profitable than the riskless bond.
Suppose that an agent is risk averse and maximizes the expected value of a (concave and strictly increasing) utility function u over returns. The agent must select a portfolio and invest a fraction α of his wealth in the risky asset and a fraction 1 − α in the riskless bond. Short-selling is not allowed and thus α is in [0,1]. The maximization problem is

max_{α in [0,1]} E u(αR + 1 − α).

Risk aversion implies that the objective function is concave in α (can you prove it?). Therefore, the optimal portfolio satisfies the first-order Kuhn-Tucker condition:

E [(R − 1) u′(αR + 1 − α)]   = 0  if 0 < α < 1,
                             ≤ 0  if α = 0,
                             ≥ 0  if α = 1.

Since E(R) > 1, the first-order condition is never satisfied at α = 0. Therefore, we conclude that the optimal portfolio has α > 0. That is, if a risk is actuarially favorable, then a risk averter will always accept at least a small amount of it.

4.3 Mean-variance preferences

There exist alternative approaches to the formalization of risk. One that is very common relies on the use of indices of location and dispersion, like the mean and the standard deviation. The expected value is taken as a measure of the (average) payoff of a lottery. Risk, instead, is present if the standard deviation (or some other measure of dispersion) is positive. The preferences of the agent are represented by a functional V(µ, σ), where µ and σ are respectively the expected value and the standard deviation of the lottery.

If offered several lotteries with the same standard deviation, a (greedy) agent prefers the one with the highest expected value. If offered several lotteries with the same expected value, a risk averse agent prefers the one with the lowest variance. Thus, a greedy and risk averse agent has preferences represented by a functional V(µ, σ) which is increasing in µ and decreasing in σ. While intuitively appealing, this approach postulates that the agent dislikes any kind of positive standard deviation.
It turns out that this is not consistent with the definition of risk aversion given above. Therefore, the so-called "mean-variance" preferences are in general incompatible both with the standard definition of risk aversion and, in particular, with the expected utility model.

Ex. 4.9 Suppose that a risk averse agent is an expected utility maximizer with utility function u(x) = x for x ≤ 1 and u(x) = 1 + 0.1(x − 1) for x > 1. Compare a lottery α offering a payoff of 1 with probability 1 versus another lottery β offering a payoff of 1.1 with probability 10/11 and 0 with probability 1/11. While µ(α) = µ(β) and σ(α) = 0 < σ(β), the agent strictly prefers β to α.

If one wishes to relate this approach to the expected utility model, the best she can do is to view it as a crude approximation. Given an arbitrary value x*, consider the Taylor expansion of u around x*:

u(x) = u(x*) + [(x − x*)/1!] u′(x*) + [(x − x*)^2/2!] u″(x*) + [(x − x*)^3/3!] u‴(x*) + ...   (6)

The "mean-variance" approach ignores the third and all successive terms in the Taylor expansion of the utility function. Or, in more statistical terms, it looks only at the first two moments of the probability distribution of the lotteries. Consider the two lotteries

     α                   β
payoff   prob.ty    payoff   prob.ty
−1       0.999      1        0.999
999      0.001      −999     0.001

Since they have the same mean and the same standard deviation, the second-order approximation cannot tell them apart. However, most people are not indifferent between the two. If we want to distinguish α from β, we need to reach the third term in (6), which represents skewness. Skewness is zero for a symmetric distribution; it is positive if there is a hump on the left and a long thin tail on the right; and negative in the opposite case. So α is positively skewed and β is negatively skewed. Most commercial lotteries and games of chance are positively skewed: if people like them because of this, the second-order approximation cannot capture their preferences.
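A direct computation of the first three moments makes the point concrete (a short check of my own, not in the original notes; `moments` is a hypothetical helper using standard moment formulas):

```python
# First three moments of a discrete lottery given as (payoff, probability) pairs.
def moments(outcomes):
    mean = sum(x * p for x, p in outcomes)
    var = sum((x - mean) ** 2 * p for x, p in outcomes)
    skew = sum((x - mean) ** 3 * p for x, p in outcomes) / var ** 1.5
    return mean, var, skew

alpha = [(-1, 0.999), (999, 0.001)]
beta = [(1, 0.999), (-999, 0.001)]
print(moments(alpha))  # mean 0, variance 999, positive skewness
print(moments(beta))   # same mean and variance, negative skewness
```

Any functional of µ and σ alone must rank these two lotteries as indifferent; only the third moment separates them.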
Even if crude, the second-order approximation may be justified by two different kinds of assumptions: either (i) the utility function has a special form (namely, it is quadratic, so that the approximation is exact); or (ii) the probability distributions belong to a particular family which is completely characterized by its first two moments (for instance, they are normal).

4.4 Risk attitude and wealth

The risk attitude is especially studied with reference to choices involving the wealth of an agent. To keep things simple, assume in the following that all risky decisions concern monetary payoffs and that agents maximize expected utility. The utility functions are defined over the positive reals and, whenever necessary, they are twice differentiable with a strictly positive first derivative.

We consider one way in which wealth affects the risk attitude. Suppose that an agent with current wealth w must choose between a risky lottery α and a sure outcome b. If he chooses α, his future wealth will be w + α; if he chooses b, w + b. Suppose that at the current level of wealth the agent prefers the risky α to the sure b. If he is an expected utility maximizer, this implies that Eu(w + α) > u(w + b).

Def. 4.6 An agent has decreasing risk aversion (with respect to wealth) whenever, for arbitrarily given α and b, Eu(w + α) > u(w + b) implies Eu(w′ + α) > u(w′ + b) if w′ > w. Similar definitions hold for constant and increasing risk aversion.

There exists a simple criterion to recognize whether an agent has decreasing, constant or increasing risk aversion.

Thm. 4.7 An agent has decreasing risk aversion (resp., constant or increasing) if and only if his coefficient of (absolute) risk aversion

λ(x) = − u″(x)/u′(x)

is decreasing (resp., constant or increasing) in x.

The standard assumption in economic theory is that agents are risk averse and have decreasing risk aversion.
However, in applications it is very common to postulate that they are risk neutral or that they have constant risk aversion, because this greatly simplifies the choice of their utility function.

Thm. 4.8 The only utility functions with constant absolute risk aversion are the linear utility function u(x) = x, which has λ(x) = 0, and the exponential utility function u(x) = −sgn(k) e^(−kx), which has λ(x) = k.

Ex. 4.10 Suppose that the agent is an expected utility maximizer with a constant coefficient of absolute risk aversion k > 0. The choice set contains only normally distributed lotteries. Given a lottery X ~ N(µ, σ), check that the preferences of the agent can be represented by the functional V(µ, σ) = µ − (1/2)kσ^2.

4.5 Risk bearing over contingent outcomes

Suppose that there is a finite number of states of the world (or scenarios). Each state si (i = 1, 2, ..., n) occurs with probability πi. There exists a single commodity, which has a price pi in state si. The agent is endowed with the same initial income y in each scenario and he derives a differentiable utility u(ci) from consuming a quantity ci of the commodity in the scenario si. When the agent is an expected utility maximizer, he chooses his state-contingent consumption by solving

max Σ_i πi u(ci)   s.t.   Σ_i pi ci ≤ y.

Assuming an interior solution, the first-order condition requires [πi u′(ci)]/pi to be constant across all i. This result states that the risk-bearing optimum has the same expected marginal utility per dollar of income in each and every state, and is known as the fundamental theorem of risk-bearing.

References

[1] J. Hirshleifer and J.G. Riley (1992), The Analytics of Uncertainty and Information, Cambridge University Press.

[2] D. Kreps (1988), Notes on the Theory of Choice, Westview Press.

[3] A. Mas-Colell, M.D. Whinston and J.R. Green (1995), Microeconomic Theory, Oxford University Press.

5. Information structures and no-trade theorems

5.1 Introduction

One traditional view about trading in financial markets is that it has two components: liquidity and speculation. Some people trade because they need the liquidity (or have other pressing demands from the real economy); others trade because they have asymmetric information and hope to profit from it. According to this view, high-volume trading should be explained mostly by differences in information among traders. See for instance Ross (1989): "It is difficult to imagine that the volume of trade in security markets has very m
