About this Document

Class Notes, 22 pages, uploaded by Weldon Rau I on Friday, September 18, 2015. The notes cover AEB 6533 (Static and Dynamic Optimization, Professor Charles B. Moss) in Agricultural Economics and Business at the University of Florida, Fall 2015.

Lecture XXV: Brownian Motion, Ito's Lemma, and Stochastic Optimal Control

I. Introduction to Stochastic Processes

A. Most of the material in this section originates from Dixit, Avinash. The Art of Smooth Pasting. Buffalo, NY: Gordon and Breach Publishing Co., 1993. The publisher can be reached on the network at http://www.gbhap.com.

B. Brownian Motion

1. Brownian motion refers to a continuous-time stochastic process in which x evolves over time according to the stochastic differential equation

    dx = \mu\,dt + \sigma\,dw

where dt represents an increment in time and dw denotes a random component. The expected value of x_t given an initial condition x_0 is x_0 + \mu t, with a variance of \sigma^2 t.

2. Random Walk Representation. To make the formulation more concrete, we begin by depicting \Delta x as a random walk with probability p that the value of x increases by \Delta h in any given time period and probability q = 1 - p that it decreases by \Delta h in the same period. Over four periods the possible values of x form the lattice x_0 + 4\Delta h, x_0 + 3\Delta h, ..., x_0 - 3\Delta h, x_0 - 4\Delta h.

a. The expected value of \Delta x is

    E[\Delta x] = p\,\Delta h - (1 - p)\,\Delta h = (p - q)\,\Delta h, \qquad q = 1 - p.

Similarly, the variance of \Delta x is

    V[\Delta x] = E[\Delta x^2] - (E[\Delta x])^2 = \Delta h^2 - (p - q)^2 \Delta h^2 = \left[ 1 - (p - q)^2 \right] \Delta h^2.

Expanding the bracketed term,

    1 - (p - q)^2 = (1 - p + q)(1 + p - q) = (2q)(2p) = 4pq,

so that

    V[\Delta x] = 4pq\,\Delta h^2.

b. Extending this result to a time period \Delta t made up of n independent increments yields a mean of n(p - q)\Delta h and a variance of 4npq\,\Delta h^2.

c. This distribution can be linked to the Binomial distribution. Specifically, if we take an up-move of \Delta h as a "success," the Binomial distribution for the probability of k successes in n trials is

    P(k) = \binom{n}{k} p^k q^{n-k}.

Taking the limit of the Binomial distribution yields a normal distribution with mean \mu t and variance \sigma^2 t. Thus the mean of x_t - x_0 is \mu t and its standard deviation is \sigma t^{1/2}.

3. Ito's Lemma. Suppose that x follows Brownian motion with parameters (\mu, \sigma). What is the distribution of the stochastic process y defined as y = f(x), given that f(x) is a nonrandom function? The expression for dy is computed by taking a Taylor series expansion of y:

    y_t = y_0 + f'(x_0)(x_t - x_0) + \tfrac{1}{2} f''(x_0)(x_t - x_0)^2 + \text{h.o.t.}

The expected value of the change, E[y_t - y_0], can then be computed as

    E[y_t - y_0] = f'(x_0)\,E[x_t - x_0] + \tfrac{1}{2} f''(x_0)\,E[(x_t - x_0)^2] + \text{h.o.t.}

    E[x_t - x_0] = \mu t, \qquad E[(x_t - x_0)^2] = V[x_t - x_0] + \mu^2 t^2 = \sigma^2 t + \mu^2 t^2,

so that

    E[y_t - y_0] = \left[ f'(x_0)\mu + \tfrac{1}{2}\sigma^2 f''(x_0) \right] t + \text{h.o.t.}

with the variance of the change in y becoming

    V[y_t - y_0] = f'(x_0)^2 \sigma^2 t.

This enables us to rewrite the change in y as

    dy = \left[ f'(x)\mu + \tfrac{1}{2}\sigma^2 f''(x) \right] dt + f'(x)\,\sigma\,dw.

In a slightly more general form, for y = f(x, t),

    dy = \left[ f_x(x,t)\mu + \tfrac{1}{2}\sigma^2 f_{xx}(x,t) + f_t(x,t) \right] dt + f_x(x,t)\,\sigma\,dw.

4. Geometric Brownian Motion. Applying Ito's Lemma, letting X = e^x and assuming that x follows a Brownian motion process, yields

    E[dX] = e^x \left( \mu + \tfrac{1}{2}\sigma^2 \right) dt = X \left( \mu + \tfrac{1}{2}\sigma^2 \right) dt

with a variance of

    V[dX] = e^{2x} \sigma^2\,dt = X^2 \sigma^2\,dt.

Therefore the stochastic process can be rewritten as

    \frac{dX}{X} = \left( \mu + \tfrac{1}{2}\sigma^2 \right) dt + \sigma\,dw.

This form of stochastic process is fairly useful because dX/X is a manifestation of the rate of return on an asset. Thus, if the rate of return on the asset follows a Brownian motion process

    \frac{dX}{X} = \nu\,dt + \sigma\,dw,

then the log of the asset value, x = \ln X, follows an absolute or ordinary Brownian motion process

    dx = \left( \nu - \tfrac{1}{2}\sigma^2 \right) dt + \sigma\,dw.
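As a quick numerical illustration of the geometric Brownian motion result, the Python sketch below (an addition, not part of the original notes) simulates the ordinary Brownian motion dx = \mu dt + \sigma dw on a fine grid and checks by Monte Carlo that X = e^x grows at the rate \mu + \tfrac{1}{2}\sigma^2 implied by Ito's Lemma. The parameter values are arbitrary choices for the demonstration.

```python
import numpy as np

# Monte Carlo check of Ito's Lemma for X = exp(x), where
# dx = mu dt + sigma dw (ordinary Brownian motion with drift).
# Ito's Lemma predicts E[dX/X] = (mu + 0.5*sigma**2) dt.

rng = np.random.default_rng(0)
mu, sigma = 0.05, 0.20                 # arbitrary drift and volatility
T, n_steps, n_paths = 1.0, 1_000, 50_000
dt = T / n_steps

# simulate x_t = x_0 + mu*t + sigma*W_t with x_0 = 0
dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
x = np.cumsum(mu * dt + sigma * dw, axis=1)
x_T = x[:, -1]

print("E[x_T]      sim %.4f  theory %.4f" % (x_T.mean(), mu * T))
print("V[x_T]      sim %.4f  theory %.4f" % (x_T.var(), sigma**2 * T))

X_T = np.exp(x_T)
growth_theory = np.exp((mu + 0.5 * sigma**2) * T)   # drift of X from Ito's Lemma
print("E[X_T]/X_0  sim %.4f  theory %.4f" % (X_T.mean(), growth_theory))
```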
II. Stochastic Optimal Control

A. The basic concepts behind stochastic optimal control are stochastic calculus (via Ito's Lemma) and dynamic programming.

B. Previously, our equation of motion for optimal control was written as

    \dot{x}(t) = g(t, x, u).

Now, following our discussion from Dixit's Art of Smooth Pasting, we shift to a stochastic differential equation

    dx = g(t, x, u)\,dt + \sigma(t, x, u)\,dz

where g(t,x,u) is the deterministic component of the differential equation and \sigma(t,x,u)\,dz is the stochastic portion, with \sigma(t,x,u) being some deterministic function and dz being a Brownian motion increment. Following our discussion above, for y = F(x,t) the stochastic differential equation can then be rewritten as

    dy = \left[ F_t + F_x\,g(t,x,u) + \tfrac{1}{2} F_{xx}\,\sigma^2(t,x,u) \right] dt + F_x\,\sigma(t,x,u)\,dz.

C. Solution.

    \max_u \; E \int_{t_0}^{T} f(t,x,u)\,dt + \Phi(x(T), T)
    \text{s.t. } dx = g(t,x,u)\,dt + \sigma(t,x,u)\,dz.

Rewriting in terms of the value function,

    J(t_0, x_0) = \max_u \; E \int_{t_0}^{T} f(t,x,u)\,dt + \Phi(x(T), T).

Applying the Bellman-Jacobi framework,

    J(t, x) = \max_u \; E\left[ f(t,x,u)\,\Delta t + J(t + \Delta t, x + \Delta x) \right].

Expanding around J(t,x),

    J(t + \Delta t, x + \Delta x) = J(t,x) + J_t(t,x)\,\Delta t + J_x(t,x)\,\Delta x + \tfrac{1}{2} J_{xx}(t,x)\,\Delta x^2 + \text{h.o.t.}

Applying Ito's Lemma to the stochastic differential equation,

    \Delta x^2 = g^2\,\Delta t^2 + 2 g \sigma\,\Delta t\,\Delta z + \sigma^2\,\Delta z^2 = \sigma^2\,\Delta t + \text{h.o.t.}

Substituting both of these expressions into the Bellman-Jacobi formulation,

    J(t,x) = \max_u \; E\left[ f\,\Delta t + J(t,x) + J_t(t,x)\,\Delta t + J_x(t,x)\,g\,\Delta t + J_x(t,x)\,\sigma\,\Delta z + \tfrac{1}{2} J_{xx}(t,x)\,\sigma^2\,\Delta t \right].

Subtracting J(t,x) from both sides, dividing through by \Delta t, and taking \Delta t to zero yields

    -J_t(t,x) = \max_u \left[ f(t,x,u) + J_x(t,x)\,g(t,x,u) + \tfrac{1}{2}\sigma^2(t,x,u)\,J_{xx}(t,x) \right]

subject to the boundary condition J(x(T), T) = \Phi(x(T), T).

Lecture XI: Quasi-Newton Methods of Optimization

I. A Baseline Scenario

A. In this lecture we develop several alternatives to the Newton-Raphson algorithm. As a starting point, I want to discuss a prototype algorithm.

Algorithm U (model algorithm for n-dimensional unconstrained minimization). Let x^{(k)} be the current estimate of x^*, the point which minimizes the objective function f(x).
U1. Test for convergence. If the conditions for convergence are satisfied, the algorithm terminates with x^{(k)} as the solution.
U2. Compute a search direction. Compute a nonzero n-vector p^{(k)}, the direction of the search.
U3. Compute a step length. Compute a scalar \alpha^{(k)}, the step length, for which f(x^{(k)} + \alpha^{(k)} p^{(k)}) < f(x^{(k)}).
U4. Update the estimate of the minimum. Set x^{(k+1)} = x^{(k)} + \alpha^{(k)} p^{(k)}, set k = k + 1, and go back to step U1.

B. Given the steps of the prototype algorithm, I want to develop a sample problem against which the various algorithms can be compared. The notebook Algorithm3.ma includes the numeric problem of maximizing a utility function U(x_1, x_2, x_3, x_4) subject to the budget constraint

    x_1 + 2 x_2 + 3 x_3 + x_4 = 100.

Using Newton-Raphson, the optimal point for this problem is found in 10 iterations using 1.23 seconds on the DEC Alpha.

II. An Overview of Newton and Quasi-Newton Algorithms

A. The Newton-Raphson methodology can be used in step U2 of the prototype algorithm. Specifically, the search direction can be determined by

    p^{(k)} = -\left[ \nabla^2 f(x^{(k)}) \right]^{-1} \nabla f(x^{(k)}).

B. Quasi-Newton algorithms involve an approximation to the Hessian matrix. For example, we could replace the Hessian matrix with the negative of the identity matrix for the maximization problem. In this case the search direction would be

    p^{(k)} = I_n\,\nabla f(x^{(k)})

where I_n is the identity matrix conformable with the gradient vector. This replacement is referred to as the steepest descent method. In our sample problem this methodology requires 990 iterations and 29.28 seconds on the DEC Alpha. This result highlights some interesting features of the quasi-Newton methods:

1. The steepest descent method requires more overall iterations. In this example, the steepest descent method requires 99 times as many iterations as the Newton-Raphson method.

2. Typically, the time spent on each iteration is reduced. In the current comparison the steepest descent method requires 0.030 seconds per iteration while Newton-Raphson requires 0.123 seconds per iteration.

C. Obviously, substituting the identity matrix uses no real information from the Hessian matrix. An alternative to this drastic simplification would be to systematically derive a matrix H^{(k)} which uses curvature information akin to the Hessian matrix. The projection could then be derived as

    p^{(k)} = -\left[ H^{(k)} \right]^{-1} \nabla_x f(x^{(k)}).
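The prototype Algorithm U and the contrast between the Newton-Raphson and steepest descent directions can be sketched in a few lines of Python. The sketch below is only an illustration on the Rosenbrock test function with a crude backtracking line search; it is not the Mathematica notebook Algorithm3.ma, and its iteration counts will differ from the DEC Alpha figures quoted above.

```python
import numpy as np

def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                     200*(x[1] - x[0]**2)])

def hess(x):
    return np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                     [-400*x[0], 200.0]])

def algorithm_u(direction, x0, tol=1e-8, max_iter=50_000):
    """Prototype Algorithm U: U1 convergence test, U2 search direction,
    U3 backtracking step length, U4 update."""
    x = np.asarray(x0, float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # U1: converged
            return x, k
        p = direction(x, g)                  # U2: search direction
        alpha = 1.0                          # U3: simple backtracking
        while rosenbrock(x + alpha * p) >= rosenbrock(x) and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * p                    # U4: update the estimate
    return x, max_iter

newton   = lambda x, g: -np.linalg.solve(hess(x), g)   # Newton-Raphson direction
steepest = lambda x, g: -g                              # steepest descent direction

for name, rule in [("Newton-Raphson", newton), ("Steepest descent", steepest)]:
    xs, iters = algorithm_u(rule, [-1.2, 1.0])
    print(f"{name:17s} iterations: {iters:6d}  solution: {xs.round(6)}")
```

As in the lecture's timing comparison, the steepest descent rule is cheap per iteration but needs far more iterations than the Newton direction.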
III. Conjugate Gradient Methods

A. One class of quasi-Newton methods is the conjugate gradient methods, which build up information on the Hessian matrix. From our standard starting point, we take a Taylor series expansion around the point x^{(k)}:

    \nabla_x f(x^{(k)} + s^{(k)}) \cong \nabla_x f(x^{(k)}) + \nabla^2_{xx} f(x^{(k)})\,s^{(k)} \quad \text{for some } s^{(k)} \in N(x^{(k)}, \delta).

Solving this expression for the term involving the Hessian yields

    \nabla_x f(x^{(k)} + s^{(k)}) - \nabla_x f(x^{(k)}) \cong \nabla^2_{xx} f(x^{(k)})\,s^{(k)}.

Thus the change in the gradient contains the same curvature information as the original Hessian. Defining

    y^{(k)} = \nabla_x f(x^{(k)} + s^{(k)}) - \nabla_x f(x^{(k)}),

this amounts to y^{(k)} \cong \nabla^2_{xx} f(x^{(k)})\,s^{(k)}. So we want to approximate the information in the Hessian using some matrix B^{(k+1)} satisfying

    y^{(k)} = B^{(k+1)} s^{(k)}.

One way to generate B^{(k+1)} is to start with the current B^{(k)} and add new information on the current solution:

    B^{(k+1)} = B^{(k)} + u v'
    y^{(k)} = B^{(k)} s^{(k)} + u v' s^{(k)}

where u and v are n x 1 vectors. Carrying through the vector multiplication,

    (v' s^{(k)})\,u = y^{(k)} - B^{(k)} s^{(k)}

where v' s^{(k)} is a scalar. Thus

    u = \frac{1}{v' s^{(k)}} \left( y^{(k)} - B^{(k)} s^{(k)} \right).

Substituting for u into the expression for B^{(k+1)} yields

    B^{(k+1)} = B^{(k)} + \frac{1}{v' s^{(k)}} \left( y^{(k)} - B^{(k)} s^{(k)} \right) v'.

Given this derivation, a logical choice for v would be y^{(k)} - B^{(k)} s^{(k)}. This approximation yields an updated Hessian of

    B^{(k+1)} = B^{(k)} + \frac{\left( y^{(k)} - B^{(k)} s^{(k)} \right)\left( y^{(k)} - B^{(k)} s^{(k)} \right)'}{\left( y^{(k)} - B^{(k)} s^{(k)} \right)' s^{(k)}}.

This update results in a symmetric approximation to the Hessian. In general, any approximation B to the Hessian can be made symmetric by

    \bar{B} = \tfrac{1}{2}\left( B + B' \right).

Substituting such an expression into the general updating procedure results in

    B^{(k+1)} = B^{(k)} + \frac{1}{v' s^{(k)}} \left[ \left( y^{(k)} - B^{(k)} s^{(k)} \right) v' + v \left( y^{(k)} - B^{(k)} s^{(k)} \right)' \right] - \frac{\left( y^{(k)} - B^{(k)} s^{(k)} \right)' s^{(k)}}{\left( v' s^{(k)} \right)^2}\, v v'.

The advantage of this formulation is that the approximation of the Hessian matrix will be symmetric for any vector v.

B. Two prominent conjugate gradient methods are the Davidon-Fletcher-Powell (DFP) update and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update.

1. In the DFP update, v is set equal to y^{(k)}, yielding

    B^{(k+1)} = B^{(k)} - \frac{B^{(k)} s^{(k)} s^{(k)\prime} B^{(k)}}{s^{(k)\prime} B^{(k)} s^{(k)}} + \frac{y^{(k)} y^{(k)\prime}}{y^{(k)\prime} s^{(k)}} + \left( s^{(k)\prime} B^{(k)} s^{(k)} \right) w^{(k)} w^{(k)\prime},
    w^{(k)} = \frac{y^{(k)}}{y^{(k)\prime} s^{(k)}} - \frac{B^{(k)} s^{(k)}}{s^{(k)\prime} B^{(k)} s^{(k)}}.

2. The BFGS update is then

    B^{(k+1)} = B^{(k)} - \frac{B^{(k)} s^{(k)} s^{(k)\prime} B^{(k)}}{s^{(k)\prime} B^{(k)} s^{(k)}} + \frac{y^{(k)} y^{(k)\prime}}{y^{(k)\prime} s^{(k)}}.
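The BFGS formula above translates directly into code. The following Python sketch implements the update inside a simple quasi-Newton loop that starts from B^{(0)} = I, as in the numerical example of the next section. The test function, line search, and tolerances are arbitrary illustrative choices, not the routine used to generate the tables below.

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS update of the Hessian approximation:
    B+ = B - (B s s' B)/(s' B s) + (y y')/(y' s)."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def quasi_newton_bfgs(f, grad, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, float)
    B = np.eye(len(x))                      # start from the identity, as in Section IV
    g = grad(x)
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        p = -np.linalg.solve(B, g)          # search direction from the current B
        alpha = 1.0                         # crude backtracking line search
        while f(x + alpha * p) >= f(x) and alpha > 1e-12:
            alpha *= 0.5
        s = alpha * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                       # change in the gradient (curvature information)
        if abs(y @ s) > 1e-12:              # skip the update if the denominator degenerates
            B = bfgs_update(B, s, y)
        x, g = x_new, g_new
    return x, max_iter

# example: minimize a simple convex test function
f = lambda x: (x[0] - 1)**2 + 2*(x[1] + 0.5)**2 + 0.1*x[0]**4
grad = lambda x: np.array([2*(x[0] - 1) + 0.4*x[0]**3, 4*(x[1] + 0.5)])
x_star, iters = quasi_newton_bfgs(f, grad, [3.0, 3.0])
print(iters, x_star)
```

Swapping bfgs_update for the rank-one or DFP formulas above changes only the update line, which is the point of the general uv' framework.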
IV. A Numerical Example

A. Using the previously specified problem and starting with an identity matrix as the original Hessian matrix, each algorithm was used to maximize the utility function. Each of the procedures has the same first iteration because they start from the same original Hessian matrix and analytical gradient. The initial Hessian approximation for all the quasi-Newton procedures was the identity matrix, B^{(0)} = I_3, compared with the analytical Hessian of the problem. (For each method the notes also report the updated approximation B^{(k)} and its inverse at each iteration; only the step comparisons are reproduced below.) This substitution implies an initial step of

    s = (0.7337,\; 0.9766,\; 0.2428)

compared with an optimal (Newton-Raphson) step of

    s^* = (4.3559,\; 4.3476,\; 4.3226).

In discussing the difference in the steps I will focus on two attributes. The first attribute is the relative length of the step (its 2-norm): if the conjugate gradient routines can be made identical to Newton-Raphson with the appropriate step-length algorithm, we could then focus on designing step lengths. The second attribute is the direction of the step. As we discussed in Lecture 9, one of the gains of Newton-Raphson is the refinement of the search direction through the information in the Hessian; the value of this refinement can be seen in the relative gain of Newton-Raphson over the steepest descent method. Dividing each vector by its 2-norm yields:

Initial iteration
    Newton-Raphson:      step = (4.3559, 4.3476, 4.3226),  2-norm = 7.5207,  normalized = (0.579191, 0.578087, 0.574763)
    Conjugate Gradient:  step = (0.7337, 0.9766, 0.2428),  2-norm = 1.2454,  normalized = (0.589129, 0.784167, 0.194958)

1. The Rank One Approximation

a. Iteration 1
    Newton-Raphson:      step = (6.7466, 7.6270, 4.8254),  2-norm = 11.2682,  normalized = (0.5987, 0.6769, 0.4282)
    Conjugate Gradient:  step = (4.3187, 5.0130, 2.0042),  2-norm = 6.9136,   normalized = (0.6247, 0.7251, 0.2899)

b. Iteration 2
    Newton-Raphson:      step = (10.8541, 11.6909, 6.0789),  2-norm = 17.0717,  normalized = (0.6358, 0.6848, 0.3561)
    Conjugate Gradient:  step = (7.7298, 8.9067, 3.7912),    2-norm = 12.3876,  normalized = (0.6240, 0.7190, 0.3060)

2. PSB (Powell Symmetric Broyden)

a. Iteration 1
    Newton-Raphson:      step = (6.7466, 7.6270, 4.8255),  2-norm = 11.2682,  normalized = (0.5987, 0.6769, 0.4282)
    Conjugate Gradient:  step = (4.3118, 5.0129, 1.9948),  2-norm = 6.9065,   normalized = (0.6243, 0.7258, 0.2888)

b. Iteration 2
    Newton-Raphson:      step = (10.8525, 11.7000, 6.0686),  2-norm = 17.0732,  normalized = (0.6356, 0.6853, 0.3554)
    Conjugate Gradient:  step = (7.7228, 8.9130, 3.7768),    2-norm = 12.3834,  normalized = (0.6236, 0.7198, 0.3050)

3. DFP

a. Iteration 1
    Newton-Raphson:      step = (6.7466, 7.6270, 4.8255),  2-norm = 11.2682,  normalized = (0.5987, 0.6769, 0.4282)
    Conjugate Gradient:  step = (4.1395, 5.0091, 1.7613),  2-norm = 6.7327,   normalized = (0.6148, 0.7440, 0.2616)

b. Iteration 2
    Newton-Raphson:      step = (10.8069, 11.9291, 5.8015),  2-norm = 17.1099,  normalized = (0.6316, 0.6972, 0.3391)
    Conjugate Gradient:  step = (7.5224, 9.0598, 3.3966),    2-norm = 12.2557,  normalized = (0.6138, 0.7392, 0.2771)

4. BFGS

a. Iteration 1
    Newton-Raphson:      step = (6.7466, 7.6270, 4.8255),  2-norm = 11.2682,  normalized = (0.5987, 0.6769, 0.4282)
    Conjugate Gradient:  step = (4.2788, 5.0121, 1.9501),  2-norm = 6.8726,   normalized = (0.6226, 0.7293, 0.2838)

b. Iteration 2
    Newton-Raphson:      step = (10.8447, 11.7436, 6.0193),  2-norm = 17.0807,  normalized = (0.6349, 0.6875, 0.3524)
    Conjugate Gradient:  step = (7.6884, 8.9427, 3.7078),    2-norm = 12.3625,  normalized = (0.6219, 0.7234, 0.2999)

Lecture XVI: More Complicated Solutions to Differential Equations

I. Nonlinear Differential Equations

A. In fairly general terms, we can assert that a first-order linear differential equation has a unique solution. This assertion follows our solution of the problem

    \dot{x}(t) + a(t)\,x(t) = g(t).

B. However, solutions for nonlinear differential equations, where \dot{x} = f(t, x), are somewhat more difficult and lack a general solution framework. For example,

    \dot{x} = x^3, \qquad x(0) = 0

raises some additional difficulties.

C. Two procedures for solving nonlinear differential equations:

1. Separable Differential Equations

a. Given the general expression of a differential equation

    \frac{dx}{dt} = f(t, x),

an alternative way to express this relationship is

    M(t, x) + N(t, x)\,\frac{dx}{dt} = 0.

b. One subgroup of this class of differential equations is referred to as separable differential equations, where

    M(t) + N(x)\,\frac{dx}{dt} = 0

so that each term involves only t or only x, respectively. Solving this system then involves integrating each term separately:

    M(t)\,dt + N(x)\,dx = 0 \;\Rightarrow\; \int M(t)\,dt + \int N(x)\,dx = c.

c. Example 1:

    \frac{dx}{dt} = \frac{3t^2 + 4t + 2}{2(x - 1)}, \qquad x(0) = -1.

Separating the function,

    2(x - 1)\,dx = (3t^2 + 4t + 2)\,dt
    2\int (z - 1)\,dz = \int (3s^2 + 4s + 2)\,ds + c
    x^2 - 2x = t^3 + 2t^2 + 2t + c.

Given that x(0) = -1, we have 1 + 2 = c, so c = 3. Thus the implicit form of the solution is

    x^2 - 2x = t^3 + 2t^2 + 2t + 3.

d. Example 2:

    \frac{dx}{dt} = \frac{t^2}{x}.

Separating the equation,

    x\,dx = t^2\,dt
    \int z\,dz = \int s^2\,ds + c
    \frac{x^2}{2} = \frac{t^3}{3} + c \;\Rightarrow\; x^2 = \frac{2}{3} t^3 + c.
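The separable example above can be verified symbolically. The SymPy sketch below is an addition (the original notes do not use SymPy); it confirms that the branch x(t) = 1 - sqrt(t^3 + 2t^2 + 2t + 4) of the implicit solution x^2 - 2x = t^3 + 2t^2 + 2t + 3 satisfies both the differential equation and the initial condition x(0) = -1.

```python
import sympy as sp
from sympy.solvers.ode import checkodesol

t = sp.symbols('t')
x = sp.Function('x')

# dx/dt = (3t^2 + 4t + 2) / (2(x - 1)),  x(0) = -1
ode = sp.Eq(x(t).diff(t), (3*t**2 + 4*t + 2) / (2*(x(t) - 1)))

# explicit branch consistent with x(0) = -1,
# equivalent to the implicit form x^2 - 2x = t^3 + 2t^2 + 2t + 3
candidate = 1 - sp.sqrt(t**3 + 2*t**2 + 2*t + 4)

print(checkodesol(ode, sp.Eq(x(t), candidate)))   # (True, 0) if it solves the ODE
print(candidate.subs(t, 0))                       # -1, matching the initial condition
```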
2. Exact Differential Equations

a. A second useful technique starts from a general equation

    \psi(t, x) = c.

Taking the total differential of this equation (with dc = 0), we obtain

    \psi_t(t, x)\,dt + \psi_x(t, x)\,dx = 0

or

    \psi_t(t, x) + \psi_x(t, x)\,\frac{dx}{dt} = 0.

Obviously, since \psi_t(t,x)\,dt + \psi_x(t,x)\,dx = d\psi(t,x), the function \psi can be recovered by integrating either term,

    \psi(t, x) = \int \psi_x(t, x)\,dx \quad \text{or} \quad \psi(t, x) = \int \psi_t(t, x)\,dt.

Spotting this symmetry between the terms allows for the solution of these problems.

b. Example 1:

    2 t x^3 + 3 t^2 x^2\,\frac{dx}{dt} = 0

which implies that \psi(t,x) = t^2 x^3, or

    t^2 x^3 = c \;\Rightarrow\; x(t) = \left( \frac{c}{t^2} \right)^{1/3}.

In general this form can be spotted by Young's theorem. Specifically,

    \frac{\partial}{\partial t}\left[ \frac{\partial \psi(t,x)}{\partial x} \right] = \frac{\partial}{\partial x}\left[ \frac{\partial \psi(t,x)}{\partial t} \right].

In the above scenario,

    \frac{\partial \psi(t,x)}{\partial t} = 2 t x^3 \;\Rightarrow\; \frac{\partial}{\partial x}\left[ \frac{\partial \psi(t,x)}{\partial t} \right] = 6 t x^2.

To confirm the exact nature of the solution,

    \frac{\partial \psi(t,x)}{\partial x} = 3 t^2 x^2 \;\Rightarrow\; \frac{\partial}{\partial t}\left[ \frac{\partial \psi(t,x)}{\partial x} \right] = 6 t x^2,

so the cross-partials agree and the equation is exact.

II. Second and Higher Order Linear Differential Equations

A. The general nth-order differential equation can be written as

    a_0(t)\,x^{(n)} + a_1(t)\,x^{(n-1)} + \dots + a_{n-1}(t)\,x^{(1)} + a_n(t)\,x = g(t)

where x^{(n)} = d^n x / dt^n. Introducing the general differential operator D^i \equiv d^i / dt^i,

    D^2 x(t) = \ddot{x}(t), \qquad D^1 x(t) = \dot{x}(t), \qquad D^0 x(t) = x(t),

the nth-order linear differential equation can then be written as

    \left[ a_0(t) D^n + a_1(t) D^{n-1} + \dots + a_{n-1}(t) D^1 + a_n(t) \right] x(t) = g(t)

or L(D)\,x(t) = g(t).

B. Solving for the Complementary Solution

1. Following the first-order solution, we would posit that the solution of an nth-order linear differential equation would be some variant of x(t) = e^{\lambda t}. Taking a second-order differential equation

    (D^2 + b D + c)\,x(t) = \ddot{x} + b\dot{x} + c x = 0
    D^2 x = \lambda^2 e^{\lambda t}, \qquad D x = \lambda e^{\lambda t}, \qquad x = e^{\lambda t},

substituting these values back into the differential equation yields

    e^{\lambda t}\left( \lambda^2 + b\lambda + c \right) = 0.

Because e^{\lambda t} never equals zero,

    \lambda^2 + b\lambda + c = 0 \;\Rightarrow\; \lambda = \frac{-b \pm \sqrt{b^2 - 4c}}{2}.

a. Example 1. Starting with an example with constant coefficients,

    \ddot{x} = 2\dot{x} + 3x,

the characteristic equation for the problem becomes

    \lambda^2 - 2\lambda - 3 = 0.

Thus there are two possible solutions to this problem:

    x(t) = e^{-t} \;\Rightarrow\; \dot{x} = -e^{-t}, \; \ddot{x} = e^{-t}: \quad \ddot{x} - 2\dot{x} - 3x = e^{-t} + 2e^{-t} - 3e^{-t} = 0

or

    x(t) = e^{3t} \;\Rightarrow\; \dot{x} = 3e^{3t}, \; \ddot{x} = 9e^{3t}: \quad \ddot{x} - 2\dot{x} - 3x = 9e^{3t} - 6e^{3t} - 3e^{3t} = 0.

b. Thus this problem has two solutions. Among other things, we are interested in whether these solutions are independent. The independence of the solutions is determined by the Wronskian

    W(t_0) = \begin{vmatrix} x_1(t_0) & x_2(t_0) \\ \dot{x}_1(t_0) & \dot{x}_2(t_0) \end{vmatrix}
           = \begin{vmatrix} e^{\lambda_1 t} & e^{\lambda_2 t} \\ \lambda_1 e^{\lambda_1 t} & \lambda_2 e^{\lambda_2 t} \end{vmatrix}
           = (\lambda_2 - \lambda_1)\,e^{(\lambda_1 + \lambda_2)t}.

In the current example, at t_0 = 0,

    W(t_0) = \begin{vmatrix} 1 & 1 \\ -1 & 3 \end{vmatrix} = 3 - (-1) = 4 \neq 0,

so the solutions are independent:

    x(t) = c_1 e^{-t} + c_2 e^{3t}.

2. In general, the characteristic equation can have three kinds of solutions:

a. The characteristic equation has two distinct real roots, as demonstrated above.

b. The characteristic equation has two identical roots, in which case the complementary function becomes

    x_c(t) = (A_1 + A_2 t)\,e^{\lambda t}.

c. The characteristic function has imaginary roots, in which case the solution becomes

    x_c(t) = e^{\alpha t} A \cos(\beta t - \varepsilon), \qquad \alpha = \mathrm{Re}(\lambda), \quad \beta = \mathrm{Im}(\lambda), \quad \varepsilon = \tan^{-1}(B_2 / B_1),

where B_1 and B_2 are functions of the arbitrary constants.

C. Particular Integrals

1. Method of Undetermined Coefficients

a. The method of undetermined coefficients is best described as the method of good guesses. The simplest way to describe this method is by demonstration. Assume that you have the differential equation

    \ddot{x} + \dot{x} - 2x = 2t.

A good guess of the particular solution would be

    x_p(t) = a t + b, \qquad \dot{x}_p(t) = a, \qquad \ddot{x}_p(t) = 0
    \ddot{x} + \dot{x} - 2x = 0 + a - 2(a t + b) = 2t.

This implies

    -2a = 2 \;\Rightarrow\; a = -1, \qquad a - 2b = 0 \;\Rightarrow\; b = -\tfrac{1}{2}.

Checking the solution, with x_p(t) = -t - \tfrac{1}{2}, \dot{x}_p(t) = -1, \ddot{x}_p(t) = 0,

    \ddot{x} + \dot{x} - 2x = 0 - 1 - 2\left( -t - \tfrac{1}{2} \right) = 2t.

b. Example 2:

    2\ddot{x} + 4\dot{x} + 6x = 3 e^{-2t}.

Guess x_p(t) = A e^{-2t}, so that \dot{x}_p(t) = -2 A e^{-2t} and \ddot{x}_p(t) = 4 A e^{-2t}:

    2\left( 4 A e^{-2t} \right) + 4\left( -2 A e^{-2t} \right) + 6 A e^{-2t} = 3 e^{-2t}
    8A - 8A + 6A = 3 \;\Rightarrow\; A = \tfrac{1}{2}.
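Both worked examples in this section can be checked symbolically. The SymPy sketch below is an addition, not part of the notes; it recovers the complementary function c_1 e^{-t} + c_2 e^{3t} for \ddot{x} - 2\dot{x} - 3x = 0 and verifies the particular solutions found by undetermined coefficients.

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

# complementary solution of  x'' - 2x' - 3x = 0  (roots lambda = -1, 3)
homogeneous = sp.Eq(x(t).diff(t, 2) - 2*x(t).diff(t) - 3*x(t), 0)
print(sp.dsolve(homogeneous, x(t)))         # x(t) = C1*exp(-t) + C2*exp(3*t)

# particular solution of  x'' + x' - 2x = 2t  by undetermined coefficients
xp1 = -t - sp.Rational(1, 2)
residual1 = sp.diff(xp1, t, 2) + sp.diff(xp1, t) - 2*xp1 - 2*t
print(sp.simplify(residual1))               # 0

# particular solution of  2x'' + 4x' + 6x = 3*exp(-2t)
xp2 = sp.Rational(1, 2) * sp.exp(-2*t)
residual2 = 2*sp.diff(xp2, t, 2) + 4*sp.diff(xp2, t) + 6*xp2 - 3*sp.exp(-2*t)
print(sp.simplify(residual2))               # 0
```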
2. Laplace Transforms Defined

a. One tool in the solution of linear differential equations is the concept of an integral transform. An integral transform is a relationship of the form

    F(s) = \int_{\alpha}^{\beta} K(s, t)\,f(t)\,dt

where the function f(t) is transformed into another function F(s) by means of the integration. The function F is the transform of f, with K being referred to as the kernel of the transformation. The general idea is to transform f into a simpler problem F, which can be more easily solved; afterwards the original solution f can be recovered. The problem is then to choose an appropriate kernel and bounds of integration. One possibility for the selection of the kernel was proposed by Laplace. The Laplace transform uses K(s,t) = e^{-st}, yielding

    L\{f(t)\} = F(s) = \int_0^{\infty} e^{-st} f(t)\,dt.

b. This transform is particularly associated with linear differential equations with constant coefficients. Since the Laplace transformation is defined as an improper integral from zero to infinity, it is often useful to introduce the limiting concept

    \int_0^{\infty} f(t)\,dt = \lim_{A \to \infty} \int_0^{A} f(t)\,dt.

i. Example 1:

    L\{1\} = \int_0^{\infty} e^{-st}\,dt = \lim_{A \to \infty} \int_0^{A} e^{-st}\,dt = \lim_{A \to \infty} \left[ -\frac{1}{s} e^{-st} \right]_0^A = \frac{1}{s}, \qquad s > 0.

ii. Example 2. For f(t) = e^{at},

    L\{e^{at}\} = \int_0^{\infty} e^{-st} e^{at}\,dt = \int_0^{\infty} e^{-(s-a)t}\,dt = \left[ -\frac{e^{-(s-a)t}}{s - a} \right]_0^{\infty} = \frac{1}{s - a}, \qquad s > a.

c. In order to use the Laplace transform, we note without proof that

    L\{f'(t)\} = s\,L\{f(t)\} - f(0).

Similarly, the Laplace transform of a second derivative is

    L\{f''(t)\} = s^2\,L\{f(t)\} - s f(0) - f'(0).

Thus a second-order differential equation

    y'' + y' - 2y = 0

can be written as

    L\{y''\} + L\{y'\} - 2 L\{y\} = 0
    \left[ s^2 L\{y\} - s y(0) - y'(0) \right] + \left[ s L\{y\} - y(0) \right] - 2 L\{y\} = 0.

Given the initial conditions y(0) = 1 and y'(0) = 0, this expression can be written as

    \left( s^2 + s - 2 \right) Y(s) = s y(0) + y(0) + y'(0) = s + 1
    Y(s) = \frac{s + 1}{s^2 + s - 2} = \frac{s + 1}{(s + 2)(s - 1)}.

Solving the differential equation then involves finding the function y(t) whose Laplace transform can be expressed as Y(s).

Lecture XXIV: Alternative End Point Restrictions and Dynamic Programming

I. The Salvage Value Problem

A. The problem involves a value being assigned to the state variable at the end of the planning horizon.

B. Mathematically,

    \max_u \int_{t_0}^{t_1} f(t, x, u)\,dt + \Phi(x(t_1), t_1)
    \text{s.t. } \dot{x}(t) = g(t, x, u), \qquad x(t_0) = x_0, \qquad t_0, t_1 \text{ fixed}, \qquad x(t_1) \text{ free}.

1. The standard control formulation becomes

    J = \int_{t_0}^{t_1} \left[ f + \lambda g + \dot{\lambda} x \right] dt - \lambda(t_1) x(t_1) + \lambda(t_0) x(t_0) + \Phi(x(t_1), t_1).

2. Hypothesizing alternative paths u(t) = u^*(t) + a\,h(t), with corresponding state paths y(t, a),

    J(a) = \int_{t_0}^{t_1} \left[ f(t, y, u^* + a h) + \lambda\,g(t, y, u^* + a h) + \dot{\lambda}\,y \right] dt - \lambda(t_1)\,y(t_1, a) + \lambda(t_0)\,y(t_0, a) + \Phi(y(t_1, a), t_1).

Taking the derivative with respect to a and letting a go to zero yields

    J'(0) = \int_{t_0}^{t_1} \left[ \left( f_x + \lambda g_x + \dot{\lambda} \right) y_a + \left( f_u + \lambda g_u \right) h \right] dt + \left[ \Phi_x(x(t_1), t_1) - \lambda(t_1) \right] y_a(t_1),

which yields the traditional optimality conditions plus

    \left[ \Phi_x(x(t_1), t_1) - \lambda(t_1) \right] y_a(t_1) = 0 \;\Rightarrow\; \lambda(t_1) = \Phi_x(x(t_1), t_1).

II. Fixed Endpoint Problems

A. Suppose both the initial and terminal points are fixed, as in the first formulation of the calculus of variations problem:

    \max_u \int_{t_0}^{t_1} f(t, x, u)\,dt
    \text{s.t. } \dot{x}(t) = g(t, x, u), \qquad x(t_0) = x_0, \qquad x(t_1) = x_1, \qquad t_0, t_1 \text{ fixed}.

1. Let u^* be the optimal control function and x^* the corresponding state function. Let u denote another feasible control satisfying the equation of motion and the terminal conditions. Then

    J^* - J = \Delta J = \int_{t_0}^{t_1} \left\{ \left[ f(t, x^*, u^*) + \lambda g(t, x^*, u^*) + \dot{\lambda} x^* \right] - \left[ f(t, x, u) + \lambda g(t, x, u) + \dot{\lambda} x \right] \right\} dt.

2. Using a Taylor series expansion,

    \Delta J = \int_{t_0}^{t_1} \left[ \left( f_x + \lambda g_x + \dot{\lambda} \right) \delta x + \left( f_u + \lambda g_u \right) \delta u \right] dt + \text{h.o.t.},

labeling \delta x = x^* - x and \delta u = u^* - u as perturbations. Similar to before, the optimal path must satisfy f_x + \lambda g_x + \dot{\lambda} = 0 for any \delta x. The difference is that \delta u must now be chosen to satisfy the terminal conditions at both ends. Kamien and Schwartz show in an appendix that if there is a feasible control u that forces the state from x(t_0) = x_0 to x(t_1) = x_1, then the coefficient on \delta u must be zero:

    f_u + \lambda g_u = 0 \quad \text{when } \lambda \text{ obeys } \dot{\lambda} = -\left( f_x + \lambda g_x \right).

3. Our inventory problem then becomes

    \min_u \int_0^{T} \left[ c_1 u(t)^2 + c_2 x(t) \right] dt
    \text{s.t. } \dot{x}(t) = u(t), \qquad x(0) = 0, \qquad x(T) = B, \qquad u(t) \geq 0.

The Hamiltonian then becomes

    H(t, x, u, \lambda) = c_1 u(t)^2 + c_2 x(t) + \lambda(t)\,u(t).

Deriving the costate condition,

    -\frac{\partial H}{\partial x} = -c_2 = \dot{\lambda}(t) \;\Rightarrow\; \lambda(t) = -c_2 t + d_1.

Using this result in the optimality condition,

    \frac{\partial H}{\partial u} = 2 c_1 u(t) + \lambda(t) = 0 \;\Rightarrow\; 2 c_1 u(t) - c_2 t + d_1 = 0
    u(t) = \frac{c_2 t}{2 c_1} - \frac{d_1}{2 c_1}.

Lastly, using the equation of motion,

    \dot{x}(t) = u(t) = \frac{c_2 t}{2 c_1} - \frac{d_1}{2 c_1}
    x(t) = \frac{c_2 t^2}{4 c_1} - \frac{d_1 t}{2 c_1} + d_2
    x(0) = 0 \;\Rightarrow\; d_2 = 0
    x(T) = \frac{c_2 T^2}{4 c_1} - \frac{d_1 T}{2 c_1} = B \;\Rightarrow\; d_1 = \frac{c_2 T}{2} - \frac{2 B c_1}{T}.

Therefore the optimal path for the state variable becomes

    x(t) = \frac{c_2 t^2}{4 c_1} - \frac{t}{2 c_1}\left( \frac{c_2 T}{2} - \frac{2 B c_1}{T} \right) = \frac{c_2 t^2}{4 c_1} - \frac{c_2 t T}{4 c_1} + \frac{B t}{T} = \frac{c_2\,t(t - T)}{4 c_1} + \frac{B t}{T}.

The production relationship can then be written as

    u(t) = \frac{c_2 (2t - T)}{4 c_1} + \frac{B}{T}.

The marginal value of inventory at any point in time is then given by the costate variable

    \lambda(t) = c_2 \left( \frac{T}{2} - t \right) - \frac{2 B c_1}{T}.
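The closed-form solution of the inventory problem can be verified symbolically. The SymPy sketch below is an addition, not part of the notes; it checks that the state path satisfies the boundary conditions, that u(t) = \dot{x}(t) reproduces the production rule, and that u and \lambda satisfy the optimality and costate conditions derived above.

```python
import sympy as sp

t, T, B, c1, c2 = sp.symbols('t T B c1 c2', positive=True)

# optimal state path derived above
x = c2 * t * (t - T) / (4 * c1) + B * t / T

# boundary conditions x(0) = 0 and x(T) = B
print(sp.simplify(x.subs(t, 0)), sp.simplify(x.subs(t, T)))        # 0, B

# control u = x'(t) should equal c2*(2t - T)/(4*c1) + B/T
u = sp.diff(x, t)
print(sp.simplify(u - (c2 * (2*t - T) / (4 * c1) + B / T)))        # 0

# costate lam(t) = c2*(T/2 - t) - 2*B*c1/T and the optimality condition 2*c1*u + lam = 0
lam = c2 * (T/2 - t) - 2 * B * c1 / T
print(sp.simplify(2 * c1 * u + lam))                               # 0
print(sp.simplify(sp.diff(lam, t) + c2))                           # 0  (costate equation)
```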
III. Various Endpoint Conditions

A. A very general problem:

    \max_u \int_{t_0}^{t_1} f(t, x, u)\,dt + \Phi(x(t_1), t_1)

subject to:

1. Standard equations of motion: \dot{x}_i(t) = g_i(t, x, u), i = 1, ..., n.

2. Initial conditions: x_i(t_0) = x_{i0}, i = 1, ..., n.

3. Terminal conditions:

a. Fixed endpoints: x_i(t_1) = x_{i1} fixed, i = 1, ..., q.

b. Free terminal values: x_i(t_1) free, i = q+1, ..., r.

c. Inequality constraints: x_i(t_1) >= 0, i = r+1, ..., s.

d. Endpoints obeying some function: K(x_{s+1}, ..., x_n, t_1) >= 0.

B. Solution of the general problem:

    J = \int_{t_0}^{t_1} \left[ f + \lambda' \left( g - \dot{x} \right) \right] dt + \Phi(x(t_1), t_1).

Integrating by parts,

    J = \int_{t_0}^{t_1} \left[ f + \lambda' g + \dot{\lambda}' x \right] dt - \lambda(t_1)' x(t_1) + \lambda(t_0)' x(t_0) + \Phi(x(t_1), t_1).

Assume an optimal path x^*, u^* with the associated optimal values J^*, f^*, g^*, and \Phi^*. Let x and u be feasible paths satisfying the constraints on t_0 <= t <= t_1 + \delta t_1, with values J, f, g, and \Phi. Assuming the same \lambda and expanding J - J^* to first order in the perturbations \delta x = x - x^*, \delta u = u - u^*, and \delta x_1 \cong \delta x(t_1) + \dot{x}(t_1)\,\delta t_1 (the total variation in the terminal state),

    J - J^* \cong \int_{t_0}^{t_1} \left[ \left( f_x + \lambda' g_x + \dot{\lambda}' \right) \delta x + \left( f_u + \lambda' g_u \right) \delta u \right] dt + \left[ \Phi_x(x(t_1), t_1) - \lambda(t_1)' \right] \delta x_1 + \left[ f + \lambda' g + \Phi_t \right]_{t_1} \delta t_1.

Since x^*, u^* is optimal, collecting terms gives

    \int_{t_0}^{t_1} \left[ \left( f_x + \lambda' g_x + \dot{\lambda}' \right) \delta x + \left( f_u + \lambda' g_u \right) \delta u \right] dt + \left[ \Phi_x - \lambda(t_1)' \right] \delta x_1 + \left[ f + \lambda' g + \Phi_t \right]_{t_1} \delta t_1 \leq 0

for every admissible perturbation, which implies:

1. Our standard optimality conditions:

a. The costate or multiplier condition \dot{\lambda}' = -\left( f_x + \lambda' g_x \right), or in scalar form

    \dot{\lambda}_j = -\left( \frac{\partial f}{\partial x_j} + \sum_{i=1}^{n} \lambda_i \frac{\partial g_i}{\partial x_j} \right), \qquad j = 1, ..., n.

b. The optimality condition f_u + \lambda' g_u = 0, or in scalar form

    \frac{\partial f}{\partial u_j} + \sum_{i=1}^{n} \lambda_i \frac{\partial g_i}{\partial u_j} = 0 \qquad \text{for each control } u_j.

