# MATH 541: Introduction to Numerical Analysis & Computing

San Diego State University (SDSU)

This 136-page set of class notes was uploaded by Miss Gladys Lubowitz on Tuesday, October 20, 2015. The notes belong to MATH 541 at San Diego State University, taught by Staff in Fall. Since the upload they have received 12 views. For similar materials see /class/225276/math541-san-diego-state-university in Mathematics (M) at San Diego State University.


## Numerical Analysis and Computing — Lecture Notes #02

Calculus Review; Computer Arithmetic and Finite Precision; Algorithms and Convergence; Solutions of Equations of One Variable.

Peter Blomgren (blomgren.peter@gmail.com), Department of Mathematics and Statistics, Dynamical Systems Group, Computational Sciences Research Center, San Diego State University, San Diego, CA 92182-7720, http://terminus.sdsu.edu — Fall 2009.

### Outline

1. **Calculus Review** — Limits, Continuity, and Convergence; Differentiability, Rolle's and the Mean Value Theorem; Extreme Value, Intermediate Value, and Taylor's Theorem
2. **Computer Arithmetic & Finite Precision** — Binary Representation, IEEE 754-1985; Something's Missing; Roundoff and Truncation Errors, Digits; Cancellation
3. **Algorithms** — Algorithms & Pseudo-Code; Fundamental Concepts
4. **Solutions of Equations of One Variable, $f(x) = 0$** — Root Finding; The Bisection Method; When do we stop?; Homework 1

### Why Review Calculus?

It's a good warm-up for our brains. When developing numerical schemes we will use theorems from calculus to guarantee that our algorithms make sense. If the theory is sound, then when our programs fail we look for bugs in the code.

### Background Material: A Crash Course in Calculus

Key concepts from calculus: limits, continuity, convergence, differentiability, Rolle's theorem, the Mean Value Theorem, the Extreme Value Theorem, the Intermediate Value Theorem, and Taylor's theorem.

### Limits and Continuity

**Definition (Limit).** A function $f$ defined on a set $X \subset \mathbb{R}$ of real numbers has the limit $L$ at $x_0$, written $\lim_{x \to x_0} f(x) = L$, if given any real number $\varepsilon > 0$ there exists a real number $\delta > 0$ such that $|f(x) - L| < \varepsilon$ whenever $x \in X$ and $0 < |x - x_0| < \delta$.

**Definition (Continuity at a point).** Let $f$ be a function defined on a set $X$ of real numbers and $x_0 \in X$. Then $f$ is continuous at $x_0$ if $\lim_{x \to x_0} f(x) = f(x_0)$.

### Examples: Continuity and Discontinuity at $x_0 = 0.5$

*(Figure: the limit $x \to x_0$, with $x_0 = 0.5$, exists for a continuous function built from $x$ and $\sin(2\pi x)$.)*

*(Figure: a piecewise function, with different branches for $x < 0.5$ and $x \ge 0.5$, whose one-sided limits disagree — a jump discontinuity at $x_0 = 0.5$.)*

**Spike discontinuity.** The function $f(x) = 1$ if $x = 0.5$, $f(x) = 0$ if $x \neq 0.5$, satisfies $\lim_{x \to 0.5} f(x) = 0 \neq 1 = f(0.5)$: the limit exists, but $f$ has a discontinuity at $x_0 = 0.5$.

### Continuity and Convergence

**Definition (Continuity on a set).** The function $f$ is continuous on the set $X$, written $f \in C(X)$, if it is continuous at each point $x \in X$.

**Definition (Convergence of a sequence).** Let $\{x_n\}_{n=1}^{\infty}$ be an infinite sequence of real or complex numbers. The sequence converges to $x$ (has the limit $x$) if $\forall \varepsilon > 0$ there exists $N(\varepsilon) \in \mathbb{Z}$ such that $|x_n - x| < \varepsilon$ for all $n > N(\varepsilon)$. The notation $\lim_{n \to \infty} x_n = x$ means that $\{x_n\}_{n=1}^{\infty}$ converges to $x$.

### Illustration: Convergence of a Complex Sequence

*(Figure: a sequence $\{z_k\}_{k=1}^{\infty}$ converges to $z_0 \in \mathbb{C}$ (the black dot) if, for any $\varepsilon$ (the radius of the circle), there is a value $N$, which depends on $\varepsilon$, so that the tail of the sequence $\{z_k\}_{k=N}^{\infty}$ is inside the circle.)*

### Differentiability

**Theorem.** If $f$ is a function defined on a set $X$ of real numbers and $x_0 \in X$, then the following statements are equivalent: (a) $f$ is continuous at $x_0$; (b) if $\{x_n\}_{n=1}^{\infty}$ is any sequence in $X$ converging to $x_0$, then $\lim_{n \to \infty} f(x_n) = f(x_0)$.

**Definition (Differentiability at a point).** Let $f$ be a function defined on an open interval containing $x_0$, $a < x_0 < b$. $f$ is differentiable at $x_0$ if
$$ f'(x_0) = \lim_{x \to x_0} \frac{f(x) - f(x_0)}{x - x_0} $$
exists. If the limit exists, $f'(x_0)$ is the derivative at $x_0$.

**Definition (Differentiability on an interval).** If $f'(x_0)$ exists for every $x_0 \in X$, then $f$ is differentiable on $X$.

*(Figure: the difference quotient $(f(x) - f(x_0))/(x - x_0)$ approaches the slope (derivative) at $x_0$, $f'(x_0)$, as $x \to x_0$.)*

### Continuity and Rolle's Theorem

**Theorem (Differentiability $\Rightarrow$ Continuity).** If $f$ is differentiable at $x_0$, then $f$ is continuous at $x_0$.

**Theorem (Rolle's Theorem).** Suppose $f \in C[a,b]$ and $f$ is differentiable on $(a,b)$. If $f(a) = f(b)$, then $\exists c \in (a,b)$ with $f'(c) = 0$.
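The derivative-as-a-limit picture above is easy to probe numerically. A small sketch (the function $\sin$ and the point $x_0 = 0.3$ are arbitrary illustrative choices, not from the notes):

```python
import math

def diff_quotient(f, x0, h):
    """Forward difference quotient (f(x0 + h) - f(x0)) / h."""
    return (f(x0 + h) - f(x0)) / h

# As h -> 0 the quotient approaches f'(x0); here f = sin, so f' = cos.
x0 = 0.3
for h in (1e-1, 1e-3, 1e-6):
    q = diff_quotient(math.sin, x0, h)
    print(h, q, abs(q - math.cos(x0)))
```

Shrinking `h` shows the quotient stabilizing near $\cos(0.3)$, consistent with the definition of the derivative.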
### Mean Value Theorem

**Theorem (Mean Value Theorem).** If $f \in C[a,b]$ and $f$ is differentiable on $(a,b)$, then $\exists c \in (a,b)$ such that
$$ f'(c) = \frac{f(b) - f(a)}{b - a}. $$

### Extreme Value Theorem

**Theorem (Extreme Value Theorem).** If $f \in C[a,b]$, then $\exists c_1, c_2 \in [a,b]$ with $f(c_1) \le f(x) \le f(c_2)$ for all $x \in [a,b]$. If, in addition, $f$ is differentiable on $(a,b)$, then the numbers $c_1, c_2$ occur either at the endpoints of $[a,b]$ or where $f'(x) = 0$.

### Intermediate Value Theorem

**Theorem (Intermediate Value Theorem).** If $f \in C[a,b]$ and $K$ is any number between $f(a)$ and $f(b)$, then there exists a number $c \in (a,b)$ for which $f(c) = K$.

### Taylor's Theorem

**Theorem (Taylor's Theorem).** Suppose $f \in C^n[a,b]$, $f^{(n+1)}$ exists on $[a,b]$, and $x_0 \in [a,b]$. Then $\forall x \in [a,b]$, $\exists \xi(x)$ between $x_0$ and $x$ with $f(x) = P_n(x) + R_n(x)$, where
$$ P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k, \qquad R_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x - x_0)^{n+1}. $$
$P_n(x)$ is called the Taylor polynomial of degree $n$, and $R_n(x)$ is the remainder term (truncation error). This theorem is extremely important for numerical analysis: Taylor expansion is a fundamental step in the derivation of many of the algorithms we see in this class, and in Math 693a/b.

### Illustration: Taylor's Theorem

*(Figure: $f(x) = \sin(x)$ together with the Taylor polynomials $P_5(x)$, $P_9(x)$, and $P_{13}(x)$, where)*
$$ P_{13}(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \frac{x^{11}}{11!} + \frac{x^{13}}{13!}. $$

### Taylor Expansions in Matlab

- A Taylor polynomial of degree $n$ requires all derivatives up to order $n$ (and order $n+1$ for the remainder).
- Derivatives may be more complicated expressions than the original function.
- Matlab can compute derivatives for you. Matlab symbolic computations — try this:

```matlab
>> syms x
>> diff(sin(2*x))
>> diff(sin(2*x), 3)
>> taylor(exp(x), 5)      % 2009-era syntax; newer Matlab: taylor(exp(x), x, 'Order', 5)
>> taylor(exp(x), 5, 1)   % expansion about x0 = 1
```

## Computer Arithmetic and Finite Precision

### Finite Precision: A Single char

Computers use a finite number of bits (0's and 1's) to represent numbers. For instance, an 8-bit unsigned integer (a.k.a. a char) storing the pattern `01001101` represents
$$ 2^6 + 2^3 + 2^2 + 2^0 = 64 + 8 + 4 + 1 = 77, $$
which is the US-ASCII code for the upper-case character 'M'.
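The 8-bit example can be verified directly; a quick Python check (the notes themselves use Matlab, but the arithmetic is language-independent):

```python
# The bit pattern 01001101 interpreted as an unsigned 8-bit integer:
bits = 0b01001101
value = 2**6 + 2**3 + 2**2 + 2**0   # 64 + 8 + 4 + 1
assert bits == value == 77
# 77 is the US-ASCII code for the upper-case character 'M'.
assert chr(value) == 'M'
print(f"{value:08b} -> {value} -> {chr(value)!r}")
```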
### Finite Precision: A 64-bit Real Number (double)

The Binary Floating Point Arithmetic Standard 754-1985 — IEEE (the Institute for Electrical and Electronics Engineers) — specifies the following layout for a 64-bit real number:
$$ s\ c_{10}c_9\ldots c_1c_0\ m_{51}m_{50}\ldots m_1m_0 $$

| Symbol | Bits | Description |
|---|---|---|
| $s$ | 1 | The sign bit (0 = positive, 1 = negative) |
| $c$ | 11 | The characteristic (exponent) |
| $m$ | 52 | The mantissa |

$$ r = (-1)^s\, 2^{c-1023}\,(1 + m), \qquad c = \sum_{k=0}^{10} c_k 2^k, \qquad m = \sum_{k=0}^{51} m_k 2^{k-52}. $$

### Burden & Faires' Description Is Not Complete

As described on the previous slide, we cannot represent zero. There are some special signals in IEEE 754-1985:

| Type | $s$ (1 bit) | $c$ (11 bits) | $m$ (52 bits) |
|---|---|---|---|
| signaling NaN | u | 2047 (max) | .0uuu…u (with at least one 1 bit) |
| quiet NaN | u | 2047 (max) | .1uuu…u |
| negative infinity | 1 | 2047 (max) | .0000…0 |
| positive infinity | 0 | 2047 (max) | .0000…0 |
| negative zero | 1 | 0 | .0000…0 |
| positive zero | 0 | 0 | .0000…0 |

(From www.freesoft.org's discussion of RFC 1832.)

### Examples: Finite Precision

**Example 1: $r_1 = 3.0$.**
`0 10000000000 1000…0`
$$ r_1 = (-1)^0\, 2^{1024-1023}\left(1 + \tfrac{1}{2}\right) = 2 \cdot \tfrac{3}{2} = 3.0. $$

**Example 2: the smallest positive real number (in this scheme).**
`0 00000000000 000…001`
$$ r_2 = (-1)^0\, 2^{0-1023}\left(1 + 2^{-52}\right) = \left(1 + 2^{-52}\right) 2^{-1023} \approx 10^{-308}. $$

**Example 3: the largest positive real number.**
`0 11111111110 111…1`
$$ r_3 = (-1)^0\, 2^{2046-1023}\left(1 + (1 - 2^{-52})\right) = 2^{1023}\left(2 - 2^{-52}\right) \approx 10^{308}. $$

### Something's Missing — Gaps in the Representation (1 of 3)

There are gaps in the floating-point representation. The representation `0 00000000000 000…001` corresponds to the value $(1 + 2^{-52})\,2^{-1023}$; the next larger floating-point value is `0 00000000000 000…010`, i.e. $(1 + 2^{-51})\,2^{-1023}$. The difference between these two values is $2^{-52}\cdot 2^{-1023} = 2^{-1075}$. Any number strictly between them is not representable.
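The layout can be inspected from a running program. A sketch using Python's `struct` module to take apart the 64-bit pattern of 3.0 (Example 1 above):

```python
import struct

def fields(x: float):
    """Split a double into its IEEE 754 (s, c, m) bit fields."""
    bits = int.from_bytes(struct.pack('>d', x), 'big')
    s = bits >> 63                 # 1-bit sign
    c = (bits >> 52) & 0x7FF       # 11-bit characteristic
    m = bits & ((1 << 52) - 1)     # 52-bit mantissa field
    return s, c, m

s, c, m = fields(3.0)
# 3.0 = (-1)^0 * 2^(1024-1023) * (1 + 1/2): c = 0b10000000000, top mantissa bit set.
assert (s, c) == (0, 0b10000000000) and m == 1 << 51
value = (-1)**s * 2.0**(c - 1023) * (1 + m / 2**52)
assert value == 3.0
print(s, bin(c), bin(m))
```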
### Something's Missing — Gaps in the Representation (2 of 3)

A gap of $2^{-1075}$ doesn't seem too bad. However, the size of the gap depends on the value itself. Consider $r = 3.0$,
`0 10000000000 1000…00`, and the next value `0 10000000000 1000…01`. The difference is $2^{-51} \approx 4.4 \cdot 10^{-16}$.

### Something's Missing — Gaps in the Representation (3 of 3)

At the other extreme, the difference between `0 11111111110 111…11` (the largest value) and the previous value `0 11111111110 111…10` is $2^{1023} \cdot 2^{-52} = 2^{971} \approx 1.99 \cdot 10^{292}$. That's a fairly significant gap: the number of atoms in the observable universe can be estimated to be no more than $\sim 10^{80}$.

### The Relative Gap

It makes more sense to factor the exponent out of the discussion and talk about the relative gap:

| Exponent | Gap | Relative gap = Gap/Exponent |
|---|---|---|
| $2^{-1023}$ | $2^{-1075}$ | $2^{-52}$ |
| $2^{1}$ | $2^{-51}$ | $2^{-52}$ |
| $2^{1023}$ | $2^{971}$ | $2^{-52}$ |

Any difference between numbers smaller than the local gap is not representable; e.g., every number within half a local gap of $3.0$ is represented by the value $3.0$.

**Theorem.** Floating-point numbers represent intervals.

Since most humans find it hard to think in binary representation, from now on we will, for simplicity and without loss of generality, assume that floating-point numbers are represented in the normalized decimal floating-point form, as $k$-digit decimal machine numbers:
$$ \pm 0.d_1 d_2 \ldots d_{k-1} d_k \times 10^n, \qquad 1 \le d_1 \le 9,\ 0 \le d_i \le 9. $$

### k-Digit Decimal Machine Numbers

Any real number can be written in the form
$$ \pm 0.d_1 d_2 \ldots d_\infty \times 10^n, $$
given infinite patience and storage space. We can obtain the floating-point representation $\mathrm{fl}(r)$ in two ways:

1. **Truncating (chopping):** just keep the first $k$ digits.
2. **Rounding:** if $d_{k+1} \ge 5$, then add 1 to $d_k$; truncate.

Examples ($k = 5$): $\mathrm{fl}(\pi) = 0.31415 \times 10^1$ (truncated), $\mathrm{fl}(\pi) = 0.31416 \times 10^1$ (rounded). In both cases the error introduced is called the roundoff error.

### Quantifying the Error

Let $p^*$ be an approximation to $p$. Then:

**Definition (Absolute error).** $|p - p^*|$.

**Definition (Relative error).** $\dfrac{|p - p^*|}{|p|}$, $p \neq 0$.

**Definition (Significant digits).** The number of significant digits is the largest value of $t$ for which
$$ \frac{|p - p^*|}{|p|} < 5 \times 10^{-t}. $$

### Sources of Numerical Error (Important!)

1. **Representation** (roundoff).
2. **Cancellation.** Consider
$$ 0.12345678012345 \times 10^1 - 0.12345678012344 \times 10^1 = 0.10000000000000 \times 10^{-13}: $$
this value has at most 1 significant digit. If you assume a canceled value has more significant digits, the computer will happily give you some numbers. (I don't want you programming the autopilot for any airline!)

### Examples: 5-Digit Arithmetic

Rounding, 5-digit arithmetic:
$$ (9.6384 + 0.0026678) - 9.6410: \quad 9.6384 + 0.0027 = 9.6411, \quad 9.6411 - 9.6410 = 0.00010000. $$

Truncating, 5-digit arithmetic:
$$ 9.6384 + 0.0026 = 9.6410, \quad 9.6410 - 9.6410 = 0.00000. $$

Rearrangement changes the result:
$$ (9.6384 - 9.6410) + 0.0026678 = -0.26000\times 10^{-2} + 0.26678 \times 10^{-2} = 0.67800 \times 10^{-4}. $$

Numerically, the order of computation matters. This is a HARD problem.

### Example: Loss of Significant Digits due to Subtractive Cancellation

Consider the recursive relation
$$ x_{n+1} = 1 - (n+1)\,x_n, \qquad x_0 = 1 - \frac{1}{e}. $$
This sequence can be shown to converge to 0 (in 2 slides). Subtractive cancellation produces an error which is approximately equal to the machine precision times $n!$.

### Subtractive Cancellation: Example Output

| $n$ | $x_n$ | $n!$ | $n$ | $x_n$ | $n!$ |
|---|---|---|---|---|---|
| 0 | 0.63212056 | 1 | 11 | 0.07735223 | 3.99e+007 |
| 1 | 0.36787944 | 1 | 12 | 0.07177325 | 4.79e+008 |
| 2 | 0.26424112 | 2 | 13 | 0.06694778 | 6.23e+009 |
| 3 | 0.20727665 | 6 | 14 | 0.06273108 | 8.72e+010 |
| 4 | 0.17089341 | 24 | 15 | 0.05903379 | 1.31e+012 |
| 5 | 0.14553294 | 120 | 16 | 0.05545930 | 2.09e+013 |
| 6 | 0.12680236 | 720 | 17 | 0.05719187 | 3.56e+014 |
| 7 | 0.11238350 | 5.04e+003 | 18 | −0.02945367 | 6.40e+015 |
| 8 | 0.10093197 | 4.03e+004 | 19 | 1.55961974 | 1.22e+017 |
| 9 | 0.09161229 | 3.63e+005 | 20 | −30.19239489 | 2.43e+018 |
| 10 | 0.08387707 | 3.63e+006 | | | |

### Example: Proof of Convergence to 0

The recursive relation is $x_{n+1} = 1 - (n+1)x_n$ with
$$ x_0 = 1 - \frac{1}{e} = \frac{1}{1!} - \frac{1}{2!} + \frac{1}{3!} - \cdots $$
From the recursive relation,
$$ x_1 = 1 - x_0 = \frac{1}{2!} - \frac{1}{3!} + \frac{1}{4!} - \cdots, \qquad x_2 = 1 - 2x_1 = 2!\left(\frac{1}{3!} - \frac{1}{4!} + \frac{1}{5!} - \cdots\right), $$
and, inductively,
$$ x_n = 1 - n\,x_{n-1} = n!\left(\frac{1}{(n+1)!} - \frac{1}{(n+2)!} + \cdots\right). $$
This shows that
$$ x_n = \frac{1}{n+1} - \frac{1}{(n+1)(n+2)} + \cdots \;\longrightarrow\; 0 \quad \text{as } n \to \infty. $$

### Matlab Code: Loss of Significant Digits

```matlab
% Loss of Significant Digits
clear
x(1) = 1 - 1/exp(1);   s(1) = 1;   f(1) = 1;
for i = 2:21
  x(i) = 1 - (i-1)*x(i-1);
  s(i) = 1/i;
  f(i) = (i-1)*f(i-1);
end
z = [(0:20)' x' s' f'];
fprintf(1, '\n   n       x(n)      1/(n+1)        n!\n');
fprintf(1, '%4.0f  %10.8f  %10.8f  %10.3g\n', z');
```

### Homework 1 (part 1)

Due Friday 9/18/2009, 12 noon. Part 1 (more to be assigned): Burden & Faires 1.2.4, 1.2.9, 1.2.19a.

## Algorithms and Convergence

### Algorithms

**Definition (Algorithm).** An algorithm is a procedure that describes, in an unambiguous manner, a finite sequence of steps to be performed in a specific order. In this class the objective of an algorithm is to implement a procedure to solve a problem, or approximate a solution to the problem.

Most homes have a collection of algorithms in printed form: we tend to call them recipes. There is also a well-known collection of algorithms out there called Numerical Recipes (google for it).

### Pseudo-Code

**Definition (Pseudo-code).** Pseudo-code is an algorithm description which specifies the input/output formats. Note that pseudo-code is not computer-language specific, but should be easily translatable to any procedural computer language. Examples of pseudo-code statements:

- for $i = 1, 2, \ldots, n$: Set $x = a + i\,h$
- While $i < N$ do Steps 1–2
- If … then … else
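The Matlab loop above carries over to any language. A Python sketch of the same recursion, showing the factorial-sized amplification of the initial roundoff:

```python
import math

x = 1.0 - 1.0 / math.e          # x_0 = 1 - 1/e, already rounded in double precision
xs = [x]
for n in range(1, 21):
    x = 1.0 - n * x             # x_n = 1 - n * x_{n-1}
    xs.append(x)

# The true values satisfy 0 < x_n < 1/(n+1), so x_20 should be below 0.05;
# in double precision the initial roundoff is amplified by roughly n!.
print(xs[20])
```

The computed `xs[20]` is wildly wrong (tens in magnitude rather than below 0.05), exactly as in the tabulated output.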
### Key Concepts for Numerical Algorithms: Stability

**Definition (Stability).** An algorithm is said to be stable if small changes in the input generate small changes in the output. (At some point we need to quantify what "small" means.) If an algorithm is stable for a certain range of initial data, then it is said to be conditionally stable. Stability issues are discussed in great detail in Math 543.

### Key Concepts for Numerical Algorithms: Error Growth

Suppose $E_0 > 0$ denotes the initial error and $E_n$ represents the error after $n$ operations. If $E_n \approx C\,n\,E_0$, for a constant $C$ which is independent of $n$, then the growth is **linear**. If $E_n \approx C^n E_0$, $C > 1$, then the growth is **exponential** — in this case the error will dominate very fast (an undesirable scenario).

Linear error growth is usually unavoidable, and in the case where $C$ and $E_0$ are small the results are generally acceptable: a stable algorithm. Exponential error growth is unacceptable — regardless of the size of $E_0$ the error grows rapidly: an unstable algorithm.

### Example (BF 1.3.3) — 1 of 2

The recursive equation
$$ p_n = \frac{10}{3}\,p_{n-1} - p_{n-2}, \qquad n = 2, 3, \ldots $$
has the exact solution
$$ p_n = c_1 \left(\frac{1}{3}\right)^n + c_2\, 3^n $$
for any constants $c_1$ and $c_2$ (determined by starting values). In particular, if $p_0 = 1$ and $p_1 = \frac{1}{3}$, we get $c_1 = 1$ and $c_2 = 0$, so $p_n = (1/3)^n$ for all $n$. Now consider what happens in 5-digit rounding arithmetic.

### Example (BF 1.3.3) — 2 of 2

In 5-digit rounding arithmetic,
$$ \hat{p}_0 = 1.0000, \qquad \hat{p}_1 = 0.33333, $$
which modifies $\hat{c}_1 = 1.0000$, $\hat{c}_2 = -0.12500 \times 10^{-5}$. The generated sequence is
$$ \hat{p}_n = 1.0000\,(0.33333)^n - 0.12500 \times 10^{-5}\,(3.0000)^n \qquad \text{— exponential growth!} $$
$\hat{p}_n$ quickly becomes a very poor approximation to $p_n$, due to the exponential growth of the initial roundoff error.

### Reducing the Effects of Roundoff Error

The effects of roundoff error can be reduced by using higher-digit arithmetic, such as the double- or multiple-precision arithmetic available on most computers.
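The exponential error growth in example BF 1.3.3 is easy to reproduce in double precision (a sketch; the notes use 5-digit decimal arithmetic, but the mechanism is identical — the representation error in 1/3 excites the $3^n$ mode):

```python
p = [1.0, 1.0 / 3.0]                       # p_0 = 1, p_1 = 1/3 (inexact in binary!)
for n in range(2, 41):
    p.append((10.0 / 3.0) * p[n - 1] - p[n - 2])

# Exact solution: p_n = (1/3)^n.  The tiny roundoff in p_1 (and in 10/3)
# grows like 3^n and eventually dominates the computed sequence.
for n in (10, 20, 30, 40):
    print(n, p[n], (1.0 / 3.0) ** n)
```

By $n = 40$ the true value is $\sim 10^{-19}$ while the computed value is off by many orders of magnitude: an unstable recursion.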
Disadvantages of using double-precision arithmetic are that it takes more computation time, and that the growth of the roundoff error is not eliminated, only postponed. Sometimes (but not always) it is possible to reduce the growth of the roundoff error by restructuring the calculations.

### Key Concepts: Rate of Convergence

**Definition (Rate of Convergence).** Suppose the sequence $\{\beta_n\}_{n=1}^{\infty}$ converges to zero, and $\{\alpha_n\}_{n=1}^{\infty}$ converges to a number $\alpha$. If $\exists K > 0$ such that
$$ |\alpha_n - \alpha| \le K|\beta_n| \quad \text{for } n \text{ large enough}, $$
then we say that $\{\alpha_n\}$ converges to $\alpha$ with a rate of convergence $O(\beta_n)$ ("Big-Oh of $\beta_n$"). We write $\alpha_n = \alpha + O(\beta_n)$.

Note: the sequence $\{\beta_n\}$ is usually chosen to be $1/n^p$ for some positive value of $p$.

### Examples: Rate of Convergence

**Example 1.** If $\alpha_n = 1/n^p$ for some $p > 0$, then $\alpha_n \to 0$ and, trivially, $\alpha_n = 0 + O\!\left(1/n^p\right)$.

**Example 2.** Consider the sequence, as $n \to \infty$,
$$ \alpha_n = \sin\!\left(\frac{1}{n}\right). $$
We Taylor expand $\sin(x)$ about $x_0 = 0$:
$$ \sin\!\left(\frac{1}{n}\right) = \frac{1}{n} - \frac{1}{3!}\frac{1}{n^3} + O\!\left(\frac{1}{n^5}\right). $$
Hence $|\alpha_n| \le \frac{1}{n}$, and it follows that $\alpha_n = 0 + O\!\left(\frac{1}{n}\right)$.
Note: $O\!\left(\frac{1}{n^3}\right) + O\!\left(\frac{1}{n^5}\right) = O\!\left(\frac{1}{n^3}\right)$, since $\frac{1}{n^5} \ll \frac{1}{n^3}$ as $n \to \infty$.

### Generalizing to Continuous Limits

**Definition (Rate of Convergence).** Suppose $\lim_{h \to 0} G(h) = 0$ and $\lim_{h \to 0} F(h) = L$. If
$$ \exists K > 0:\quad |F(h) - L| \le K|G(h)| \quad \forall h < H \ \text{(for some } H > 0\text{)}, $$
then $F(h) = L + O(G(h))$: we say that $F(h)$ converges to $L$ with a rate of convergence $O(G(h))$. Usually $G(h) = h^p$, $p > 0$.

**Example 2b.** Consider the function, as $h \to 0$,
$$ \alpha(h) = \sin(h) - h. $$
We Taylor expand $\sin(x)$ about $x_0 = 0$:
$$ \sin(h) = h - \frac{h^3}{3!} + O(h^5). $$
Hence $\alpha(h) = -\frac{h^3}{6} + O(h^5)$, and it follows that $\alpha(h) = 0 + O(h^3)$.
Note: $O(h^3) + O(h^5) = O(h^3)$, since $h^5 \ll h^3$ as $h \to 0$.

### Homework 1

Due Friday 9/18/2009, 12 noon. Part 1: Burden & Faires 1.2.4, 1.2.9, 1.2.19a. Part 2: Burden & Faires 1.3.2, 1.3.7.

## Solutions of Equations of One Variable

Our new favorite problem: $f(x) = 0$ — root finding.

### Introduction

We are going to solve the equation $f(x) = 0$, i.e. find roots of the equation, for functions $f$ that are complicated enough that there is no closed-form solution, and/or we are too lazy to find it. In a lot of cases we will solve problems to which we can find the closed-form solutions; we do this as a training ground, and to evaluate how good our numerical methods are.

### The Bisection Method (1 of 4)

Suppose $f$ is continuous on the interval $[a_0, b_0]$ and $f(a_0)\,f(b_0) < 0$. This means the function changes sign at least once in the interval. The Intermediate Value Theorem guarantees the existence of $c \in (a_0, b_0)$ such that $f(c) = 0$. Without loss of generality (just consider the function $-f(x)$) we can assume for now that $f(a_0) < 0$. We will construct a sequence of intervals containing the root $c$:
$$ [a_0, b_0] \supset [a_1, b_1] \supset \cdots \supset [a_{n-1}, b_{n-1}] \supset [a_n, b_n] \ni c. $$

### The Bisection Method (2 of 4)

The subintervals are determined recursively. Given $[a_{k-1}, b_{k-1}]$, let $m_k = (a_{k-1} + b_{k-1})/2$ be the midpoint. If $f(m_k) = 0$, we're done; otherwise
$$ [a_k, b_k] = \begin{cases} [m_k, b_{k-1}] & \text{if } f(m_k) < 0, \\ [a_{k-1}, m_k] & \text{if } f(m_k) > 0. \end{cases} $$
This construction guarantees that $f(a_k)\,f(b_k) < 0$ and $c \in [a_k, b_k]$.

### The Bisection Method (3 of 4)

After $n$ steps the interval $[a_n, b_n]$ has the length
$$ b_n - a_n = \left(\frac{1}{2}\right)^n (b_0 - a_0). $$
The midpoint $m_{n+1} = (a_n + b_n)/2$ can be taken as the estimate for the root $c$, and we have
$$ |c - m_{n+1}| \le \left(\frac{1}{2}\right)^{n+1} (b_0 - a_0). $$

### The Bisection Method (4 of 4)

Convergence is slow: at each step we gain one binary digit in accuracy. Since $10 \approx 2^{3.3}$, it takes on average 3.3 iterations to gain one decimal digit of accuracy. Note: the rate of convergence is completely independent of the function $f$ — we are only using the sign of $f$ at the endpoints of the intervals to make decisions on how to update. By making more effective use of the values of $f$ we can attain significantly faster convergence. First, an example.

### The Bisection Method: Example (1 of 2)

The bisection method applied to $f(x) = (x/2)^2 - \sin(x)$ with $[a_0, b_0] = [1.5, 2.0]$ and $(f(a_0), f(b_0)) = (-0.4350, 0.0907)$ gives:

| $k$ | $a_k$ | $b_k$ | $m_{k+1}$ | $f(m_{k+1})$ |
|---|---|---|---|---|
| 0 | 1.5000 | 2.0000 | 1.7500 | −0.2184 |
| 1 | 1.7500 | 2.0000 | 1.8750 | −0.0752 |
| 2 | 1.8750 | 2.0000 | 1.9375 | 0.0050 |
| 3 | 1.8750 | 1.9375 | 1.9062 | −0.0358 |
| 4 | 1.9062 | 1.9375 | 1.9219 | −0.0156 |
| 5 | 1.9219 | 1.9375 | 1.9297 | −0.0054 |
| 6 | 1.9297 | 1.9375 | 1.9336 | −0.0002 |
| 7 | 1.9336 | 1.9375 | 1.9355 | 0.0024 |
| 8 | 1.9336 | 1.9355 | 1.9346 | 0.0011 |
| 9 | 1.9336 | 1.9346 | 1.9341 | 0.0004 |

*(Figure: example 2 of 2 — plots of the successive bracketing intervals.)*

### The Bisection Method: Matlab Code

```matlab
% The Bisection Method.
% WARNING: This example ASSUMES that f(a) < 0 < f(b).
x = 1.5:0.001:2;
f = inline('(x/2).^2 - sin(x)', 'x');
a = 1.5;  b = 2.0;
for k = 0:9
  plot(x, f(x), 'k-', 'linewidth', 2);
  axis([1.45 2.05 -0.5 1.5]);  grid on;  hold on;
  plot([a b], f([a b]), 'ko', 'linewidth', 5);  hold off;
  m = (a + b)/2;
  if f(m) < 0
    a = m;
  else
    b = m;
  end
  pause
  print('-depsc', sprintf('bisec_int_%d.eps', k));   % save frame
end
```

### Stopping Criteria

When do we stop? We can

1. keep going until successive iterates are close: $|m_{k+1} - m_k| < \varepsilon$, or
2. close in relative terms: $|m_{k+1} - m_k| < \varepsilon\,|m_{k+1}|$, or
3. until the function value is small enough: $|f(m_{k+1})| < \varepsilon$.

No choice is perfect. In general, where no additional information about $f$ is known, the second criterion is the preferred one, since it comes the closest to testing the relative error.

## Chebyshev Polynomials & Least Squares (redux)

Peter Blomgren (blomgren.peter@gmail.com), Department of Mathematics and Statistics, Dynamical Systems Group, Computational Sciences Research Center, San Diego State University.

### Outline

1. **Chebyshev Polynomials** — Orthogonal Polynomials; Chebyshev Polynomials: Intro & Definitions; Properties
2. **Least Squares (redux)** — Examples; More than one variable? No problem!

### Orthogonal Polynomials: A Quick Summary

So far we have seen that the use of orthogonal polynomials can help us solve the normal equations — which arise in discrete and continuous least squares problems — without the need for expensive and numerically difficult matrix inversions.
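A Python version of the bisection loop above, using the same function and bracket as the example (which matches the tabulated values $f(1.5) \approx -0.4350$, $f(2.0) \approx 0.0907$); the stopping test here uses the interval length:

```python
import math

def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Bisection: assumes f is continuous with f(a) < 0 < f(b)."""
    assert f(a) < 0 < f(b), "root must be bracketed with a sign change"
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fm < 0:
            a = m          # root is in the right half
        else:
            b = m          # root is in the left half
    return (a + b) / 2

f = lambda x: (x / 2) ** 2 - math.sin(x)
root = bisect(f, 1.5, 2.0)
print(root)
```

The iterates reproduce the midpoints in the table (1.7500, 1.8750, 1.9375, …) before converging near 1.9338.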
The ideas and techniques we developed — i.e., Gram–Schmidt orthogonalization with respect to a weight function over any interval — have applications far beyond least squares problems.

The Legendre polynomials are orthogonal on the interval $[-1, 1]$ with respect to the weight function $w(x) = 1$. One curious property of the Legendre polynomials is that their roots (all real) yield the optimal node placement for Gaussian quadrature.

### The Legendre Polynomials: Background

The Legendre polynomials are solutions to the Legendre differential equation, which arises in numerous problems exhibiting spherical symmetry:
$$ (1 - x^2)\frac{d^2 y}{dx^2} - 2x\frac{dy}{dx} + \ell(\ell+1)\,y = 0, $$
or equivalently
$$ \frac{d}{dx}\left[(1 - x^2)\frac{dy}{dx}\right] + \ell(\ell+1)\,y = 0. $$
Applications: celestial mechanics (Legendre's original application), electrodynamics, etc.

### Other Orthogonal Polynomials

"Orthogonal polynomials have very useful properties in the solution of mathematical and physical problems. They provide a natural way to solve, expand, and interpret solutions to many types of important differential equations. Orthogonal polynomials are especially easy to generate using Gram–Schmidt orthonormalization." … "The roots of orthogonal polynomials possess many rather surprising and useful properties." (mathworld.wolfram.com)

### The Laguerre Polynomials: Background

The Laguerre polynomials are solutions to the Laguerre differential equation
$$ x\frac{d^2 y}{dx^2} + (1 - x)\frac{dy}{dx} + \lambda\,y = 0. $$
They are associated with the radial solution to the Schrödinger equation for the hydrogen atom's electron (the angular part being the spherical harmonics).

### More Orthogonal Polynomials: Background

| Polynomials | Interval | $w(x)$ |
|---|---|---|
| Chebyshev, 1st kind | $(-1, 1)$ | $(1 - x^2)^{-1/2}$ |
| Chebyshev, 2nd kind | $(-1, 1)$ | $(1 - x^2)^{1/2}$ |
| Gegenbauer | $(-1, 1)$ | $(1 - x^2)^{\alpha - 1/2}$ |
| Hermite | $(-\infty, \infty)$ | $e^{-x^2}$ |
| Jacobi | $(-1, 1)$ | $(1 - x)^\alpha (1 + x)^\beta$ |
| Legendre | $[-1, 1]$ | $1$ |
| Laguerre | $[0, \infty)$ | $e^{-x}$ |
| Laguerre, associated | $[0, \infty)$ | $x^k e^{-x}$ |

Today we'll take a closer look at the Chebyshev polynomials of the first kind. (The Hermite orthogonal polynomials in the table are not to be confused with the Hermite interpolating polynomials.)

### Chebyshev Polynomials: The Sales Pitch

Chebyshev polynomials are used to minimize approximation error. We will use them to solve the following problems:

1. Find an optimal placement of the interpolating points $x_0, x_1, \ldots, x_n$ to minimize the error in Lagrange interpolation.
2. Find a means of reducing the degree of an approximating polynomial with minimal loss of accuracy.

### Chebyshev Polynomials: Definitions

The Chebyshev polynomials $\{T_n(x)\}$ are orthogonal on $(-1, 1)$ with respect to the weight function $w(x) = (1 - x^2)^{-1/2}$, i.e.
$$ \int_{-1}^{1} T_i(x)\,T_j(x)\,\frac{dx}{\sqrt{1 - x^2}} = 0, \qquad i \neq j. $$
We could use the Gram–Schmidt orthogonalization process to find them, but it is easier to write down the definition and then check the properties.

**Definition (Chebyshev polynomials).** For $x \in [-1, 1]$, define
$$ T_n(x) = \cos(n \arccos x), \qquad \forall n \ge 0. $$
Note: $T_0(x) = \cos 0 = 1$, $T_1(x) = x$.
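The cosine definition can be checked against the polynomial recurrence numerically (a quick sketch; the recurrence itself is derived on the next slide):

```python
import math

def T(n, x):
    """Chebyshev polynomial of the first kind via T_{n+1} = 2x T_n - T_{n-1}."""
    t_prev, t_curr = 1.0, x            # T_0 = 1, T_1 = x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

# Compare with T_n(x) = cos(n arccos x) on a few sample points.
for n in range(6):
    for x in (-0.9, -0.3, 0.0, 0.5, 0.8):
        assert abs(T(n, x) - math.cos(n * math.acos(x))) < 1e-12
```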
the definition and then check the properties Dammit EZth Pcelymmieis For X 6 711 define Tnx cosnarccosx7 Vn 2 0 Note T0X cos0 17 T1X X Peter Blomgren i a u My my Chehyshev Polynomials Least Sqquot any vi v u u yr Chehyshev Polynomials Intro 8 Definitions 7 Chebyshev Polynomials Tn We introduce the notation t9 arccosx and get Tn0x E Tn0 cosnt97 where 9 6 0m Square redux Chehyshev Polynomials Le l H or onnoiniaIs Intro 44 Definitions as quates let uv Chebyshev Polynomials Tnx n 7 We introduce the notation l9 arccosx and get Tn0x E Tn0 cosnl97 where 0 6 On We can find a recurrence relation using these observations Tn10 cosn 10 cosn0 cos0 7 sinn0 sin0 Tn10 cosn 71W cosn0 cos0 sinn0 sin Tn10 Tn10 2cosn0 cos0 vv Peter Blomgrel Chehyshev Polynomials 8 Least Squares redux 7 nu45 Cllehysllev Polynomials Le l ll onnoiniaIs Intro 44 Definitions as quates uv Chebyshev Polynomials Tnx n 7 We introduce the notation l9 arccosx and get Tn0x E Tn0 cosnl97 where 0 6 On We can find a recurrence relation using these observations Tn10 cosn 10 cosn0 cos0 7 sinn0 sin0 Tn10 cosn 710 cosn0 cos0 l sinn0 sin Tn10 l Tn10 2cosn0 cos0 vv Returning to the original variable X we have Tn1x 2X cosnarccos X 7 739r1x7 Tn1x 2xTnx 7 Tn1x m Peter Blomgrel Cllehysllev Polynomials 8 Least Squares redux 7 nu45 Chehyshev Polynomials srum duv 4 a mu ghehyshev Polynomial Iner 8 Definitions The Chebyshev Polynomials V V 4 H Che39w quotg39yquot f ghehyshev Polynomial Iner 44 Definitions The Chebyshev Polynomials J T1x 7 3 6 T2x 0 5 i i 0 7 7 05 ghehysh Intro 8 Definitions Chehyshev Polynomials srum duv Hr l l ll ev Polynomial The Chebyshev Polynomials El El T3x Chehyshev Polynomials tSruas duv 4 a mu ghehyshev Polynomial Iner 8 Definitions The Chebyshev Polynomials 3 6 T2x E El T3x 9 6 T4x 0 5 0 7 Chehyshev Polynomials srum duv H il vn Chehyshev Polynomial Iner 8 Definitions The Chebyshev Polynomials 0 5 A A T5x Chehyshev Polynomials Least Squ any Pro perlies Orthogonality of the Chebyshev Polynomials 1 TnxTm 1 
W Square redux rthogonality of the Chebyshev Polynomials l 1 mxmx 1 dX COS narccosx COS marccosx A f 7 X2 1 Reintroducing t9 arccosx gives and the integral becomes 0 7r 7 cosn0cosm0d00 cosn0cosm0d0 Chehyshev Polynomials amp Least Squares redux 7 1245 Chehyshev P I Least Sq Pro perlies Orthogonality of the Chebyshev Polynomials l 1 mxmx 1 dX COS narccosx COS marccosx A f 7 X2 1 Reintroducing t9 arccosx gives 17le and the integral becomes 0 7r 7 cosn0cosm0 d0 cosn0 cosm0 d0 7r 0 Now we use the fact that cosn l m0 l cosn 7 m0 t9 t9 cosn cosm 2 Peter Blomgren Chehyshev Polynomials 8 Least Squares redux 7 1245 Chehyshev Polynomials Least Sqquot any Pro perlies Orthogonality of the Chebyshev Polynomials II We have quot cosn m0 cosn 7 m0 2 0 Square redux Chehyshev P I Least Sq Pro perlies Orthogonality of the Chebyshev Polynomials We have quot cosn m0 cosn 7 m0 d0 0 2 If m 7 n we get sinn m0 sin a m0 W a 0 2nm 2nim 07 7 Peter Blomgren Chehyshev Polynomials 8 Least Squares redux 7 1345 Chebyshev Polynomials Le as quas39es m uv Orthogonality of the Chebyshev Polynomials II We have quot cosn l m0 l cosn 7 m0 0 2 If m 7 n we get sinn l m0 l l 2n m 2n7 m if m n we have msin n m0 7 Hence the Chebyshev polynomials are orthogonal Peter Blomgrel Chebyshev Polynomials 8 Least Squares redux 7 1345 Chehyshev Polynomials L9 Squ re ux Pro perlies Zeros and Extrema of Chebyshev Polynomials T h eore m The Chebyshev polynomial of degree n 2 1 has n simple zeros in 717 1 at 2k 7 1 Xkcoslt77rgt7 k1n 2n Moreover Tnx assumes its absolute extrema at k XLCOSlt7Wgt7 With TrX71k7 k1n71 Peter Blon ren redux 7 1445 Chehyshev Polynomials Least quns39es re ux 90gtvale Wd Emme a Polynomials Tm The Chebyshev polynomial of degree n 2 1 has n simple zeros in 711 at 2k71 XkCOSlt 7r7 k1n 2n Moreover Tnx assumes its absolute extrema at k XLCOSlt7Wgt7 With TrX71k7 k1n71 Payoff No matter what the degree of the polynomial the oscilla tions are kept under control Peter Blomgren Chehyshev Polynomials Least Squ 
qu Pro p ruies Zeros and Extrema of Chebyshev Polynomials Proof Let 2k 7 1 k7r Xk cos 7r Xk cos i V 2n n TnXk cosnarccosxk cos narccos cos 2221 cos 0 Then Square redux Chehyshev Polynomials Least Squ qu Pro p ruies Zeros and Extrema of Chebyshev Polynomials Proof Let 2k 7 1 k7r Xk cos 7r Xk cos i V 2n n TnXk cosnarccosxk cos narccos cos 2221 cos 0 Then Tux ircoqnmoqm Square redux Chehyshev Polynomials Least Squ qu Pro p ruies Zeros and Extrema of Chebyshev Polynomials Proof Let 2k 7 1 k7r Xk cos 7r Xk cos i V 2n n TnXk cosnarccosxk cos narccos cos 2221 cos 0 Then TAX cosn arccosx W W quot ffmfli f 0 w Ch ehy 39 Square redux Chehyshev Polynomials Lem Square rerlux Zeros and Extrema of Chebyshev Polynomials Proof Let 2k 7 1 k7r Xk cos 7r Xk cos i V 2n n Then TnXk cosnarccosxk cos narccos cos 2221 cos 0 d nsin n arccos TAX Ecosn arccosx 1 Lu X0 0 V TnXlt cos narccos cos cosk7r 713 Chehyshev Polyno 39 Chebyshev Polynomials Least Squares re ux llvlioxnnrg Cindyam yn ori mime mam glam l A monic polynomial is a polynomial with leading coefficient 1 We get the monic Chebyshev polynomials 7 X by dividing Tnx by 2W1 n 1 Hence Peter Blomgren Chehyshev Polynomials Least Sqquot any Monic Chebyshev Polynomials II The location of the zeros and extrema of 7 X coincides with those of Tnx however the extreme values are N 7 k Tnx lt 21171 k1iun71i Square redux Chehyshev Polynomials Least Squ any Pro perlies Monic Chebyshev Polynomials II The location of the zeros and extrema of 7 X coincides with those of Tnx however the extreme values are 739nx lt71k k1 nili 2n1 7 7 7 efiniio Let 75 denote the set of all monic polynomials of degree n Square redux Chebyshev Polynomials Least quns39es l dux v womanle The location of the zeros and extrema of l nx coincides with those of Tnx however the extreme values are N 7 k Tnx lt 21171 k1iun71i De nition Let 75 denote the set of all monic polynomials of degree n 4 Theorem lM mHMlax The monic Chebyshev polynomials 7700 have the property that l 1 N N 
[Figure: the monic Chebyshev polynomials \tilde{T}_1(x) through \tilde{T}_5(x) on [-1,1].]

Optimal Node Placement in Lagrange Interpolation, I

Theorem. If x_0, x_1, ..., x_n are distinct points in the interval [-1,1] and f \in C^{n+1}[-1,1], and P(x) is the nth degree interpolating Lagrange polynomial, then for all x \in [-1,1] there exists \xi(x) \in [-1,1] so that

    f(x) - P(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!} \prod_{k=0}^{n} (x - x_k).

We have no control over f^{(n+1)}(x), but we can place the nodes in a clever way so as to minimize the maximum of \left|\prod_{k=0}^{n} (x - x_k)\right|.

Since \prod_{k=0}^{n} (x - x_k) is a monic polynomial of degree n+1, we know the min-max is obtained when the nodes are chosen so that

    \prod_{k=0}^{n} (x - x_k) = \tilde{T}_{n+1}(x), \quad \text{i.e.} \quad x_k = \cos\left(\frac{(2k+1)\pi}{2(n+1)}\right), \quad k = 0, 1, \dots, n.
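The min-max property behind this node choice can be sanity-checked numerically. The Python sketch below (our own; `node_poly_max` is a hypothetical helper name) compares the sampled maximum of |prod (x - x_k)| for equispaced versus Chebyshev nodes; the Chebyshev value matches the theoretical bound 1/2^n.

```python
import numpy as np

n = 8                              # degree; n + 1 = 9 interpolation nodes
xs = np.linspace(-1.0, 1.0, 100001)

def node_poly_max(nodes):
    """Sampled maximum over [-1,1] of |prod_k (x - x_k)|."""
    return np.abs(np.prod(xs[:, None] - nodes[None, :], axis=1)).max()

equi = np.linspace(-1.0, 1.0, n + 1)                 # equispaced nodes
k = np.arange(n + 1)
cheb = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))   # roots of T_{n+1}

print(node_poly_max(equi))   # noticeably larger than the Chebyshev value
print(node_poly_max(cheb))   # close to 1/2^n = 0.00390625
```

With Chebyshev nodes the node polynomial is exactly \tilde{T}_{n+1}(x), whose maximum magnitude 1/2^n is attained at the endpoints, which the sampling grid includes.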
Optimal Node Placement in Lagrange Interpolation, II

Theorem. If P(x) is the interpolating polynomial of degree at most n with nodes at the roots of T_{n+1}(x), then

    \max_{x \in [-1,1]} |f(x) - P(x)| \le \frac{1}{2^n (n+1)!} \max_{x \in [-1,1]} |f^{(n+1)}(x)|, \qquad \forall f \in C^{n+1}[-1,1].

Extending to any interval: the transformation

    \tilde{x} = \frac{(b-a)x + (a+b)}{2}

transforms the nodes x_k in [-1,1] into the corresponding nodes \tilde{x}_k in [a,b].

[Figure: example interpolation of f(x) with a Taylor polynomial, a Lagrange polynomial on equispaced nodes, and a Lagrange polynomial on optimal (Chebyshev) nodes, together with the error deviation.]

Least Squares Revisited: Introduction

Before we move on to new and exciting orthogonal polynomials with exotic names, let's take a moment or two and look at the usage of Least Squares approximation. This section is a "how to" with quite a few applied examples of least squares approximation.

Example 1: Warm-up. First we consider the problem of fitting 1st, 2nd and 3rd degree polynomials to the following data:

    x | 1.0    1.1    1.3    1.5    1.9    2.1
    y | 1.84   1.90   2.31   2.65   2.74   3.18

First we define the matrices:

    matlab>> A1 = [ones(size(x)), x];
             A2 = [A1, x.*x];
             A3 = [A2, x.*x.*x];

Then we solve the normal equations:

    matlab>> pcoef1 = A1\y;
             pcoef2 = A2\y;
             pcoef3 = A3\y;

Note: the matrices A1, A2 and A3 are "tall and skinny". Normally we would compute (An'*An)\(An'*y); however, when MATLAB encounters An\y it automatically gives us a solution in the least squares sense.
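The same warm-up fit in Python/NumPy (a translation of the MATLAB above; `lsq_poly_sse` is our own helper name). Here `np.linalg.lstsq` plays the role of MATLAB's backslash, and it should reproduce the fitting errors quoted in the notes.

```python
import numpy as np

# Data from Example 1
x = np.array([1.0, 1.1, 1.3, 1.5, 1.9, 2.1])
y = np.array([1.84, 1.90, 2.31, 2.65, 2.74, 3.18])

def lsq_poly_sse(deg):
    """Least squares polynomial fit of degree `deg`; returns (coef, SSE)."""
    # Design matrix [1, x, x^2, ...], like MATLAB's [ones(size(x)), x, x.*x, ...]
    A = np.vander(x, deg + 1, increasing=True)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least squares, like A\y
    r = A @ coef - y
    return coef, float(r @ r)

sse = [lsq_poly_sse(d)[1] for d in (1, 2, 3)]
print([round(s, 4) for s in sse])   # fitting errors for degrees 1, 2, 3
```

Since the models are nested (each design matrix extends the previous one by a column), the errors are necessarily non-increasing in the degree.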
We now have the coefficients for the polynomials; let's plot:

    matlab>> xv = (1.0:0.01:2.1)';
             p1 = polyval(flipud(pcoef1), xv);
             p2 = polyval(flipud(pcoef2), xv);
             p3 = polyval(flipud(pcoef3), xv);
             plot(xv, p3, 'k-', 'linewidth', 3); hold on
             plot(x, y, 'ko', 'linewidth', 3); hold off

[Figure: the least squares polynomials p_1(x), p_2(x) and p_3(x).]

Finally we compute the error:

    matlab>> p1err = polyval(flipud(pcoef1), x) - y;
             p2err = polyval(flipud(pcoef2), x) - y;
             p3err = polyval(flipud(pcoef3), x) - y;
             disp([sum(p1err.*p1err), sum(p2err.*p2err), sum(p3err.*p3err)])

which gives us the fitting errors

    E_{p1} = 0.0877,    E_{p2} = 0.0699,    E_{p3} = 0.0447.

Example 2: Something More Exotic. Consider the same data:

    x | 1.0    1.1    1.3    1.5    1.9    2.1
    y | 1.84   1.90   2.31   2.65   2.74   3.18

But let's find the best fit of the form a + b\sqrt{x} to this data. Notice that this expression is linear in its parameters (a, b), so we can solve the corresponding least squares problem:

    matlab>> A = [ones(size(x)), sqrt(x)];
             pcoef = A\y;
             xv = (1.0:0.01:2.1)';
             fv = pcoef(1) + pcoef(2)*sqrt(xv);
             plot(xv, fv, 'k-', 'linewidth', 3); hold on
             plot(x, y, 'ko', 'linewidth', 3); hold off

[Figure: the best fit of the form a + b\sqrt{x}.]

We compute the fitting error:

    matlab>> ferr = pcoef(1) + pcoef(2)*sqrt(x) - y;
             disp(sum(ferr.*ferr))

which gives us E_{a+b\sqrt{x}} = 0.0749.

Getting Even More Exotic. As long as the model is linear in its parameters, we can solve the least squares problem. (Non-linear dependence will have to wait until Math 693a.) We can fit this model:

    M(a, b, c, d) = a + b x^{3/2} + \frac{c}{\sqrt{x}} + d e^{\sin x}.

Just define the matrix

    matlab>> A = [ones(size(x)), x.^(3/2), 1./sqrt(x), exp(sin(x))];

and compute the coefficients

    matlab>> coef = A\y;

etc.

[Figure: the optimal least squares fit of the form a + b x^{3/2} + c/\sqrt{x} + d e^{\sin x}.]
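The same recipe in Python, for the a + b sqrt(x) model and the four-term "exotic" basis (a NumPy sketch; `basis_fit` is our own helper name). The key point carries over verbatim: any model that is linear in its parameters reduces to one least squares solve on a column-stacked design matrix.

```python
import numpy as np

x = np.array([1.0, 1.1, 1.3, 1.5, 1.9, 2.1])
y = np.array([1.84, 1.90, 2.31, 2.65, 2.74, 3.18])

def basis_fit(columns):
    """Least squares fit y ~ sum_j c_j * columns[j]; returns (coef, SSE)."""
    A = np.column_stack(columns)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = A @ coef - y
    return coef, float(r @ r)

# Model a + b*sqrt(x)
_, sse_sqrt = basis_fit([np.ones_like(x), np.sqrt(x)])
print(round(sse_sqrt, 4))

# Model a + b*x^(3/2) + c/sqrt(x) + d*exp(sin(x))
_, sse_exotic = basis_fit(
    [np.ones_like(x), x ** 1.5, 1 / np.sqrt(x), np.exp(np.sin(x))]
)
print(sse_sqrt, sse_exotic)
```

Because the exotic basis contains the constant function, its fitting error can never exceed that of the constant-only fit.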
More Than One Variable: No Problem

Getting Multi-Dimensional. It seems quite unlikely that the model M(a,b,c,d) = a + b x^{3/2} + c/\sqrt{x} + d e^{\sin x} will ever be useful. However, we have forgotten about one important aspect of the problem: so far our models have depended on only one variable, x. How do we go about fitting multi-dimensional data?

Example: Going 2D. First we generate noisy data over a grid:

    matlab>> [X, Y] = meshgrid(x, y);
             ny = 1 + sqrt(X.*Y.^3) + 0.05*randn(size(X));
             surf(x, y, ny)

Let's try to fit a simple 3-parameter model, M(a,b,c) = a + bx + cy, to this data:

    matlab>> sz  = size(X);
             Bm  = reshape(X, prod(sz), 1);
             Cm  = reshape(Y, prod(sz), 1);
             Am  = ones(size(Bm));
             RHS = reshape(ny, prod(sz), 1);
             A = [Am, Bm, Cm];
             coef = A\RHS;
             fit = coef(1) + coef(2)*X + coef(3)*Y;
             fitError = ny - fit;
             surf(x, y, fitError)

[Figure: the optimal model fit and the fitting error for the least squares best fit in the model space M(a,b,c) = a + bx + cy. Here the total LSQ error is 4.2282.]

Next we try a simple 4-parameter bi-linear model, M(a,b,c,d) = a + bx + cy + dxy:

    matlab>> sz  = size(X);
             Bm  = reshape(X, prod(sz), 1);
             Cm  = reshape(Y, prod(sz), 1);
             Dm  = reshape(X.*Y, prod(sz), 1);
             Am  = ones(size(Bm));
             RHS = reshape(ny, prod(sz), 1);
             A = [Am, Bm, Cm, Dm];
             coef = A\RHS;
             fit = coef(1) + coef(2)*X + coef(3)*Y + coef(4)*X.*Y;
             fitError = ny - fit;
             surf(x, y, fitError)

[Figure: the fitting error for the least squares best fit in the model space M(a,b,c,d) = a + bx + cy + dxy. Still a pretty bad fit: the total LSQ error is still 4.2282.]
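The flatten-then-stack pattern (reshape each basis surface into a column, then solve one least squares problem) translates directly to NumPy's `ravel` and `column_stack`. As a self-contained check on hypothetical grid data, built so the surface lies exactly in the model space, least squares recovers the coefficients:

```python
import numpy as np

# Hypothetical grid data: the surface is exactly a + b*x + c*y + d*y^2,
# so the least squares fit should recover (a, b, c, d) = (0.5, 2.0, -1.0, 0.25).
x = np.linspace(1.0, 2.0, 21)
y = np.linspace(0.0, 3.0, 31)
X, Y = np.meshgrid(x, y)
Z = 0.5 + 2.0 * X - 1.0 * Y + 0.25 * Y ** 2

# Flatten each basis surface into a column (MATLAB: reshape(..., prod(sz), 1))
A = np.column_stack([np.ones(Z.size), X.ravel(), Y.ravel(), (Y ** 2).ravel()])
coef, *_ = np.linalg.lstsq(A, Z.ravel(), rcond=None)
print(np.round(coef, 8))
```

Since the data lies in the column space of A and the basis surfaces are linearly independent on the grid, the recovery is exact up to floating point.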
Since the main problem is in the y-direction, we try a 4-parameter model with a quadratic term in y, M(a,b,c,d) = a + bx + cy + dy^2:

    matlab>> sz  = size(X);
             Bm  = reshape(X, prod(sz), 1);
             Cm  = reshape(Y, prod(sz), 1);
             Dm  = reshape(Y.*Y, prod(sz), 1);
             Am  = ones(size(Bm));
             RHS = reshape(ny, prod(sz), 1);
             A = [Am, Bm, Cm, Dm];
             coef = A\RHS;
             fit = coef(1) + coef(2)*X + coef(3)*Y + coef(4)*Y.*Y;
             fitError = ny - fit;
             surf(x, y, fitError)

[Figure: the fitting error for the least squares best fit in the model space M(a,b,c,d) = a + bx + cy + dy^2. We see a significant drop in the error (one order of magnitude): the total LSQ error has dropped to 0.5788.]

We notice something interesting: the addition of the xy term to the model did not produce a drop in the LSQ error, but the y^2 term allowed us to capture a lot more of the action. The change in the LSQ error as a function of an added term is one way to decide what is a useful addition to the model. Why not always add both the xy and y^2 terms?

    Added term(s):    xy       y^2      both
    κ(A):             86.2     107.3    170.5
    κ(AᵀA):           7422     11515    29066

Table: condition numbers for the A matrices and the associated normal equations for the different models.

We fit a 5-parameter model with quadratic and cubic terms in y, M(a,b,c,d,e) = a + bx + cy + dy^2 + ey^3:

    matlab>> sz  = size(X);
             Bm  = reshape(X, prod(sz), 1);
             Cm  = reshape(Y, prod(sz), 1);
             Dm  = reshape(Y.*Y, prod(sz), 1);
             Em  = reshape(Y.*Y.*Y, prod(sz), 1);
             Am  = ones(size(Bm));
             RHS = reshape(ny, prod(sz), 1);
             A = [Am, Bm, Cm, Dm, Em];
             coef = A\RHS;
             fit = coef(1) + coef(2)*X + coef(3)*Y + coef(4)*Y.^2 + coef(5)*Y.^3;
             fitError = ny - fit;
             surf(x, y, fitError)

[Figure: the fitting error for the least squares best fit in the model space M(a,b,c,d,e) = a + bx + cy + dy^2 + ey^3. We now have a pretty good fit; the LSQ error is down to 0.09864.]
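The squaring pattern between κ(A) and κ(AᵀA) is a general fact: in the 2-norm, the condition number of the normal equations is exactly the square of that of A, which is why a QR-based solve such as MATLAB's backslash is preferred over explicitly forming AᵀA. A quick NumPy illustration (our own construction, one-dimensional for brevity):

```python
import numpy as np

# A tall, skinny design matrix for a cubic fit on [1.0, 2.1]
x = np.linspace(1.0, 2.1, 50)
A = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])

kA = np.linalg.cond(A)        # 2-norm condition number of A
kN = np.linalg.cond(A.T @ A)  # condition number of the normal equations
print(kA, kN, kN / kA ** 2)   # the ratio is ~1: kappa(A'A) = kappa(A)^2
```

In the 2-norm, the singular values of AᵀA are the squares of those of A, so the ratio printed above deviates from 1 only by floating point error.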
    Model                          LSQ error    κ(AᵀA)
    a + bx + cy                      4.2282         278
    a + bx + cy + dxy                4.2282        7422
    a + bx + cy + dy^2               0.5788       11515
    a + bx + cy + ey^3               0.2695      107204
    a + bx + cy + dy^2 + ey^3        0.09864    1873124

Table: summary of the LSQ errors and the conditioning of the normal equations for the various models. We notice that additional columns in the A matrix (additional model parameters) have a severe effect on the conditioning of the normal equations.

Moving to Even Higher Dimensions. At this point we can state the linear least squares fitting problem in any number of dimensions, and we can use exotic models if we want to. In 3D we need 10 parameters to fit a model with all linear and second order terms:

    M(a, b, \dots, j) = a + bx + cy + dz + ex^2 + fy^2 + gz^2 + hxy + ixz + jyz.

With n_x, n_y and n_z data points in the x, y and z directions, respectively, we end up with a matrix A of dimension (n_x n_y n_z) \times 10.

Ill-Conditioning of the Normal Equations. Needless to say, the normal equations can be quite ill-conditioned in this case. The ill-conditioning can be eased by searching for a set of functions orthogonal with respect to the inner product

    \langle f, g \rangle = \int_{x_a}^{x_b} \int_{y_a}^{y_b} \int_{z_a}^{z_b} f(x,y,z)\, g(x,y,z)\, dz\, dy\, dx.

That's sometimes possible, but we'll leave the details as an exercise for a dark and stormy night.
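The cure suggested here is the one this chapter started with: work in an orthogonal basis. A small NumPy illustration (our own, one-dimensional for brevity) compares the conditioning of a monomial design matrix with a Chebyshev one on [-1, 1]:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

t = np.linspace(-1.0, 1.0, 200)
deg = 9

A_mono = np.vander(t, deg + 1, increasing=True)  # columns 1, t, ..., t^9
A_cheb = C.chebvander(t, deg)                    # columns T_0(t), ..., T_9(t)

print(np.linalg.cond(A_mono))   # grows rapidly with the degree
print(np.linalg.cond(A_cheb))   # stays modest
```

The least squares solution is the same polynomial either way; only the parametrization, and hence the conditioning of A and of the normal equations, changes.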
