# NUMERICAL ANALYSIS Math 105

UCI


These 47-page class notes were uploaded by Adam Crona on Saturday, September 12, 2015. The notes belong to Math 105 at the University of California, Irvine, taught by Staff in Fall. Since upload, they have received 27 views. For similar materials, see /class/201870/math-105-university-of-california-irvine in Mathematics (M) at the University of California, Irvine.


Jim Lambers, Math 105A, Summer Session I, 2003-04. Lecture 16 Examples.

These examples correspond to Sections 4.8 and 4.9 in the text.

Example. We will use the Composite Trapezoidal Rule with $m = n = 2$ to evaluate the double integral
$$\int_0^{1/2} \int_0^{1/2} e^{y-x}\,dy\,dx.$$
The Composite Trapezoidal Rule with $n = 2$ subintervals is
$$\int_a^b f(x)\,dx \approx \frac{h}{2}\left[f(a) + 2f\!\left(\frac{a+b}{2}\right) + f(b)\right], \quad h = \frac{b-a}{2}.$$
If $a = 0$ and $b = 1/2$, then $h = (1/2 - 0)/2 = 1/4$, and this simplifies to
$$\int_0^{1/2} f(x)\,dx \approx \frac{1}{8}\left[f(0) + 2f(1/4) + f(1/2)\right].$$
We first use this rule to evaluate the single integral $\int_0^{1/2} g(x)\,dx$, where
$$g(x) = \int_0^{1/2} e^{y-x}\,dy.$$
This yields
$$\int_0^{1/2}\int_0^{1/2} e^{y-x}\,dy\,dx \approx \frac{1}{8}\left[g(0) + 2g(1/4) + g(1/2)\right] = \frac{1}{8}\left[\int_0^{1/2} e^{y-0}\,dy + 2\int_0^{1/2} e^{y-1/4}\,dy + \int_0^{1/2} e^{y-1/2}\,dy\right].$$
Now, to evaluate each of these integrals, we use the Composite Trapezoidal Rule in the $y$-direction with $m = 2$. If we let $k$ denote the step size in the $y$-direction, we have $k = (1/2 - 0)/2 = 1/4$, and therefore
$$\int_0^{1/2}\int_0^{1/2} e^{y-x}\,dy\,dx \approx \frac{1}{64}\left[\left(e^{0} + 2e^{1/4} + e^{1/2}\right) + 2\left(e^{-1/4} + 2e^{0} + e^{1/4}\right) + \left(e^{-1/2} + 2e^{-1/4} + e^{0}\right)\right] \approx 0.257914948898.$$
The exact value, to 15 digits, is $0.255251930412762$. The error is $2.66 \times 10^{-3}$, which is to be expected due to the use of few subintervals, and the fact that the Composite Trapezoidal Rule is only second-order accurate. □

Example. We will use the Composite Simpson's Rule with $n = 2$ and $m = 4$ to evaluate the double integral
$$\int_0^1 \int_x^{2x} (x^2 + y^3)\,dy\,dx.$$
In this case, the domain of integration described by the limits is not a rectangle, but a triangle defined by the lines $y = x$, $y = 2x$, and $x = 1$. The Composite Simpson's Rule with $n = 2$ subintervals is
$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right], \quad h = \frac{b-a}{2}.$$
If $a = 0$ and $b = 1$, then $h = (1 - 0)/2 = 1/2$, and this simplifies to
$$\int_0^1 f(x)\,dx \approx \frac{1}{6}\left[f(0) + 4f(1/2) + f(1)\right].$$
We first use this rule to evaluate the single integral $\int_0^1 g(x)\,dx$, where
$$g(x) = \int_x^{2x} (x^2 + y^3)\,dy.$$
This yields
$$\int_0^1 \int_x^{2x} (x^2 + y^3)\,dy\,dx \approx \frac{1}{6}\left[g(0) + 4g(1/2) + g(1)\right] = \frac{1}{6}\left[\int_0^0 y^3\,dy + 4\int_{1/2}^1 \left(\tfrac{1}{4} + y^3\right)dy + \int_1^2 \left(1 + y^3\right)dy\right].$$
The first integral is zero, since its limits of integration are equal. To evaluate the second and third integrals, we use the Composite Simpson's Rule in the $y$-direction with $m = 4$. If we let $k$ denote the step size in the $y$-direction, we have $k = (2x - x)/4 = x/4$, and therefore $k = 1/8$ for the second integral and $k = 1/4$ for the third. Because Simpson's Rule is exact for cubic integrands, these inner integrals are computed exactly as $0.359375$ and $4.75$, respectively, and so
$$\int_0^1 \int_x^{2x} (x^2 + y^3)\,dy\,dx \approx \frac{1}{6}\left[0 + 4(0.359375) + 4.75\right] = 1.03125.$$
The exact value is 1. The error, $3.125 \times 10^{-2}$, is rather large, which is to be expected due to the poor distribution of nodes through the triangular domain of integration. A better distribution is achieved if we use $n = 4$ and $m = 2$, which yields the much more accurate approximation $1.001953125$. □

Example. Use the Composite Simpson's Rule with $n = 4$ to evaluate the integral
$$\int_0^1 \frac{e^x}{x^{1/4}}\,dx.$$
This is an improper integral, due to the singularity at $x = 0$. To work around this problem, we can rewrite the integral as
$$\int_0^1 \frac{e^x}{x^{1/4}}\,dx = \int_0^1 \frac{e^x - p_n(x)}{x^{1/4}}\,dx + \int_0^1 \frac{p_n(x)}{x^{1/4}}\,dx,$$
where $p_n(x)$ is a polynomial approximation to $e^x$ (such as a Taylor polynomial centered at $x = 0$): the second integral can then be evaluated analytically, while the integrand of the first can be made smooth enough at $x = 0$ for the Composite Simpson's Rule to be applied.
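The two quadrature computations above can be reproduced with a short script. The helper names below are my own, not from the text; the rules themselves (a tensor-product Composite Trapezoidal Rule, and nested Composite Simpson's Rules with $x$-dependent $y$-limits) are the ones used in the examples.

```python
import math

def composite_trapezoid_2d(f, a, b, c, d, n, m):
    """Tensor-product Composite Trapezoidal Rule on [a, b] x [c, d],
    with n subintervals in x and m subintervals in y."""
    h, k = (b - a) / n, (d - c) / m
    total = 0.0
    for i in range(n + 1):
        wi = 1.0 if i in (0, n) else 2.0          # trapezoid weights 1, 2, ..., 2, 1
        for j in range(m + 1):
            wj = 1.0 if j in (0, m) else 2.0
            total += wi * wj * f(a + i * h, c + j * k)
    return (h / 2) * (k / 2) * total

def composite_simpson(f, a, b, n):
    """Composite Simpson's Rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 == 1 else 2.0) * f(a + i * h)
    return (h / 3) * s

def double_simpson(f, a, b, c, d, n, m):
    """Composite Simpson's Rule over a <= x <= b, c(x) <= y <= d(x)."""
    return composite_simpson(
        lambda x: composite_simpson(lambda y: f(x, y), c(x), d(x), m), a, b, n)

# Example 1: e^(y-x) over [0, 1/2] x [0, 1/2] with n = m = 2.
approx = composite_trapezoid_2d(lambda x, y: math.exp(y - x), 0, 0.5, 0, 0.5, 2, 2)
exact = (1 - math.exp(-0.5)) * (math.exp(0.5) - 1)   # integrand is separable

# Example 2: x^2 + y^3 over the triangle x <= y <= 2x, 0 <= x <= 1.
tri = lambda x, y: x**2 + y**3
a24 = double_simpson(tri, 0, 1, lambda x: x, lambda x: 2 * x, 2, 4)
a42 = double_simpson(tri, 0, 1, lambda x: x, lambda x: 2 * x, 4, 2)
```

Swapping the roles of $n$ and $m$ (from `a24` to `a42`) reproduces the improvement from $1.03125$ to $1.001953125$ noted in the second example.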
Jim Lambers, Math 105A, Summer Session I, 2003-04. Lecture 1 Notes.

These notes correspond to Section 1.1 in the text.

What is Numerical Analysis?

In scientific applications, there are many situations in which mathematics is employed to answer questions that we may have about certain phenomena that we have observed. In such situations, a mathematical model is developed in order to describe the phenomena, and from this model we can obtain a mathematical problem whose solution will, hopefully, help provide the answers that we seek. In almost all cases, these mathematical problems cannot be solved exactly. Therefore, it is necessary to design an algorithm that can yield an approximate solution, and then implement this algorithm in order to compute the approximate solution. Because the computed solution is only an approximation, it is also necessary to analyze the algorithm in order to determine whether the approximation is sufficiently accurate. If it can be determined that the computed solution is sufficiently accurate, then it can be interpreted in order to provide answers to our original questions about the phenomena that we are modeling.

The process of designing, implementing, and analyzing an algorithm for computing an approximate solution to a mathematical problem arising from a scientific application is known as scientific computing. In a broader context, the discipline of computing approximate solutions of mathematical problems, regardless of the applications from which they arise, is known as numerical analysis.

Review of Calculus

Among the mathematical problems that can be solved using techniques from numerical analysis are the basic problems of differential and integral calculus:

- computing the instantaneous rate of change of one quantity with respect to another, which is a derivative, and
- computing the total change in a function over some portion of its domain, which is a definite integral.

Calculus also plays an essential role in the development and analysis of techniques used in numerical analysis, including those techniques that are applied to problems not arising directly from calculus. Therefore, it is appropriate to review some basic concepts from calculus before we begin our study of numerical analysis.

Limits

The basic problems of differential and integral calculus described above can be solved by computing a sequence of approximations to the desired quantity, and then determining what value, if any, the sequence of approximations approaches. This value is called a limit of the sequence. As a sequence is a function, we begin by defining, precisely, the concept of the limit of a function.

Definition. We write
$$\lim_{x \to a} f(x) = L$$
if, for any open interval $I_1$ containing $L$, there is some open interval $I_2$ containing $a$ such that $f(x) \in I_1$ whenever $x \in I_2$ and $x \neq a$. We say that $L$ is the limit of $f(x)$ as $x$ approaches $a$.

We write
$$\lim_{x \to a^-} f(x) = L$$
if, for any open interval $I_1$ containing $L$, there is an open interval $I_2$ of the form $(c, a)$, where $c < a$, such that $f(x) \in I_1$ whenever $x \in I_2$. We say that $L$ is the limit of $f(x)$ as $x$ approaches $a$ from the left, or the left-hand limit of $f(x)$ as $x$ approaches $a$.

Similarly, we write
$$\lim_{x \to a^+} f(x) = L$$
if, for any open interval $I_1$ containing $L$, there is an open interval $I_2$ of the form $(a, c)$, where $c > a$, such that $f(x) \in I_1$ whenever $x \in I_2$. We say that $L$ is the limit of $f(x)$ as $x$ approaches $a$ from the right, or the right-hand limit of $f(x)$ as $x$ approaches $a$.
We can make the definition of a limit a little more concrete by imposing sizes on the intervals $I_1$ and $I_2$, as long as the interval $I_1$ can still be of arbitrary size. It can be shown that the following definition is equivalent to the previous one.

Definition. We write
$$\lim_{x \to a} f(x) = L$$
if, for any $\epsilon > 0$, there exists a number $\delta > 0$ such that $|f(x) - L| < \epsilon$ whenever $|x - a| < \delta$ and $x \neq a$.

Similar definitions can be given for the left-hand and right-hand limits. Note that, in either definition, the point $x = a$ is specifically excluded from consideration when requiring that $f(x)$ be close to $L$ whenever $x$ is close to $a$. This is because the concept of a limit is only intended to describe the behavior of $f(x)$ near $x = a$, as opposed to its behavior at $x = a$. Later in this lecture we discuss the case where the two distinct behaviors coincide.

Limits at Infinity

The concept of a limit defined above is useful for describing the behavior of a function $f(x)$ as $x$ approaches a finite value $a$. However, suppose that the function $f$ is a sequence, which is a function that maps $\mathbb{N}$, the set of natural numbers, to $\mathbb{R}$, the set of real numbers. We will denote such a sequence by $\{f_n\}_{n=0}^{\infty}$, or simply $\{f_n\}$. In numerical analysis, it is sometimes necessary to determine the value that the terms of a sequence $\{f_n\}$ approach as $n \to \infty$. Such a value, if it exists, is not a limit as defined previously. However, it is natural to use the notation of limits to describe this behavior of a function. We therefore define what it means for a sequence $\{f_n\}$ to have a limit as $n$ becomes infinite.

Definition (Limit at Infinity). Let $\{f_n\}$ be a sequence defined for all integers not less than some integer $n_0$. We say that the limit of $\{f_n\}$ as $n$ approaches $\infty$ is equal to $L$, and write
$$\lim_{n \to \infty} f_n = L,$$
if, for any open interval $I$ containing $L$, there exists a number $M$ such that $f_n \in I$ whenever $n > M$.
Continuity

In many cases, the limit of a function $f(x)$ as $x$ approaches $a$ can be obtained by simply computing $f(a)$. Intuitively, this indicates that $f$ has a graph that is one continuous curve, because any break or jump in the graph at $x = a$ is caused by $f$ approaching one value as $x$ approaches $a$, only to actually assume a different value at $a$. This leads to the following precise definition of what it means for a function to be continuous at a given point.

Definition (Continuity). We say that a function $f$ is continuous at $a$ if
$$\lim_{x \to a} f(x) = f(a).$$
We also say that $f(x)$ has the Direct Substitution Property at $x = a$.

We say that a function $f$ is continuous from the right at $a$ if
$$\lim_{x \to a^+} f(x) = f(a).$$
Similarly, we say that $f$ is continuous from the left at $a$ if
$$\lim_{x \to a^-} f(x) = f(a).$$

The preceding definition describes continuity at a single point. In describing where a function is continuous, the concept of continuity over an interval is useful, so we define this concept as well.

Definition (Continuity on an Interval). We say that a function $f$ is continuous on the interval $(a, b)$ if $f$ is continuous at every point in $(a, b)$. Similarly, we say that $f$ is continuous on

1. $[a, b)$ if $f$ is continuous on $(a, b)$ and continuous from the right at $a$;
2. $(a, b]$ if $f$ is continuous on $(a, b)$ and continuous from the left at $b$;
3. $[a, b]$ if $f$ is continuous on $(a, b)$, continuous from the right at $a$, and continuous from the left at $b$.

In numerical analysis, it is often necessary to construct a continuous function, such as a polynomial, based on data obtained by measurements and problem-dependent constraints. In this course, we will learn some of the most basic techniques for constructing such continuous functions, by a process called interpolation.

The Intermediate Value Theorem

Suppose that a function $f$ is continuous on some closed interval $[a, b]$. The graph of such a function is a continuous curve connecting the points $(a, f(a))$ and $(b, f(b))$. If one were to draw such a graph, one's pen would not leave the paper in the process, and therefore it would be impossible to avoid any $y$-value between $f(a)$ and $f(b)$. This leads to the following statement about such continuous functions.
Theorem (Intermediate Value Theorem). Let $f$ be continuous on $[a, b]$. Then, on $(a, b)$, $f$ assumes every value between $f(a)$ and $f(b)$; that is, for any value $y$ between $f(a)$ and $f(b)$, $f(c) = y$ for some $c$ in $(a, b)$.

The Intermediate Value Theorem has a very important application in the problem of finding solutions of a general equation of the form $f(x) = 0$, where $x$ is the solution we wish to compute and $f$ is a given continuous function. Often, methods for solving such an equation try to identify an interval $[a, b]$ where $f(a) > 0$ and $f(b) < 0$, or vice versa. In either case, the Intermediate Value Theorem states that $f$ must assume every value between $f(a)$ and $f(b)$, and since $0$ is one such value, it follows that the equation $f(x) = 0$ must have a solution somewhere in the interval $(a, b)$.

We can find an approximation to this solution using a procedure called bisection, which repeatedly applies the Intermediate Value Theorem to smaller and smaller intervals that contain the solution. We will study bisection, and other methods for solving the equation $f(x) = 0$, in this course.

Derivatives

The basic problem of differential calculus is computing the instantaneous rate of change of one quantity $y$ with respect to another quantity $x$. For example, $y$ may represent the position of an object and $x$ may represent time, in which case the instantaneous rate of change of $y$ with respect to $x$ is interpreted as the velocity of the object. When the two quantities $x$ and $y$ are related by an equation of the form $y = f(x)$, it is certainly convenient to describe the rate of change of $y$ with respect to $x$ in terms of the function $f$. Because the instantaneous rate of change is so commonplace, it is practical to assign a concise name and notation to it, which we do now.

Definition (Derivative). The derivative of a function $f(x)$ at $x = a$, denoted by $f'(a)$, is
$$f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h},$$
provided that the above limit exists. When this limit exists, we say that $f$ is differentiable at $a$.

Remark. Given a function $f(x)$ that is differentiable at $x = a$, the following numbers are all equal:
- the derivative of $f$ at $x = a$, $f'(a)$;
- the slope of the tangent line of $f$ at the point $(a, f(a))$; and
- the instantaneous rate of change of $y = f(x)$ with respect to $x$ at $x = a$.

This can be seen from the fact that all three numbers are defined in the same way. □

Many functions can be differentiated using differentiation rules, such as those learned in a calculus course. However, many functions cannot be differentiated using these rules. For example, we may need to compute the instantaneous rate of change of a quantity $y = f(x)$ with respect to another quantity $x$, where our only knowledge of the function $f$ that relates $x$ and $y$ is a set of pairs of $x$-values and $y$-values that may be obtained using measurements. In this course, we will learn how to approximate the derivative of such a function using this limited information. The most common methods involve constructing a continuous function, such as a polynomial, based on the given data, using interpolation. The polynomial can then be differentiated using differentiation rules. Since the polynomial is an approximation to the function $f(x)$, its derivative is an approximation to $f'(x)$.

Differentiability and Continuity

Consider a tangent line of a function $f$ at a point $(a, f(a))$. When we consider that this tangent line is the limit of secant lines that can cross the graph of $f$ at points on either side of $a$, it seems impossible that $f$ can fail to be continuous at $a$. The following result confirms this: a function that is differentiable at a given point, and therefore has a tangent line at that point, must also be continuous at that point.

Theorem. If $f$ is differentiable at $a$, then $f$ is continuous at $a$.

It is important to keep in mind, however, that the converse of the above statement, "if $f$ is continuous, then $f$ is differentiable," is not true. It is actually very easy to find examples of functions that are continuous at a point, but fail to be differentiable at that point. As an extreme example, it is known that there is a function that is continuous everywhere, but is differentiable nowhere.
Riemann Sums and the Definite Integral

There are many cases in which some quantity is defined to be the product of two other quantities. For example, a rectangle of width $w$ and uniform height $h$ has area $A$ given by the formula $A = wh$. Unfortunately, in many applications, we cannot necessarily assume that certain quantities, such as height, are constant, and therefore formulas such as $A = wh$ cannot be used directly. However, they can be used indirectly to solve more general problems, by employing the notation known as integral calculus.

Suppose we wish to compute the area of a shape that is not a rectangle. To simplify the discussion, we assume that the shape is bounded by the vertical lines $x = a$ and $x = b$, the $x$-axis, and the curve defined by some continuous function $y = f(x)$, where $f(x) \geq 0$ for $a \leq x \leq b$. Then, we can approximate this shape by $n$ rectangles that have width $\Delta x = (b - a)/n$ and height $f(x_i)$, where $x_i = a + i\Delta x$, for $i = 0, \ldots, n$. We obtain the approximation
$$A \approx A_n = \sum_{i=1}^{n} f(x_i)\,\Delta x.$$
Intuitively, we can conclude that as $n \to \infty$, the approximate area $A_n$ will converge to the exact area of the given region. This can be seen by observing that, as $n$ increases, the $n$ rectangles defined above comprise a more accurate approximation of the region.

More generally, suppose that for each $n = 1, 2, \ldots$ we define the quantity $R_n$ by choosing points $a = x_0 < x_1 < \cdots < x_n = b$ and computing the sum
$$R_n = \sum_{i=1}^{n} f(x_i^*)\,\Delta x_i, \quad \Delta x_i = x_i - x_{i-1}, \quad x_{i-1} \leq x_i^* \leq x_i.$$
The sum that defines $R_n$ is known as a Riemann sum. Note that the interval $[a, b]$ need not be divided into subintervals of equal width, and that $f(x)$ can be evaluated at arbitrary points belonging to each subinterval.

If $f(x) \geq 0$ on $[a, b]$, then $R_n$ converges to the area under the curve $y = f(x)$ as $n \to \infty$, provided that the widths of all of the subintervals $[x_{i-1}, x_i]$, for $i = 1, \ldots, n$, approach zero. This behavior is ensured if we require that
$$\lim_{n \to \infty} \delta_n = 0, \quad \text{where} \quad \delta_n = \max_{1 \leq i \leq n} \Delta x_i.$$
This condition is necessary because, if it does not hold, then, as $n \to \infty$, the region formed by the $n$ rectangles will not converge to the region whose area we wish to compute; in fact, it may not converge at all.
If $f$ assumes negative values on $[a, b]$, then, under the same conditions on the widths of the subintervals, $R_n$ converges to the net area between the graph of $f$ and the $x$-axis, where area below the $x$-axis is counted negatively.

We define the definite integral of $f(x)$ from $a$ to $b$ by
$$\int_a^b f(x)\,dx = \lim_{n \to \infty} R_n,$$
where the sequence of Riemann sums $\{R_n\}_{n=1}^{\infty}$ is defined so that $\delta_n \to 0$ as $n \to \infty$, as in the previous discussion. The function $f(x)$ is called the integrand, and the values $a$ and $b$ are the lower and upper limits of integration, respectively. The process of computing an integral is called integration.

In this course, we will study the problem of computing an approximation to the definite integral of a given function $f(x)$ over an interval $[a, b]$. We will learn a number of techniques for computing such an approximation, and all of these techniques involve the computation of an appropriate Riemann sum.
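The convergence described above can be observed numerically. The sketch below uses equal-width subintervals and $f(x) = x^2$ on $[0, 1]$, whose integral is $1/3$; both the function and the interval are my own illustrative choices, not from the text.

```python
def riemann_sum(f, a, b, n, point="left"):
    """Riemann sum with n equal-width subintervals; the sample point
    x_i* is the left endpoint or the midpoint of each subinterval."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        xl = a + i * dx
        xs = xl if point == "left" else xl + dx / 2
        total += f(xs) * dx
    return total

f = lambda x: x * x                     # integral of x^2 over [0, 1] is 1/3
for n in (10, 100, 1000):
    err = abs(riemann_sum(f, 0, 1, n) - 1 / 3)
    # the error shrinks as delta_n = 1/n approaches zero
```

Note that the choice of sample points matters for the rate of convergence (midpoints do noticeably better than left endpoints here), but any choice converges to the same definite integral as $\delta_n \to 0$.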
Extreme Values

In many applications, it is necessary to determine where a given function attains its minimum or maximum value. For example, a business wishes to maximize profit, so it can construct a function that relates its profit to variables such as payroll or maintenance costs. We now consider the basic problem of finding a maximum or minimum value of a general function $f(x)$ that depends on a single independent variable $x$. First, we must precisely define what it means for a function to have a maximum or minimum value.

Definition (Absolute extrema). A function $f$ has an absolute maximum (or global maximum) at $c$ if $f(c) \geq f(x)$ for all $x$ in the domain of $f$. The number $f(c)$ is called the maximum value of $f$ on its domain. Similarly, $f$ has an absolute minimum (or global minimum) at $c$ if $f(c) \leq f(x)$ for all $x$ in the domain of $f$. The number $f(c)$ is then called the minimum value of $f$ on its domain. The maximum and minimum values of $f$ are called the extreme values of $f$, and the absolute maximum and minimum are each called an extremum of $f$.

Before computing the maximum or minimum value of a function, it is natural to ask whether it is possible to determine, in advance, whether a function even has a maximum or minimum, so that effort is not wasted in trying to solve a problem that has no solution. The following result is very helpful in answering this question.

Theorem (Extreme Value Theorem). If $f$ is continuous on $[a, b]$, then $f$ has an absolute maximum and an absolute minimum on $[a, b]$.

Now that we can easily determine whether a function has a maximum or minimum on a closed interval $[a, b]$, we can develop a method for actually finding them. It turns out that it is easier to find points at which $f$ attains a maximum or minimum value in a local sense, rather than a global sense. In other words, we can best find the absolute maximum or minimum of $f$ by finding points at which $f$ achieves a maximum or minimum with respect to nearby points, and then determining which of these points is the absolute maximum or minimum. The following definition makes this notion precise.

Definition (Local extrema). A function $f$ has a local maximum at $c$ if $f(c) \geq f(x)$ for all $x$ in an open interval containing $c$. Similarly, $f$ has a local minimum at $c$ if $f(c) \leq f(x)$ for all $x$ in an open interval containing $c$. A local maximum or local minimum is also called a local extremum.

At each point at which $f$ has a local maximum, the function either has a horizontal tangent line, or no tangent line due to not being differentiable. It turns out that this is true in general, and a similar statement applies to local minima. To state the formal result, we first introduce the following definition, which will also be useful when describing a method for finding local extrema.

Definition (Critical Number). A number $c$ in the domain of a function $f$ is a critical number of $f$ if $f'(c) = 0$ or $f'(c)$ does not exist.

The following result describes the relationship between critical numbers and local extrema.

Theorem (Fermat's Theorem). If $f$ has a local minimum or local maximum at $c$, then $c$ is a critical number of $f$; that is, either $f'(c) = 0$ or $f'(c)$ does not exist.

This theorem suggests that the maximum or minimum value of a function $f(x)$ can be found by solving the equation $f'(x) = 0$. As mentioned previously, we will be learning techniques for solving such equations in this course.
These techniques play an essential role in the solution of problems in which one must compute the maximum or minimum value of a function, subject to constraints on its variables. Such problems are called optimization problems. Although we will not discuss optimization problems in this course, we will learn about some of the building blocks of methods for solving these very important problems.

The Mean Value Theorem

While the derivative describes the behavior of a function at a point, we often need to understand how the derivative influences a function's behavior on an interval. This understanding is essential in numerical analysis, because it is often necessary to approximate a function $f(x)$ by a function $g(x)$ using knowledge of $f(x)$ and its derivatives at various points. It is therefore natural to ask how well $g(x)$ approximates $f(x)$ away from these points.

The following result, a consequence of Fermat's Theorem, gives limited insight into the relationship between the behavior of a function on an interval and the value of its derivative at a point.

Theorem (Rolle's Theorem). If $f$ is continuous on a closed interval $[a, b]$ and is differentiable on the open interval $(a, b)$, and if $f(a) = f(b)$, then $f'(c) = 0$ for some number $c$ in $(a, b)$.

By applying Rolle's Theorem to a function $f$, then to its derivative $f'$, its second derivative $f''$, and so on, we obtain the following more general result, which will be useful in analyzing the accuracy of methods for approximating functions by polynomials.

Theorem (Generalized Rolle's Theorem). Let $x_0, x_1, x_2, \ldots, x_n$ be distinct points in an interval $[a, b]$. If $f$ is $n$ times differentiable on $(a, b)$, and if $f(x_i) = 0$ for $i = 0, 1, 2, \ldots, n$, then $f^{(n)}(c) = 0$ for some number $c$ in $(a, b)$.

A more fundamental consequence of Rolle's Theorem is the Mean Value Theorem itself, which we now state.

Theorem (Mean Value Theorem). If $f$ is continuous on a closed interval $[a, b]$ and is differentiable on the open interval $(a, b)$, then
$$\frac{f(b) - f(a)}{b - a} = f'(c)$$
for some number $c$ in $(a, b)$.
Remark. The expression
$$\frac{f(b) - f(a)}{b - a}$$
is the slope of the secant line passing through the points $(a, f(a))$ and $(b, f(b))$. The Mean Value Theorem therefore states that, under the given assumptions, the slope of this secant line is equal to the slope of the tangent line of $f$ at the point $(c, f(c))$, where $c \in (a, b)$. □

The Mean Value Theorem has the following practical interpretation: the average rate of change of $y = f(x)$ with respect to $x$ on an interval $[a, b]$ is equal to the instantaneous rate of change of $y$ with respect to $x$ at some point in $(a, b)$.

The Mean Value Theorem for Integrals

Suppose that $f(x)$ is a continuous function on an interval $[a, b]$. Then, by the Fundamental Theorem of Calculus, $f(x)$ has an antiderivative $F(x)$ defined on $[a, b]$ such that $F'(x) = f(x)$. If we apply the Mean Value Theorem to $F(x)$, we obtain the following relationship between the integral of $f$ over $[a, b]$ and the value of $f$ at a point in $(a, b)$.

Theorem (Mean Value Theorem for Integrals). If $f$ is continuous on $[a, b]$, then
$$\int_a^b f(x)\,dx = f(c)(b - a)$$
for some $c$ in $(a, b)$.

In other words, $f$ assumes its average value over $[a, b]$, defined by
$$f_{\mathrm{ave}} = \frac{1}{b - a}\int_a^b f(x)\,dx,$$
at some point in $(a, b)$, just as the Mean Value Theorem states that the derivative of a function assumes its average value over an interval at some point in the interval.

The Mean Value Theorem for Integrals is also a special case of the following more general result.

Theorem (Weighted Mean Value Theorem for Integrals). If $f$ is continuous on $[a, b]$, and $g$ is a function that is integrable on $[a, b]$ and does not change sign on $(a, b)$, then
$$\int_a^b f(x)g(x)\,dx = f(c)\int_a^b g(x)\,dx$$
for some $c$ in $(a, b)$.

In the case where $g(x)$ is a function that is easy to antidifferentiate and $f(x)$ is not, this theorem can be used to obtain an estimate of the integral of $f(x)g(x)$ over an interval.
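The Mean Value Theorem for Integrals can be checked numerically. The example below, my own (not from the text), uses $f(x) = \cos x$ on $[0, \pi/2]$: the integral is $\sin(\pi/2) - \sin(0) = 1$, so $f_{\mathrm{ave}} = 2/\pi$, and since $\cos$ decreases from $1$ to $0$ on this interval, the point $c$ with $f(c) = f_{\mathrm{ave}}$ is $c = \arccos(2/\pi)$.

```python
import math

# Mean Value Theorem for Integrals, illustrated for f(x) = cos(x) on
# [0, pi/2]: the exact integral is 1, so the average value is 2/pi,
# and c = arccos(2/pi) is a point in (0, pi/2) with f(c) = f_ave.
a, b = 0.0, math.pi / 2
f_ave = 1.0 / (b - a)          # exact integral of cos over [a, b] is 1
c = math.acos(f_ave)           # the point guaranteed by the theorem
```

The check succeeds because $2/\pi \approx 0.6366$ lies strictly between $\cos(b) = 0$ and $\cos(a) = 1$, so the Intermediate Value Theorem (and here, invertibility of $\cos$ on the interval) produces the point $c$.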
Taylor's Theorem

In many cases, it is useful to approximate a given function $f(x)$ by a polynomial, because one can work much more easily with polynomials than with other types of functions. As such, it is necessary to have some insight into the accuracy of such an approximation. The following theorem, which is a consequence of the Weighted Mean Value Theorem for Integrals, provides this insight.

Theorem (Taylor's Theorem). Let $f$ be $n$ times continuously differentiable on an interval $[a, b]$, and suppose that $f^{(n+1)}$ exists on $[a, b]$. Let $x_0 \in [a, b]$. Then, for any point $x \in [a, b]$,
$$f(x) = P_n(x) + R_n(x),$$
where
$$P_n(x) = \sum_{j=0}^{n} \frac{f^{(j)}(x_0)}{j!}(x - x_0)^j = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n$$
and
$$R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1},$$
where $\xi$ is between $x_0$ and $x$.

Jim Lambers, Math 105A, Summer Session I, 2003-04. Lecture 11 Examples.

These examples correspond to Section 4.1 in the text.

Example. Consider the complicated function $f(x)$ given in Section 4.1 of the text, a quotient built from compositions of $\cos$, $\sin$, and powers of $x^2 + 1$. Our goal is to compute $f'(0.25)$. Differentiating using the Quotient Rule and the Chain Rule, we obtain an even more complicated expression for $f'(x)$; evaluating this monstrous function at $x = 0.25$ yields
$$f'(0.25) = -9.066698770.$$
An alternative approach is to use a centered difference approximation,
$$f'(x) \approx \frac{f(x + h) - f(x - h)}{2h}.$$
Using this formula with $x = 0.25$ and $h = 0.005$, we obtain the approximation
$$f'(0.25) \approx \frac{f(0.255) - f(0.245)}{0.01} = -9.067464295,$$
which has absolute error $7.7 \times 10^{-4}$. While this complicated function must be evaluated twice to obtain this approximation, that is still much less work than using differentiation rules to compute $f'(x)$ and then evaluating $f'(x)$, which is much more complicated than $f(x)$. □

Example. We will construct a formula for approximating $f'(x)$ at a given point $x_0$ by interpolating $f(x)$ at the points $x_0$, $x_0 + h$, and $x_0 + 2h$ using a second-degree polynomial $p_2(x)$, and then approximating $f'(x_0)$ by $p_2'(x_0)$. Since $p_2(x)$ should be a good approximation of $f(x)$ near $x_0$, especially when $h$ is small, its derivative should be a good approximation to $f'(x)$ near this point. Using Lagrange interpolation, we obtain
$$p_2(x) = f(x_0)L_0(x) + f(x_0 + h)L_1(x) + f(x_0 + 2h)L_2(x).$$
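The centered difference formula above is easy to experiment with. Since the text's complicated function is not reproduced here, the sketch below substitutes $f(x) = \sin x$ (my own stand-in, chosen because its derivative $\cos x$ is known exactly, so the error can be measured directly and the second-order behavior observed).

```python
import math

def centered_difference(f, x, h):
    """Centered difference approximation to f'(x); second-order accurate,
    with error proportional to h^2 for smooth f."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Stand-in for the text's complicated function: f(x) = sin(x).
x = 0.25
for h in (0.1, 0.05, 0.025):
    err = abs(centered_difference(math.sin, x, h) - math.cos(x))
    # halving h cuts the error by roughly a factor of four (O(h^2))
```

Note that only two function evaluations are needed per approximation, regardless of how expensive the symbolic derivative would be, which is the point made in the example above.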
Jim Lambers, Math 105A, Summer Session I, 2003-04. Lecture 4 Notes.

These notes correspond to Sections 2.1 and 2.2 in the text.

Nonlinear Equations

To this point, we have only considered the solution of linear equations. We now explore the much more difficult problem of solving nonlinear equations of the form
$$f(\mathbf{x}) = \mathbf{0},$$
where $f : \mathbb{R}^n \to \mathbb{R}^m$ can be any known function. A solution $\mathbf{x}$ of such a nonlinear equation is called a root of the equation, as well as a zero of the function $f$.

Existence and Uniqueness

For simplicity, we assume that the function $f$ is continuous on the domain under consideration. Then each equation $f_i(\mathbf{x}) = 0$, $i = 1, \ldots, m$, defines a hypersurface in $\mathbb{R}^n$. The solution of $f(\mathbf{x}) = \mathbf{0}$ is the intersection of these hypersurfaces, if the intersection is not empty. It is not hard to see that there can be a unique solution, infinitely many solutions, or no solution at all.

For a general equation $f(\mathbf{x}) = \mathbf{0}$, it is not possible to characterize the conditions under which a solution exists or is unique. However, in some situations, it is possible to determine existence analytically. For example, in one dimension, the Intermediate Value Theorem implies that if a continuous function $f(x)$ satisfies $f(a) \leq 0$ and $f(b) \geq 0$, where $a < b$, then $f(x) = 0$ for some $x \in (a, b)$. Similarly, it can be concluded that $f(x) = 0$ for some $x \in (a, b)$ if $(x - z)f(x) \geq 0$ for $x = a$ and $x = b$, where $z \in (a, b)$. This condition can be generalized to higher dimensions: if $S \subset \mathbb{R}^n$ is an open, bounded set, and $(\mathbf{x} - \mathbf{z})^T f(\mathbf{x}) \geq 0$ for all $\mathbf{x}$ on the boundary of $S$ and for some $\mathbf{z} \in S$, then $f(\mathbf{x}) = \mathbf{0}$ for some $\mathbf{x} \in S$. Unfortunately, checking this condition can be difficult in practice.

One useful result from calculus that can be used to establish existence and, in some sense, uniqueness of a solution is the Inverse Function Theorem, which states that if the Jacobian of $f$ is nonsingular at a point $\mathbf{x}_0$, then $f$ is invertible near $\mathbf{x}_0$, and the equation $f(\mathbf{x}) = \mathbf{y}$ has a unique solution for all $\mathbf{y}$ near $f(\mathbf{x}_0)$.

If the Jacobian of $f$ at a point $\mathbf{x}_0$ is singular, then $f$ is said to be degenerate at $\mathbf{x}_0$. Suppose that $x_0$ is a solution of $f(x) = 0$. Then, in one dimension, degeneracy means $f'(x_0) = 0$, and we say that $x_0$ is a double root of $f(x)$. Similarly, if $f^{(j)}(x_0) = 0$ for $j = 0, \ldots, m - 1$, then $x_0$ is a root of multiplicity $m$. We will see that degeneracy can cause difficulties when trying to solve nonlinear equations.

Sensitivity
Recall that the absolute condition number of a function $f(x)$ is approximated by $|f'(x)|$. In solving a nonlinear equation in one dimension, we are trying to solve an inverse problem, where the forward problem is the evaluation of $f$ at the solution. It follows that the condition number for solving $f(x) = 0$ is approximately $1/|f'(x^*)|$, where $x^*$ is the solution. This discussion can be generalized to higher dimensions, where the condition number is measured using the norm of the Jacobian.

The Bisection Method

Suppose that $f(x)$ is a continuous function that changes sign on the interval $[a, b]$. Then, by the Intermediate Value Theorem, $f(x) = 0$ for some $x \in (a, b)$. How can we find the solution, knowing that it lies in this interval? The method of bisection attempts to reduce the size of the interval in which a solution is known to exist. Suppose that we evaluate $f(m)$, where $m = (a + b)/2$. If $f(m) = 0$, then we are done. Otherwise, $f$ must change sign on the interval $[a, m]$ or $[m, b]$, since $f(a)$ and $f(b)$ have different signs. Therefore, we can cut the size of our search space in half, and continue this process until the interval of interest is sufficiently small, in which case we must be close to a solution. The following algorithm implements this approach.

Algorithm (Bisection). Let $f$ be a continuous function on the interval $[a, b]$ that changes sign on $(a, b)$. The following algorithm computes an approximation $p_j$ to a number $p$ in $(a, b)$ such that $f(p) = 0$.

    for j = 1, 2, 3, ...
        p_j = (a + b) / 2
        if f(p_j) = 0 or b - a is sufficiently small then
            return p_j
        end if
        if f(a) f(p_j) < 0 then
            b = p_j
        else
            a = p_j
        end if
    end for

At the beginning, it is known that $(a, b)$ contains a solution. During each iteration, this algorithm updates the interval $(a, b)$ by checking whether $f$ changes sign in the first half $(a, p_j)$ or in the second half $(p_j, b)$. Once the correct half is found, the interval $(a, b)$ is set equal to that half. Therefore, at the beginning of each iteration, it is known that the current interval $(a, b)$ contains a solution.
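The algorithm above translates directly into code. This sketch uses the cubic $x^3 - 9x^2 + 26x - 24$ from the Newton's Method example later in these notes, which changes sign on $[1.5, 2.5]$ and has the root $x = 2$ there; the function name and tolerance handling are my own choices.

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: f must be continuous and change sign on [a, b].
    After n iterations, |p - root| <= (b - a) / 2^n."""
    if f(a) * f(b) > 0:
        raise ValueError("f must change sign on [a, b]")
    p = a
    for _ in range(max_iter):
        p = a + (b - a) / 2          # midpoint of the current interval
        if f(p) == 0 or (b - a) / 2 < tol:
            return p
        if f(a) * f(p) < 0:          # sign change in the first half
            b = p
        else:                        # sign change in the second half
            a = p
    return p

# f changes sign on [1.5, 2.5]: f(1.5) = -1.875 and f(2.5) = 0.375.
root = bisect(lambda x: x**3 - 9 * x**2 + 26 * x - 24, 1.5, 2.5)
```

The product test `f(a) * f(p) < 0` is exactly the sign-change test discussed in the text: it avoids caring which endpoint gives the positive value and which gives the negative one.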
The test $f(a)f(p_j) < 0$ is used to determine whether $f$ changes sign in the interval $(a, p_j)$ or $(p_j, b)$. This test is more efficient than checking whether $f(a)$ is positive and $f(p_j)$ is negative, or vice versa, since we do not care which value is positive and which is negative; we only care whether they have different signs, and if they do, then their product must be negative.

In comparison to other methods, including some that we will discuss, bisection tends to converge rather slowly, but it is also guaranteed to converge. These qualities can be seen in the following result concerning the accuracy of bisection.

Theorem. Let $f$ be continuous on $[a, b]$, and assume that $f(a)f(b) < 0$. For each positive integer $n$, let $p_n$ be the $n$th iterate that is produced by the bisection algorithm. Then the sequence $\{p_n\}_{n=1}^{\infty}$ converges to a number $p$ in $(a, b)$ such that $f(p) = 0$, and each iterate $p_n$ satisfies
$$|p_n - p| \leq \frac{b - a}{2^n}.$$

It should be noted that because the $n$th iterate can lie anywhere within the interval $(a, b)$ that is used during the $n$th iteration, it is possible that the error bound given by this theorem may be quite conservative.

Fixed-point Iteration

A nonlinear equation of the form $f(x) = 0$ can be rewritten to obtain an equation of the form
$$g(x) = x,$$
in which case the solution is a fixed point of the function $g$. This formulation of the original problem $f(x) = 0$ leads to a simple solution method known as fixed-point iteration. Before we describe this method, however, we must first discuss the questions of existence and uniqueness of a solution to the modified problem $g(x) = x$. The following result answers these questions.

Theorem. Let $g$ be a continuous function on the interval $[a, b]$. If $g(x) \in [a, b]$ for each $x \in [a, b]$, then $g$ has a fixed point in $[a, b]$. Furthermore, if $g$ is differentiable on $(a, b)$, and there exists a constant $k < 1$ such that
$$|g'(x)| \leq k, \quad x \in (a, b),$$
then $g$ has exactly one fixed point in $[a, b]$.

Jim Lambers, Math 105A, Summer Session I, 2003-04. Lecture 7 Examples.

These examples correspond to Sections 2.5 and 2.6 in the text.

Example. We wish to find the unique fixed point of the function $g(x) = \cos x$ on the interval $[0, 1]$. If we use fixed-point iteration with $x_0 = 0.5$, then we obtain the following iterates from the formula $x_{k+1} = g(x_k) = \cos x_k$ (all iterates are rounded to five decimal places):
0.69478, x5 = 0.76820, x6 = 0.71917.

These iterates show little sign of converging, as they are oscillating around the fixed point. If instead we use Fixed-point Iteration with acceleration by Aitken's Δ² method, we obtain a new sequence of iterates {x̂_k}, where

    x̂_k = x_k - (Δx_k)² / (Δ²x_k) = x_k - (x_{k+1} - x_k)² / (x_{k+2} - 2x_{k+1} + x_k),

for k = 0, 1, 2, .... The first few iterates of this sequence are

    x̂0 = 0.73139, x̂1 = 0.73609, x̂2 = 0.73765, x̂3 = 0.73847, x̂4 = 0.73880.

Clearly, these iterates are converging much more rapidly than Fixed-point Iteration, as they are not oscillating around the fixed point, but convergence is still linear.

Finally, we try Steffensen's Method. We begin with the first three iterates of Fixed-point Iteration:

    x0^(0) = x0 = 0.5,   x1^(0) = x1 = 0.87758,   x2^(0) = x2 = 0.63901.

Then we use the formula from Aitken's Δ² method to compute

    x0^(1) = x0^(0) - (x1^(0) - x0^(0))² / (x2^(0) - 2x1^(0) + x0^(0)) = 0.73139.

We use this value to restart Fixed-point Iteration, and compute two iterates, which are

    x1^(1) = cos(x0^(1)) = 0.74425,   x2^(1) = cos(x1^(1)) = 0.73560.

Repeating this process, we apply the formula from Aitken's Δ² method to the iterates x0^(1), x1^(1) and x2^(1) to obtain

    x0^(2) = x0^(1) - (x1^(1) - x0^(1))² / (x2^(1) - 2x1^(1) + x0^(1)) = 0.739076.

Restarting Fixed-point Iteration with x0^(2) as the initial iterate, we obtain

    x1^(2) = cos(x0^(2)) = 0.739091,   x2^(2) = cos(x1^(2)) = 0.739081.

The most recent iterate, x2^(2), is correct to five decimal places.

Using all three methods to compute the fixed point to ten decimal digits of accuracy, we find that Fixed-point Iteration requires 57 iterations, so x58 must be computed; Aitken's Δ² method requires us to compute 25 iterates of the modified sequence {x̂_k}, which in turn requires 27 iterates of the sequence {x_k}, where the first iterate x0 is given; and Steffensen's Method requires only 11 iterates to be computed. □

Example: We apply Newton's Method to find the roots of the polynomial

    f(x) = x³ - 9x² + 26x - 24.

Choosing the initial iterate x0 = 1, we apply Horner's Method to compute f(x0) and f′(x0), so that we can perform the step of Newton's Method to compute x1. We have

    b3 = 1,
    b2 = -9 + 1·1 = -8,
    b1 = 26 + 1·(-8) = 18,
    b0 = -24 + 1·18 = -6.

We conclude that f(x0) = -6 and f(x) = (x - x0) q(x) + b0,
where the coefficients of q(x) are the values b3, b2 and b1 computed by Horner's Method, so we have q(x) = x² - 8x + 18. To compute f′(x0), we use the fact that f′(x0) = q(x0), and apply Horner's Method to q(x) to obtain

    c2 = 1,
    c1 = -8 + 1·1 = -7,
    c0 = 18 + 1·(-7) = 11.

We conclude that f′(x0) = f′(1) = q(1) = 11. Therefore, the next iterate produced by Newton's Method is

    x1 = x0 - f(x0)/f′(x0) = 1 - (-6)/11 = 1 + 6/11 ≈ 1.5455.

We continue this process with x1, using Horner's Method to compute f(x1) and f′(x1), so that we can use Newton's Method to produce x2, and so on, until the iterates produced by Newton's Method converge to the root x = 2.

Now we perform deflation, so that we can compute the remaining roots of f(x). Applying Horner's Method to f(x) to compute f(2) = 0, we obtain

    b3 = 1,
    b2 = -9 + 2·1 = -7,
    b1 = 26 + 2·(-7) = 12,
    b0 = -24 + 2·12 = 0.

It follows that f(x) has the factorization

    f(x) = (x - 2)(x² - 7x + 12).

The quadratic polynomial x² - 7x + 12 can easily be factored into (x - 3)(x - 4), so we conclude that the roots of f(x) are 2, 3 and 4. □

Example: We apply Muller's Method to the polynomial

    f(x) = 16x⁴ - 40x³ + 5x² + 20x + 6.

[Figure 1: Muller's Method applied to the polynomial f(x) = 16x⁴ - 40x³ + 5x² + 20x + 6. The solid curve is the graph of f(x), and the dotted curves denote the quadratic polynomials generated by the method to approximate f(x) near the chosen starting iterates x0, x1 and x2. The left circle on the x-axis denotes the root x ≈ 1.2417, and the right circle denotes the root x ≈ 1.9704.]

Jim Lambers
Math 105A
Summer Session I, 2003-04
Lecture 9 Notes

These notes correspond to Sections 3.2 and 3.3 in the text.

Divided Differences

In the previous lecture, we learned how to compute the value of an interpolating polynomial at a given point using Neville's Method. However, the theorem that serves as the basis for Neville's Method can easily be used to compute the interpolating polynomial itself. The basic idea is to represent interpolating polynomials using the Newton form, discussed in the Lecture 7 notes, with the interpolation points used as the centers. Recall that, if the interpolation points x0, ..., xn are distinct, then the process of finding a
polynomial that passes through the points (x_i, y_i), i = 0, ..., n, is equivalent to solving a system of linear equations Ax = b that has a unique solution. The matrix A is determined by the choice of basis for the space of polynomials of degree n or less. Each entry a_{ij} of A is the value of the jth polynomial in the basis at the point x_i.

In Newton interpolation, the matrix A is lower triangular, and the basis functions are defined to be the set {N_j(x)}, j = 0, ..., n, where

    N_0(x) = 1,   N_j(x) = (x - x_0)(x - x_1) ··· (x - x_{j-1}),   j = 1, ..., n.

The advantage of Newton interpolation is that the interpolating polynomial is easily updated as interpolation points are added, since the basis functions N_j(x), j = 0, ..., n, do not change from the addition of the new points. The coefficients c_j of the Newton interpolating polynomial

    p_n(x) = Σ_{j=0}^n c_j N_j(x)

are given by

    c_j = f[x_0, x_1, ..., x_j],

where f[x_0, x_1, ..., x_j] denotes the divided difference of x_0, x_1, ..., x_j. The divided difference is defined as follows:

    f[x_i] = y_i,
    f[x_i, x_{i+1}] = (y_{i+1} - y_i) / (x_{i+1} - x_i),
    f[x_i, x_{i+1}, ..., x_{i+k}] = (f[x_{i+1}, ..., x_{i+k}] - f[x_i, ..., x_{i+k-1}]) / (x_{i+k} - x_i).

This definition implies that, for each nonnegative integer j, the divided difference f[x_0, x_1, ..., x_j] only depends on the interpolation points x_0, x_1, ..., x_j and the value of f(x) at these points. It follows that the addition of new interpolation points does not change the coefficients c_0, ..., c_n. Specifically, we have

    p_{n+1}(x) = p_n(x) + f[x_0, x_1, ..., x_{n+1}] N_{n+1}(x) = p_n(x) + [(y_{n+1} - p_n(x_{n+1})) / N_{n+1}(x_{n+1})] N_{n+1}(x).

This ease of updating makes Newton interpolation the most commonly used method of obtaining the interpolating polynomial. The following result shows how the Newton interpolating polynomial bears a resemblance to a Taylor polynomial.

Theorem: Let f be n times continuously differentiable on [a, b], and let x_0, x_1, ..., x_n be distinct points in [a, b]. Then there exists a number ξ in (a, b) such that

    f[x_0, x_1, ..., x_n] = f^(n)(ξ) / n!.

Computing the Newton Interpolating Polynomial

We now describe in detail how to compute the coefficients c_j = f[x_0, x_1, ..., x_j] of the Newton interpolating polynomial p_n(x), and how to evaluate p_n(x) efficiently using these coefficients. The computation of the coefficients proceeds by filling in the entries of a divided-difference table. This is a triangular table consisting
of n + 1 columns, where n is the degree of the interpolating polynomial to be computed. For j = 0, 1, ..., n, the jth column contains n + 1 - j entries, which are the divided differences f[x_k, x_{k+1}, ..., x_{k+j}] for k = 0, 1, ..., n - j. We construct this table by filling in the n + 1 entries in column 0, which are the trivial divided differences f[x_j] = y_j for j = 0, 1, ..., n. Then we use the recursive definition of the divided differences to fill in the entries of subsequent columns. Once the construction of the table is complete, we can obtain the coefficients of the Newton interpolating polynomial from the first entry in each column, which is f[x_0, x_1, ..., x_j] for j = 0, 1, ..., n.

In a practical implementation of this algorithm, we do not need to store the entire table, because we only need the first entry in each column. Because each column has one fewer entry than the previous column, we can overwrite all of the other entries that we do not need. The following algorithm implements this idea.

Algorithm (Divided-Difference Table): Given n + 1 distinct interpolation points x_0, x_1, ..., x_n, and the values of a function f(x) at these points, the following algorithm computes the coefficients c_j = f[x_0, x_1, ..., x_j] of the Newton interpolating polynomial.

    for i = 0, 1, ..., n do
        d_i = y_i
    end
    for i = 1, 2, ..., n do
        for j = n, n - 1, ..., i do
            d_j = (d_j - d_{j-1}) / (x_j - x_{j-i})
        end
    end
    c_j = d_j,   j = 0, 1, ..., n

Once the coefficients have been computed, we can use nested multiplication, as discussed in the Lecture 7 notes, to evaluate the resulting interpolating polynomial, which is represented using the Newton divided-difference formula

    p_n(x) = Σ_{j=0}^n c_j N_j(x)
           = Σ_{j=0}^n f[x_0, x_1, ..., x_j] (x - x_0)(x - x_1) ··· (x - x_{j-1})
           = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + ···
             + f[x_0, x_1, ..., x_n](x - x_0)(x - x_1) ··· (x - x_{n-1}).

Algorithm (Nested Multiplication): Given n + 1 distinct interpolation points x_0, x_1, ..., x_n, and the coefficients c_j = f[x_0, x_1, ..., x_j] of the Newton interpolating polynomial p_n(x), the following algorithm computes y = p_n(x) for a given real number x.

    y = c_n
    for i = n - 1, n - 2, ..., 0 do
        y = c_i + (x - x_i) y
    end

It can be seen that this algorithm closely resembles Horner's Method, which, as discussed previously, is a special case of nested multiplication that works with the power form of a polynomial, whereas
nested multiplication works with the more general Newton form.

Interpolation Using Equally Spaced Points

Suppose that the interpolation points x_0, x_1, ..., x_n are equally spaced; that is, x_i = x_0 + ih for some positive number h. In this case, the Newton interpolating polynomial can be simplified, since the denominators of all of the divided differences can be expressed in terms of the spacing h. If we recall the forward difference operator Δ, defined by

    Δx_k = x_{k+1} - x_k,

where {x_k} is any sequence, then the divided differences f[x_0, x_1, ..., x_k] are given by

    f[x_0, x_1, ..., x_k] = (1 / (k! h^k)) Δ^k f(x_0).

The interpolating polynomial can then be described by the Newton forward-difference formula

    p_n(x) = f[x_0] + Σ_{k=1}^n (s choose k) Δ^k f(x_0),

where the new variable s is related to x by

    s = (x - x_0) / h,

and the extended binomial coefficient (s choose k) is defined by

    (s choose k) = s(s - 1)(s - 2) ··· (s - k + 1) / k!,

where k is a nonnegative integer.

If we define the backward difference operator ∇ by

    ∇x_k = x_k - x_{k-1}

for any sequence {x_k}, then we obtain the Newton backward-difference formula

    p_n(x) = f[x_n] + Σ_{k=1}^n (-1)^k (-s choose k) ∇^k f(x_n),

where s = (x - x_n)/h, and the preceding definition of the extended binomial coefficient applies.

Osculatory Interpolation

Suppose that the interpolation points are perturbed so that two neighboring points x_i and x_{i+1}, 0 <= i < n, approach each other. What happens to the interpolating polynomial? In the limit, as x_{i+1} → x_i, the interpolating polynomial p_n(x) not only satisfies p_n(x_i) = y_i, but also the condition

    p_n′(x_i) = y_i′ = lim_{x_{i+1} → x_i} (y_{i+1} - y_i) / (x_{i+1} - x_i).

Jim Lambers
Math 105A
Summer Session I, 2003-04
Lecture 6 Notes

These notes correspond to Sections 2.4 and 2.5 in the text.

Error Analysis for Iterative Methods

In general, nonlinear equations cannot be solved in a finite sequence of steps. While linear equations can be solved using direct methods such as Gaussian elimination, nonlinear equations usually require iterative methods. In iterative methods, an approximate solution is refined with each iteration until it is determined to be sufficiently accurate, at which time the iteration terminates. Since it is desirable for iterative methods to converge to the solution as
rapidly as possible, it is necessary to be able to measure the speed with which an iterative method converges.

To that end, we assume that an iterative method generates a sequence of iterates x0, x1, x2, ... that converges to the exact solution x*. Ideally, we would like the error in a given iterate x_{k+1} to be much smaller than the error in the previous iterate x_k. For example, if the error is raised to a power greater than 1 from iteration to iteration, then, because the error is typically less than 1, it will approach zero very rapidly. This leads to the following definition.

Definition (Rate of Convergence): Let {x_k}, k = 0, 1, 2, ..., be a sequence in Rⁿ that converges to x* in Rⁿ, and assume that x_k ≠ x* for each k. We say that the rate of convergence of {x_k} to x* is of order r, with asymptotic error constant C, if

    lim_{k→∞} ‖x_{k+1} - x*‖ / ‖x_k - x*‖^r = C,

where r >= 1 and C > 0. If r = 1, we say that convergence is linear. If r = 2, then the method converges quadratically, and if r = 3, we say it converges cubically, and so on.

Of course, we cannot use the error e_k = x_k - x* to determine the rate of convergence experimentally, because we do not know the solution x*. Instead, we can use the relative change in successive iterates, but it is advisable to also compute ‖f(x_k)‖ after each iteration, to ensure that the iterates are actually converging to a solution of f(x) = 0.

In the remainder of this lecture, we will focus on the case where f is a scalar-valued function of a single variable. We will examine the various methods we have discussed for solving f(x) = 0 and attempt to determine their rates of convergence analytically. In this case, we can work with the error e_k = x_k - x*, where x_k is the kth iterate that is computed using the given method, and x* is the exact solution to which the sequence of iterates converges.

Bisection

For this method, it is easier to determine the rate of convergence if we use a different measure of the error in each iterate x_k. Since each iterate is contained within an interval [a_k, b_k], where

    b_k - a_k = 2^{-(k-1)} (b_1 - a_1),

with [a_1, b_1] being the original interval, it follows that
we can bound the error x_k - x* by e_k = b_k - a_k. Using this measure, we can easily conclude that bisection converges linearly, with asymptotic error constant 1/2.

Fixed-point Iteration

Suppose that we are using Fixed-point Iteration to solve the equation g(x) = x, where g is continuously differentiable on an interval [a, b]. Starting with the formula for computing iterates in Fixed-point Iteration,

    x_{k+1} = g(x_k),

we can use the Mean Value Theorem to obtain

    e_{k+1} = x_{k+1} - x* = g(x_k) - g(x*) = g′(ξ_k)(x_k - x*) = g′(ξ_k) e_k,

where ξ_k lies between x_k and x*. It follows that, if g maps [a, b] into itself, and |g′(x)| <= k < 1 on [a, b] for some constant k, then, for any initial iterate x0 in [a, b], Fixed-point Iteration converges linearly with asymptotic error constant |g′(x*)|, since, by the definition of ξ_k and the continuity of g′,

    lim_{k→∞} |e_{k+1}| / |e_k| = lim_{k→∞} |g′(ξ_k)| = |g′(x*)|.

Recall that the conditions we have stated for linear convergence are nearly identical to the conditions for g to have a unique fixed point in [a, b]. The only difference is that now we also require g′ to be continuous on [a, b].

Now suppose that, in addition to the previous conditions on g, we assume that g′(x*) = 0, and that g is twice continuously differentiable on [a, b]. Then, using Taylor's Theorem, we obtain

    e_{k+1} = g(x_k) - g(x*) = g′(x*)(x_k - x*) + (1/2) g″(ξ_k)(x_k - x*)² = (1/2) g″(ξ_k) e_k²,

where ξ_k lies between x_k and x*. It follows that, for any initial iterate x0 in [a, b], Fixed-point Iteration converges at least quadratically, with asymptotic error constant |g″(x*)|/2. This discussion implies the following general result.

Theorem: Let g(x) be a function that is n times continuously differentiable on an interval [a, b]. Furthermore, assume that g(x) is in [a, b] for x in [a, b], and that |g′(x)| <= k on [a, b] for some constant k < 1. If the unique fixed point x* in [a, b] satisfies

    g′(x*) = g″(x*) = ··· = g^(n-1)(x*) = 0,

then, for any x0 in [a, b], Fixed-point Iteration converges to x* of order n, with asymptotic error constant |g^(n)(x*)| / n!.

Newton's Method

Using the same approach as with Fixed-point Iteration, we can determine the convergence rate of Newton's Method applied to the equation f(x) = 0, where we assume that f is continuously differentiable near the exact
solution x*, and that f″ exists near x*. Using Taylor's Theorem to expand f(x*) = 0 about x_k, we obtain

    0 = f(x*) = f(x_k) + f′(x_k)(x* - x_k) + (1/2) f″(ξ_k)(x* - x_k)²,

where ξ_k is between x_k and x*, so that f(x_k) = f′(x_k) e_k - (1/2) f″(ξ_k) e_k². Therefore,

    e_{k+1} = x_{k+1} - x*
            = x_k - f(x_k)/f′(x_k) - x*
            = e_k - [f′(x_k) e_k - (1/2) f″(ξ_k) e_k²] / f′(x_k)
            = [f″(ξ_k) / (2 f′(x_k))] e_k².

We conclude that, if f′(x*) ≠ 0, then Newton's Method converges quadratically, with asymptotic error constant |f″(x*)| / (2|f′(x*)|). It is easy to see from this constant, however, that if f′(x*) is very small, or zero, then convergence can be very slow, or may not even occur.

The Secant Method

The convergence rate of the Secant Method can be determined using a result, which we will not prove here, stating that if {x_k} is the sequence of iterates produced by the Secant Method for solving f(x) = 0, and if this sequence converges to a solution x*, then, for k sufficiently large,

    |x_{k+1} - x*| ≈ S |x_k - x*| |x_{k-1} - x*|

for some constant S. We assume that {x_k} converges to x* of order α, with asymptotic error constant C; that is, |x_{k+1} - x*| ≈ C |x_k - x*|^α for k sufficiently large. Substituting this into the relation above gives

    C |x_k - x*|^α ≈ S |x_k - x*| |x_{k-1} - x*|,

so that |x_k - x*|^{α-1} ≈ (S/C) |x_{k-1} - x*|. On the other hand, |x_k - x*|^{α-1} ≈ C^{α-1} |x_{k-1} - x*|^{α(α-1)}. Both relations can hold only if the exponents of |x_{k-1} - x*| agree; that is, α(α - 1) = 1. This yields the equation

    α² - α - 1 = 0,

which has the solutions

    α = (1 - √5)/2 ≈ -0.618,   α = (1 + √5)/2 ≈ 1.618.

Since we must have α > 1, the rate of convergence is approximately 1.618.

Accelerating Convergence

Suppose that a sequence {x_k} converges linearly to a limit x*, in such a way that, if k is sufficiently large, then x_k - x* has the same sign; that is, {x_k} converges monotonically to x*. It follows from the linear convergence of {x_k} that, for sufficiently large k,

    (x_{k+2} - x*) / (x_{k+1} - x*) ≈ (x_{k+1} - x*) / (x_k - x*).

Jim Lambers
Math 105A
Summer Session I, 2003-04
Lecture 16 Notes

These notes correspond to Sections 4.8 and 4.9 in the text.
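The ratio relation at the end of the Lecture 6 notes above is exactly what Aitken's Δ² method (used in the Lecture 7 examples earlier) exploits: solving it for x* using three consecutive iterates gives the extrapolated value x̂_k. The following is a minimal Python sketch of this idea, not taken from the original notes, applied to the fixed-point iteration x_{k+1} = cos(x_k) from the earlier example:

```python
import math

def aitken(x0, x1, x2):
    # x_hat = x0 - (delta x0)^2 / (delta^2 x0), obtained by solving the
    # ratio relation (x2 - x*)/(x1 - x*) = (x1 - x*)/(x0 - x*) for x*
    return x0 - (x1 - x0) ** 2 / (x2 - 2 * x1 + x0)

# plain fixed-point iterates x_{k+1} = cos(x_k), starting from x0 = 0.5
x = [0.5]
for _ in range(6):
    x.append(math.cos(x[-1]))

# accelerated sequence: one Aitken value per triple of consecutive iterates
accelerated = [aitken(x[k], x[k + 1], x[k + 2]) for k in range(5)]
```

With the numbers from the example, `accelerated[0]` reproduces x̂0 ≈ 0.73139 and `accelerated[4]` reproduces x̂4 ≈ 0.73880, both far closer to the fixed point 0.7390851... than the raw iterate x6 ≈ 0.71917.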
Other Integration Problems

In this section, we discuss integration problems that pose various difficulties, and how such difficulties can be addressed.

Tabular Data

The methods we have discussed so far have assumed that the integrand f(x) can be evaluated at any point within the interval [a, b]. Unfortunately, this is not the case if the integrand is described by tabular data; that is, data gathered at a finite set of points. In such cases, one can use piecewise interpolation to construct a function that fits this data. Then this function's integral can be computed using a composite quadrature rule, consisting of interpolatory quadrature rules applied to each piece.

Improper Integrals

An improper integral is an integral in which either of the limits of integration, or the integrand itself, is unbounded. Although such integrals cannot be defined as limits of Riemann sums, it is sometimes possible to use other limiting processes to obtain well-defined values. We first consider the integral

    I(f) = ∫_a^∞ f(x) dx.

The value of this integral is given by

    I(f) = lim_{b→∞} ∫_a^b f(x) dx,

provided f is integrable on any finite interval [a, b], and the above limit exists and is finite. If it does, then, for any ε > 0, there exists a value of b such that

    | I(f) - ∫_a^b f(x) dx | = | ∫_b^∞ f(x) dx | < ε.

Therefore, I(f) can be approximated by evaluating the finite integral obtained by choosing a sufficiently large value for b. An alternative approach is to use a transformation of the integration variable to obtain an integral that has finite limits.

Next, we consider the integral

    I(f) = ∫_a^b f(x) dx,

where the integrand f(x) has a singularity at some point c in [a, b], and is therefore unbounded at c. We can define the value of I(f) to be

    I(f) = lim_{δ→0⁺} ∫_a^{c-δ} f(x) dx + lim_{δ→0⁺} ∫_{c+δ}^b f(x) dx,

provided f is integrable on the indicated intervals, and the limits exist and are finite. Such an integral is most efficiently evaluated by using a transformation of the integration variable to obtain an integrand that does not have a singularity, or by subtracting the integral of an analytically integrable function that has the same singularity as f(x).

Double
Integrals

As many problems in scientific computing involve two-dimensional domains, it is essential to be able to compute integrals over such domains. Such integrals can be evaluated using the following strategies:

• If a two-dimensional domain Ω can be decomposed into rectangles, then the integral of a function f(x, y) over Ω can be computed by evaluating integrals of the form

    I(f) = ∫_a^b ∫_c^d f(x, y) dy dx.

Then, to evaluate I(f), one can use a Cartesian product rule, whose nodes and weights are obtained by combining one-dimensional quadrature rules that are applied to each dimension. For example, if functions of x are integrated along the line between x = a and x = b using nodes x_i and weights w_i, for i = 1, ..., n, and if functions of y are integrated along the line between y = c and y = d using nodes y_j and weights z_j, for j = 1, ..., m, then the resulting Cartesian product rule

    Q_{n,m}(f) = Σ_{i=1}^n Σ_{j=1}^m f(x_i, y_j) w_i z_j

has nodes (x_i, y_j) and corresponding weights w_i z_j, for i = 1, ..., n and j = 1, ..., m.

Jim Lambers
Math 105A
Summer Session I, 2003-04
Lecture 8 Notes

These notes correspond to Section 3.1 in the text.

Interpolation

Calculus provides many tools that can be used to understand the behavior of functions, but in most cases it is necessary for these functions to be continuous or differentiable. This presents a problem in most real applications, in which functions are used to model relationships between quantities, but our only knowledge of these functions consists of a set of discrete data points, where the data is obtained from measurements. Therefore, we need to be able to construct continuous functions based on discrete data. The problem of constructing such a continuous function is called data fitting. In this lecture, we discuss a special case of data fitting known as interpolation, in which the goal is to find a linear combination of n known functions to fit a set of data that imposes n constraints, thus guaranteeing a unique solution that fits the data exactly, rather than approximately. The broader term "constraints" is used, rather than simply "data points", since the
description of the data may include additional information, such as rates of change, or requirements that the fitting function have a certain number of continuous derivatives.

When it comes to the study of functions using calculus, polynomials are particularly simple to work with. Therefore, in this course, we will focus on the problem of constructing a polynomial that, in some sense, fits given data. We first discuss some algorithms for computing the unique polynomial p_n(x) of degree n that satisfies p_n(x_i) = y_i, i = 0, ..., n, where the points (x_i, y_i) are given. The points x_0, x_1, ..., x_n are called interpolation points. The polynomial p_n(x) is called the interpolating polynomial of the data (x_0, y_0), (x_1, y_1), ..., (x_n, y_n). At first, we will assume that the interpolation points are all distinct; this assumption will be relaxed in a later lecture.

Computing the Interpolating Polynomial

If the interpolation points x_0, ..., x_n are distinct, then the process of finding a polynomial that passes through the points (x_i, y_i), i = 0, ..., n, is equivalent to solving a system of linear equations Ax = b that has a unique solution. However, different algorithms for computing the interpolating polynomial use a different A, since they each use a different basis for the space of polynomials of degree n.

The most straightforward method of computing the interpolating polynomial is to form the system Ax = b, where b_i = y_i, i = 0, ..., n, and the entries of A are defined by a_{ij} = p_j(x_i), i, j = 0, ..., n, where x_0, x_1, ..., x_n are the points at which the data y_0, y_1, ..., y_n are obtained, and p_j(x) = x^j, j = 0, 1, ..., n. The basis {1, x, ..., x^n} of the space of polynomials of degree n or less is called the monomial basis, and the corresponding matrix A is called the Vandermonde matrix for the points x_0, x_1, ..., x_n. Unfortunately, this matrix can be ill-conditioned, especially when interpolation points are close together.

In Lagrange interpolation, the matrix A is simply the identity matrix, by virtue of the fact that the interpolating polynomial is written in the form

    p_n(x) = Σ_{j=0}^n y_j L_{n,j}(x),

where the polynomials {L_{n,j}}, j = 0, ..., n, have the property that

    L_{n,j}(x_i) = 1 if i = j,   L_{n,j}(x_i) = 0 if i ≠ j.
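The Lagrange form described above translates directly into a short procedure: evaluate each basis polynomial at x and form the weighted sum. The following Python sketch is an illustration added to these notes, not part of the original text; the test polynomial is the cubic from the Newton's Method example:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange-form interpolant of the data (xs, ys) at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        # L_{n,j}(x) = prod over k != j of (x - x_k) / (x_j - x_k);
        # it equals 1 at x_j and 0 at every other interpolation point
        ell = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                ell *= (x - xk) / (xj - xk)
        total += yj * ell
    return total

# a degree-3 interpolant reproduces a cubic exactly at any x
f = lambda t: t**3 - 9*t**2 + 26*t - 24
xs = [0.0, 1.0, 2.5, 4.0]
value = lagrange_eval(xs, [f(t) for t in xs], 3.0)   # should equal f(3.0) = 0
```

Because the interpolant of a degree-3 polynomial through 4 distinct points is that polynomial itself, `value` agrees with f(3.0) up to rounding error; the delta property L_{n,j}(x_i) = δ_{ij} can be checked by interpolating the data y = (0, 1, 0, 0).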
The polynomials L_{n,j}, j = 0, ..., n, are called the Lagrange polynomials for the interpolation points x_0, x_1, ..., x_n. They are defined by

    L_{n,j}(x) = ∏_{k=0, k≠j}^n (x - x_k) / (x_j - x_k).

As the following result indicates, the problem of polynomial interpolation can be solved using Lagrange polynomials.

Theorem: Let x_0, x_1, ..., x_n be n + 1 distinct numbers, and let f(x) be a function defined on a domain containing these numbers. Then the polynomial defined by

    p_n(x) = Σ_{j=0}^n f(x_j) L_{n,j}(x)

is the unique polynomial of degree n that satisfies

    p_n(x_j) = f(x_j),   j = 0, 1, ..., n.

The polynomial p_n(x) is called the interpolating polynomial of f(x). We say that p_n(x) interpolates f(x) at the points x_0, x_1, ..., x_n.

Interpolation Error

In some applications, the interpolating polynomial p_n(x) is used to fit a known function f(x) at the points x_0, ..., x_n, usually because f(x) is not feasible for tasks such as differentiation or integration that are easy for polynomials, or because it is not easy to evaluate f(x) at points other than the interpolation points. In such an application, it is possible to determine how well p_n(x) approximates f(x). From Taylor's theorem, we obtain the following result.

Theorem (Interpolation error): If f is n + 1 times continuously differentiable on [a, b], and p_n(x) is the unique polynomial of degree n that interpolates f(x) at the n + 1 distinct points x_0, x_1, ..., x_n in [a, b], then, for each x in [a, b],

    f(x) - p_n(x) = [f^(n+1)(ξ) / (n + 1)!] ∏_{j=0}^n (x - x_j),

where ξ is in (a, b).

It is interesting to note that the error closely resembles the Taylor remainder. Is it possible to choose the interpolation points so that the error is minimized? To answer this question, we introduce the Chebyshev polynomials

    T_k(x) = cos(k cos⁻¹(x)),

which satisfy the three-term recurrence relation

    T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x),   T_0(x) ≡ 1,   T_1(x) ≡ x.

These polynomials have the property that |T_k(x)| <= 1 on the interval [-1, 1], while they grow rapidly outside of this interval. Furthermore, the roots of these polynomials lie within the interval [-1, 1]. Therefore, if the interpolation points x_0, x_1, ..., x_n are chosen to be the images of the roots of the (n + 1)st-degree Chebyshev polynomial under a linear transformation that maps [-1, 1] to [a, b], then it follows
that

    ∏_{j=0}^n |x - x_j| <= (b - a)^{n+1} / 2^{2n+1},   x in [a, b],

and this choice of points minimizes the maximum of the product over [a, b]. Therefore, the error in interpolating f(x) by an nth-degree polynomial is determined entirely by f^(n+1).

Neville's Method

While the Lagrange polynomials are easy to compute, they are difficult to differentiate or integrate. Furthermore, if new interpolation points are added, all of the Lagrange polynomials must be recomputed. Unfortunately, it is not uncommon, in practice, to add to an existing set of interpolation points. It may be determined, after computing the kth-degree interpolating polynomial p_k(x) of a function f(x), that p_k(x) is not a sufficiently accurate approximation of f(x) on some domain. Therefore, an interpolating polynomial of higher degree must be computed, which requires additional interpolation points.

Jim Lambers
Math 105A
Summer Session I, 2003-04
Lecture 11 Notes

These notes correspond to Sections 4.1 and 4.2 in the text.

Numerical Differentiation

We now discuss the other fundamental problem from calculus that frequently arises in scientific applications: the problem of computing the derivative of a given function.

Finite Difference Approximations

Recall that the derivative of f(x) at a point x0, denoted f′(x0), is defined by

    f′(x0) = lim_{h→0} [f(x0 + h) - f(x0)] / h.

This definition suggests a method for approximating f′(x0). If we choose h to be a small positive constant, then

    f′(x0) ≈ [f(x0 + h) - f(x0)] / h.

This approximation is called the forward difference formula. To estimate the accuracy of this approximation, we note that, if f″(x) exists on [x0, x0 + h], then, by Taylor's Theorem,

    f(x0 + h) = f(x0) + f′(x0) h + f″(ξ) h²/2,

where ξ is in [x0, x0 + h]. Solving for f′(x0), we obtain

    f′(x0) = [f(x0 + h) - f(x0)] / h - f″(ξ) h/2,

so the error in the forward difference formula is O(h). We say that this formula is first-order accurate.

The forward difference formula is called a finite difference approximation to f′(x0), because it approximates f′(x0) using values of f(x) at points that have a small, but finite, distance between them, as opposed to the definition of the derivative, which takes a limit and therefore computes the derivative using an "infinitely small" value of h. The forward
difference formula, however, is just one example of a finite difference approximation. If we replace h by -h in the forward difference formula, where h is still positive, we obtain the backward difference formula

    f′(x0) ≈ [f(x0) - f(x0 - h)] / h.

Like the forward difference formula, the backward difference formula is first-order accurate. If we average these two approximations, we obtain the centered difference formula

    f′(x0) ≈ [f(x0 + h) - f(x0 - h)] / (2h).

To determine the accuracy of this approximation, we assume that f‴(x) exists on the interval [x0 - h, x0 + h], and then apply Taylor's Theorem again to obtain

    f(x0 + h) = f(x0) + f′(x0) h + [f″(x0)/2] h² + [f‴(ξ)/6] h³,
    f(x0 - h) = f(x0) - f′(x0) h + [f″(x0)/2] h² - [f‴(ξ̄)/6] h³,

where ξ is in [x0, x0 + h] and ξ̄ is in [x0 - h, x0]. Subtracting the second equation from the first, and solving for f′(x0), yields

    f′(x0) = [f(x0 + h) - f(x0 - h)] / (2h) - [(f‴(ξ) + f‴(ξ̄))/12] h².

By the Intermediate Value Theorem, f‴(x) must assume every value between f‴(ξ̄) and f‴(ξ) on the interval (ξ̄, ξ), including the average of these two values. Therefore, we can simplify this equation to

    f′(x0) = [f(x0 + h) - f(x0 - h)] / (2h) - [f‴(ξ)/6] h²,

where ξ is in [x0 - h, x0 + h]. We conclude that the centered difference formula is second-order accurate. This is due to the cancellation of the terms involving f″(x0).

While Taylor's Theorem can be used to derive formulas with higher-order accuracy simply by evaluating f(x) at more points, this process can be tedious. An alternative approach is to compute the derivative of the interpolating polynomial that fits f(x) at these points. Specifically, suppose we want to compute the derivative at a point x0 using the data

    (x_{-j}, y_{-j}), ..., (x_{-1}, y_{-1}), (x0, y0), (x1, y1), ..., (x_k, y_k),

where j and k are known nonnegative integers, x_{-j} < x_{-j+1} < ··· < x_{k-1} < x_k, and y_i = f(x_i) for i = -j, ..., k. Then a finite difference formula for f′(x0) can be obtained by analytically computing the derivatives of the Lagrange polynomials {L_{n,i}(x)}, i = -j, ..., k, for these points, where n = j + k + 1, and the values of these derivatives at x0 are the proper weights for the function values y_{-j}, ..., y_k. If f(x) is n + 1 times continuously differentiable on [x_{-j}, x_k], then we obtain an approximation of the form

    f′(x0) = Σ_{i=-j}^k y_i L′_{n,i}(x0) + [f^(n+1)(ξ) / (n + 1)!] ∏_{i=-j, i≠0}^k (x0 - x_i),
where ξ is in [x_{-j}, x_k].

Among the best-known finite difference formulas that can be derived using this approach is the second-order-accurate three-point formula

    f′(x0) = [-3f(x0) + 4f(x0 + h) - f(x0 + 2h)] / (2h) + [f‴(ξ)/3] h²,   ξ in [x0, x0 + 2h],

which is useful when there is no information available about f(x) for x < x0. If there is no information available about f(x) for x > x0, then we can replace h by -h in the above formula to obtain a second-order-accurate three-point formula that uses the values of f(x) at x0, x0 - h, and x0 - 2h.

Another formula is the five-point formula

    f′(x0) = [f(x0 - 2h) - 8f(x0 - h) + 8f(x0 + h) - f(x0 + 2h)] / (12h) + [f^(5)(ξ)/30] h⁴,   ξ in [x0 - 2h, x0 + 2h],

which is fourth-order accurate. The reason it is called a five-point formula, even though it uses the value of f(x) at four points, is that it is derived from the Lagrange polynomials for the five points x0 - 2h, x0 - h, x0, x0 + h, and x0 + 2h. However, f(x0) is not used in the formula, because L′_{4,0}(x0) = 0, where L_{4,0}(x) is the Lagrange polynomial that is equal to one at x0 and zero at the other four points.

If we do not have any information about f(x) for x < x0, then we can use the following five-point formula, which actually uses the values of f(x) at five points:

    f′(x0) = [-25f(x0) + 48f(x0 + h) - 36f(x0 + 2h) + 16f(x0 + 3h) - 3f(x0 + 4h)] / (12h) + [f^(5)(ξ)/5] h⁴,

where ξ is in [x0, x0 + 4h]. As before, we can replace h by -h to obtain a similar formula that approximates f′(x0) using the values of f(x) at x0, x0 - h, x0 - 2h, x0 - 3h, and x0 - 4h.

The strategy of differentiating Lagrange polynomials to approximate derivatives can be used to approximate higher-order derivatives. For example, the second derivative can be approximated using a centered difference formula,

    f″(x0) ≈ [f(x0 + h) - 2f(x0) + f(x0 - h)] / h²,

which is second-order accurate.

In a practical implementation of finite difference formulas, it is essential to note that roundoff error in evaluating f(x) is bounded independently of the spacing h between points at which f(x) is evaluated. It follows that the roundoff error in the approximation of f′(x) actually increases as h decreases, because the errors incurred by evaluating f(x) are divided by h. Therefore, one must choose h
sufficiently small so that the finite difference formula can produce an accurate approximation, and sufficiently large so that this approximation is not too contaminated by roundoff error.

Jim Lambers
Math 105A
Summer Session I, 2003-04
Lecture 3 Notes

These notes correspond to Section 1.3 in the text.

Approximations in Numerical Analysis

Mathematical problems arising from scientific applications present a wide variety of difficulties that prevent us from solving them exactly. This has led to an equally wide variety of techniques for computing approximations to quantities occurring in such problems, in order to obtain approximate solutions. In this lecture, we will describe the types of approximations that can be made, and learn some basic techniques for analyzing the accuracy of these approximations.

Sources of Approximation

Suppose that we are attempting to solve a particular instance of a problem arising from a mathematical model of a scientific application. We say that such a problem is well-posed if it meets the following criteria:

• The problem has a unique solution.
• A small perturbation in the problem data results in a small perturbation in the solution; i.e., the solution depends continuously on the data.

By the first condition, the process of solving a well-posed problem can be seen to be equivalent to the evaluation of some function f at some known value x, where x represents the problem data. Since, in many cases, knowledge of the function f is limited, the task of computing f(x) can be viewed, at least conceptually, as the execution of some (possibly infinite) sequence of steps that solves the underlying problem for the data x. The goal in numerical analysis is to develop a finite sequence of steps, i.e., an algorithm, for computing an approximation to the value f(x).

There are two general types of error that occur in the process of computing this approximation to f(x):

1. Data error is the error in the data x. In reality, numerical analysis involves solving a problem with approximate data x̂. The exact data is
often unavailable, because it must be obtained by measurements or other computations that fail to be exact due to limited precision. In addition, data may be altered in order to simplify the solution process.

2. Computational error refers to the error that occurs when attempting to compute f(x̂). Effectively, we must approximate f(x̂) by the quantity f̂(x̂), where f̂ is a function that approximates f. This approximation may be the result of truncation, which occurs when it is not possible to evaluate f exactly using a finite sequence of steps, and therefore a finite sequence that evaluates f approximately must be used instead. This particular source of computational error will be discussed in this lecture. Another source of computational error is roundoff error, which was discussed in the previous lecture.

Basics of Error Analysis

Intuitively, it is not difficult to conclude that any scientific computation can include several approximations, each of which introduces error in the computed solution. Therefore, it is necessary to understand the effects of these approximations on accuracy. The study of these effects is known as error analysis. Error analysis will be a recurring theme in this course. In this lecture, we will introduce some basic concepts that will play a role in error analyses of specific algorithms in later lectures.

Forward Error and Backward Error

Suppose that we compute an approximation ŷ of the value y = f(x), for a given function f and given problem data x. Before we can analyze the accuracy of this approximation, we must have a precisely defined notion of error in such an approximation. We now provide this precise definition.

Definition (Forward Error): Let x be a real number, and let f: R → R be a function. If ŷ is a real number that is an approximation to y = f(x), then the forward error in ŷ is the difference Δy = ŷ - y. If y ≠ 0, then the relative forward error in ŷ is defined by

    Δy / y = (ŷ - y) / y.

Clearly, our primary goal in error analysis is to obtain an estimate of the forward error Δy. Unfortunately, it can be difficult to obtain this estimate directly. An alternative approach is to instead view the computed value ŷ as the exact solution of a problem with modified data; i.e., ŷ = f(x̂), where x̂ is a perturbation of x.

Definition (Backward Error): Let x be a real number, and let f: R → R be a function. Suppose that the real number ŷ is an approximation to y = f(x), and that ŷ is in the range of f; that is, ŷ = f(x̂) for some real number x̂. Then the quantity Δx = x̂ - x is the backward error in ŷ. If x ≠ 0, then the relative backward error in ŷ is defined by

    Δx / x = (x̂ - x) / x.

The process of estimating Δx is known as backward error analysis. As we will see, this estimate of the backward error, in conjunction with knowledge of f, can be used to estimate the forward error.

As discussed in the previous lecture, floating-point arithmetic does not follow the laws of real arithmetic. This tends to make forward error analysis difficult. In backward error analysis, however, real arithmetic is employed, since it is assumed that the computed result is the exact solution to a modified problem. This is one reason why backward error analysis is sometimes preferred. In analysis of roundoff error, it is assumed that

    fl(x op y) = (x op y)(1 + δ),

where op is an arithmetic operation and δ is an unknown constant satisfying |δ| <= ε_mach. From this assumption, it can be seen that the relative error in fl(x op y) is |δ|. In the case of addition, the relative backward error in each operand is also |δ|.

Sensitivity and Conditioning

In most cases, the goal of error analysis is to obtain an estimate of the forward relative error [f̂(x) - f(x)] / f(x), but it is often easier to instead estimate the relative backward error (x̂ - x) / x. Therefore, it is necessary to be able to estimate the forward error in terms of the backward error. The following definition addresses this need.

Definition (Condition Number): Let x be a real number, and let f: R → R be a function. The absolute condition number, denoted by κ_abs, is the ratio of the magnitude of the forward error to the magnitude of the backward error:

    κ_abs = |Δy| / |Δx|.

If f(x) ≠ 0, then the relative
condition number of the problem of computing $y = f(x)$, denoted by $\kappa_{\mathrm{rel}}$, is the ratio of the magnitude of the relative forward error to the magnitude of the relative backward error,
$$\kappa_{\mathrm{rel}} = \frac{|\Delta y / y|}{|\Delta x / x|}.$$

Intuitively, either condition number is a measure of the change in the solution due to a change in the data. Since the relative condition number tends to be a more reliable measure of this change, it is sometimes referred to as simply the condition number. If the condition number is large, e.g., much greater than 1, then a small change in the data can cause a disproportionately large change in the solution, and the problem is said to be ill-conditioned or sensitive. If the condition number is small, then the problem is said to be well-conditioned or insensitive.

Since the condition number, as defined above, depends on knowledge of the exact solution $y = f(x)$, it is necessary to estimate the condition number in order to estimate the relative forward error. To that end, we assume, for simplicity, that $f : \mathbb{R} \to \mathbb{R}$ is differentiable, and obtain
$$\kappa_{\mathrm{rel}} = \frac{|\Delta y / y|}{|\Delta x / x|} = \frac{|x|\,|f(\hat{x}) - f(x)|}{|f(x)|\,|\Delta x|} \approx \frac{|x|\,|f'(x)\,\Delta x|}{|f(x)|\,|\Delta x|} = \left|\frac{x f'(x)}{f(x)}\right|.$$
Therefore, if we can estimate the backward error $\Delta x$, and if we can bound $f$ and $f'$ near $x$, we can then bound the condition number and obtain an estimate of the relative forward error. Of course, the condition number is undefined if the exact value $f(x)$ is zero. In this case, we can instead use the absolute condition number. Using the same approach as before, the absolute condition number can be estimated using the derivative of $f$; specifically, we have $\kappa_{\mathrm{abs}} \approx |f'(x)|$.

Conditioning, Stability, and Accuracy

Determining the condition, or sensitivity, of a problem is an important task in the error analysis of an algorithm designed to solve the problem, but it does not provide sufficient information to determine whether an algorithm will yield an accurate approximate solution. Recall that the condition number of a function $f$ depends on, among other things, the absolute forward error $|\hat{f}(x) - f(x)|$. However, an algorithm for evaluating $f(x)$
actually evaluates a function $\hat{f}$ that approximates $f$, producing an approximation $\hat{y} = \hat{f}(x)$ to the exact solution $y = f(x)$. In our definition of backward error, we have assumed that $\hat{y} = f(\hat{x})$ for some $\hat{x}$ that is close to $x$; i.e., our approximate solution to the original problem is the exact solution to a nearby problem. This assumption has allowed us to define the condition number of $f$ independently of any approximation $\hat{f}$. This independence is necessary, because the sensitivity of a problem depends solely on the problem itself, and not on any algorithm that may be used to approximately solve it.

Is it always reasonable to assume that any approximate solution is the exact solution to a nearby problem? Unfortunately, it is not. It is possible that an algorithm that yields an accurate approximation for given data may be unreasonably sensitive to perturbations in that data. This leads to the concept of a stable algorithm: an algorithm applied to a given problem with given data $x$ is said to be stable if it computes an approximate solution that is the exact solution to the same problem with data $\hat{x}$, where $\hat{x}$ is a small perturbation of $x$.

It can be shown that if a problem is well-conditioned, and if we have a stable algorithm for solving it, then the computed solution can be considered accurate, in the sense that the relative error in the computed solution is small. On the other hand, a stable algorithm applied to an ill-conditioned problem cannot be expected to produce an accurate solution.

Jim Lambers
Math 105A Summer Session I 2003-04
Lecture 6 Examples

These examples correspond to Sections 2.3 and 2.4 in the text.

Example We use the Method of Regula Falsi (False Position) to solve $f(x) = 0$, where $f(x) = x^2 - 2$. First, we must choose two initial guesses $x_0$ and $x_1$ such that $f$ changes sign between $x_0$ and $x_1$. Choosing $x_0 = 1$ and $x_1 = 1.5$, we see that $f(x_0) = f(1) = -1$ and $f(x_1) = f(1.5) = 0.25$, so these choices are suitable. Next, we use the Secant Method to compute the next iterate $x_2$ by determining the point at which the secant line passing through the points $(x_0, f(x_0))$ and $(x_1, f(x_1))$
intersects the line $y = 0$. We have
$$x_2 = x_1 - f(x_1)\,\frac{x_1 - x_0}{f(x_1) - f(x_0)} = 1.5 - 0.25\,\frac{1.5 - 1}{0.25 - (-1)} = 1.4.$$
Computing $f(x_2)$, we obtain $f(1.4) = -0.04 < 0$. Since $f(x_2) < 0$ and $f(x_1) > 0$, we can use the Intermediate Value Theorem to conclude that a solution exists in the interval $(x_2, x_1)$. Therefore, we compute $x_3$ by determining where the secant line through the points $(x_1, f(x_1))$ and $(x_2, f(x_2))$ intersects the line $y = 0$. Using the formula for the Secant Method, we obtain
$$x_3 = x_2 - f(x_2)\,\frac{x_2 - x_1}{f(x_2) - f(x_1)} = 1.4 - (-0.04)\,\frac{1.4 - 1.5}{-0.04 - 0.25} \approx 1.41379.$$
Since $f(x_3) < 0$ and $f(x_2) < 0$, we do not know that a solution exists in the interval $(x_2, x_3)$. However, we do know that a solution exists in the interval $(x_3, x_1)$, because $f(x_1) > 0$. Therefore, instead of proceeding as in the Secant Method and using the secant line determined by $x_2$ and $x_3$ to compute $x_4$, we use the secant line determined by $x_1$ and $x_3$ to compute $x_4$. By the Intermediate Value Theorem, and the fact that this secant line intersects the graph of $f(x)$ at $x = x_1$ and $x = x_3$, we can conclude that $x_4$ must fall between $x_3$ and $x_1$, so then we

Jim Lambers
Math 105A Summer Session I 2003-04
Lecture 12 Notes

These notes correspond to Sections 4.3 and 4.4 in the text.

Integration

Numerous applications call for the computation of the integral of some function $f : \mathbb{R} \to \mathbb{R}$ over an interval $[a, b]$,
$$I(f) = \int_a^b f(x)\,dx.$$
In some cases, $I(f)$ can be computed by applying the Fundamental Theorem of Calculus and computing $I(f) = F(b) - F(a)$, where $F$ is an antiderivative of $f$. Unfortunately, this is not practical if an antiderivative of $f$ is not available. In such cases, numerical techniques must be employed instead.

Well-Posedness

The integral $I(f)$ is defined using sequences of Riemann sums
$$R_n = \sum_{i=1}^n f(\xi_i)(x_{i+1} - x_i), \qquad x_i \leq \xi_i \leq x_{i+1},$$
where $a = x_1 < x_2 < \cdots < x_{n+1} = b$. If any such sequence $\{R_n\}_{n=1}^\infty$ converges to the same finite value $R$ as $n \to \infty$, then $f$ is said to be Riemann integrable on $[a, b]$, and $I(f) = R$. A function is Riemann integrable if and only if it is bounded and continuous except on a set of measure zero. For such functions, the problem of computing $I(f)$ has a unique solution, by the definition of $I(f)$. To determine the sensitivity of $I(f)$, we
define the $\infty$-norm of a function $f(x)$ by
$$\|f\|_\infty = \max_{a \leq x \leq b} |f(x)|,$$
and let $\hat{f}$ be a perturbation of $f$ that is also Riemann integrable. Then the condition number of the problem of computing $I(f)$ can be approximated by
$$\frac{|I(\hat{f}) - I(f)|/|I(f)|}{\|\hat{f} - f\|_\infty/\|f\|_\infty} = \frac{|I(\hat{f} - f)|}{\|\hat{f} - f\|_\infty}\,\frac{\|f\|_\infty}{|I(f)|} \leq (b - a)\,\frac{\|f\|_\infty}{|I(f)|},$$
from which it follows that the problem is fairly well-conditioned in most cases. (If $|I(f)|$ is small, we should instead use the absolute condition number, which is bounded by $b - a$.) Similarly, perturbations of the endpoints $a$ and $b$ do not lead to large perturbations in $I(f)$, in most cases.

Numerical Quadrature

Clearly, if $f$ is a Riemann integrable function and $\{R_n\}_{n=1}^\infty$ is any sequence of Riemann sums that converges to $I(f)$, then any particular Riemann sum $R_n$ can be viewed as an approximation of $I(f)$. However, such an approximation is usually not practical, since a large value of $n$ may be necessary to achieve sufficient accuracy. Instead, we use a quadrature rule to approximate $I(f)$. A quadrature rule is a sum of the form
$$Q_n(f) = \sum_{i=1}^n f(x_i)\,w_i,$$
where the points $x_i$, $i = 1, \ldots, n$, are called the nodes of the quadrature rule, and the numbers $w_i$, $i = 1, \ldots, n$, are the weights. We say that a quadrature rule is open if the nodes do not include the endpoints $a$ and $b$, and closed if they do.

The objective in designing quadrature rules is to achieve sufficient accuracy in approximating $I(f)$, for any Riemann integrable function $f$, while using as few nodes as possible in order to maximize efficiency. In order to determine suitable nodes and weights, we consider the following questions:

- For what functions $f$ is $I(f)$ easy to compute?
- Given a general Riemann integrable function $f$, can $I(f)$ be approximated by the integral $I(g)$ of a function $g$ for which $I(g)$ is easy to compute?

One class of functions for which integrals are easily evaluated is the class of polynomial functions. If we choose $n$ nodes $x_1, \ldots, x_n$, then any polynomial $p_{n-1}(x)$ of degree $n - 1$ can be written in the form
$$p_{n-1}(x) = \sum_{i=1}^n p_{n-1}(x_i)\,\mathcal{L}_{n-1,i}(x),$$
where $\mathcal{L}_{n-1,i}(x)$ is the $i$th Lagrange polynomial for the points $x_1, \ldots, x_n$. It
follows that
$$I(p_{n-1}) = \int_a^b p_{n-1}(x)\,dx = \int_a^b \sum_{i=1}^n p_{n-1}(x_i)\,\mathcal{L}_{n-1,i}(x)\,dx = \sum_{i=1}^n p_{n-1}(x_i)\left(\int_a^b \mathcal{L}_{n-1,i}(x)\,dx\right) = \sum_{i=1}^n p_{n-1}(x_i)\,w_i = Q_n(p_{n-1}),$$
where
$$w_i = \int_a^b \mathcal{L}_{n-1,i}(x)\,dx, \qquad i = 1, \ldots, n,$$
are the weights of a quadrature rule with nodes $x_1, \ldots, x_n$. Therefore, any $n$-point quadrature rule with these weights computes $I(f)$ exactly when $f$ is a polynomial of degree less than $n$. For a more general function $f$, we can use this quadrature rule to approximate $I(f)$ by $I(p_{n-1})$, where $p_{n-1}$ is the polynomial that interpolates $f$ at the points $x_1, \ldots, x_n$. Quadrature rules that use the weights defined above, for given nodes $x_1, \ldots, x_n$, are called interpolatory quadrature rules.

We say that an interpolatory quadrature rule has degree of accuracy $n$ if it integrates polynomials of degree $n$ exactly, but is not exact for polynomials of degree $n + 1$. If the weights $w_i$, $i = 1, \ldots, n$, are nonnegative, then the quadrature rule is stable, as its absolute condition number can be bounded by $b - a$, which is the same absolute condition number as the underlying integration problem. However, if any of the weights are negative, then the condition number can be arbitrarily large. In the remainder of this section, we discuss specific interpolatory quadrature rules.

Newton-Cotes Quadrature

The family of Newton-Cotes quadrature rules consists of interpolatory quadrature rules in which the nodes are equally spaced points within the interval $[a, b]$. The most commonly used Newton-Cotes rules are:

- The Midpoint Rule, which is an open rule with one node, is defined by
$$\int_a^b f(x)\,dx \approx (b - a)\,f\!\left(\frac{a + b}{2}\right).$$
It is of degree one, and it is based on the principle that the area under $f(x)$ can be approximated by the area of a rectangle with width $b - a$ and height $f(m)$, where $m = (a + b)/2$ is the midpoint of the interval $[a, b]$.

- The Trapezoidal Rule, which is a closed rule with two nodes, is defined by
$$\int_a^b f(x)\,dx \approx \frac{b - a}{2}\left[f(a) + f(b)\right].$$
It is of degree one, and it is based on the principle that the area under $f(x)$ from $x = a$ to $x = b$ can be approximated by the area of a trapezoid with heights $f(a)$ and $f(b)$ and width $b - a$.

- Simpson's Rule, which is a closed rule with three nodes, is defined by
$$\int_a^b f(x)\,dx \approx \frac{b - a}{6}\left[f(a) + 4 f\!\left(\frac{a + b}{2}\right) + f(b)\right].$$
It is of
degree three, and it is derived by computing the integral of the quadratic polynomial that interpolates $f(x)$ at the points $a$, $(a + b)/2$, and $b$.

In general, the degree of accuracy of Newton-Cotes rules can easily be determined by expanding the integrand $f(x)$ in a Taylor series. This technique can be used to show that $n$-point Newton-Cotes rules with an odd number of nodes have degree $n$, which is surprising, since, in general, interpolatory $n$-point quadrature rules have degree $n - 1$. This extra degree of accuracy is due to the cancellation of the high-order error terms in the Taylor expansion used to determine the error. Such cancellation does not occur with Newton-Cotes rules that have an even number of nodes.

Unfortunately, Newton-Cotes rules are not practical when the number of nodes is large, due to the inaccuracy of high-degree polynomial interpolation using equally spaced points. Furthermore, for $n \geq 11$, $n$-point Newton-Cotes rules have at least one negative weight, and therefore such rules can be ill-conditioned.

Clenshaw-Curtis Quadrature

An alternative to Newton-Cotes rules are Clenshaw-Curtis rules, whose nodes are determined by Chebyshev polynomials. The roots of the $n$th Chebyshev polynomial yield the $n$-point classical Clenshaw-Curtis rule, while the interior extrema of the $n$th Chebyshev polynomial yield the $(n-1)$-point practical Clenshaw-Curtis rule. The latter rule is called practical because it has the property that the nodes of the $(2n-1)$-point rule include the nodes of the $n$-point rule. Such a sequence of quadrature rules that reuses nodes is said to be progressive. The Clenshaw-Curtis rules have several advantages over Newton-Cotes rules:

- Their weights are always positive, making them much more stable than Newton-Cotes rules.

Jim Lambers
Math 105A Summer Session I 2003-04
Lecture 1 Examples

These examples correspond to Section 1.1 in the text.

Example Let the sequence $\{f_n\}_{n=1}^\infty$ be defined by $f_n = 1/n$ for every positive integer $n$. Then
$$\lim_{n \to \infty} f_n = 0,$$
since, for any $\epsilon > 0$, no matter how small, we can find a
positive integer $n_0$ such that $|f_n| < \epsilon$ for all $n \geq n_0$. In fact, for any given $\epsilon$, we can choose $n_0 = \lceil 1/\epsilon \rceil$, where $\lceil x \rceil$, known as the ceiling function, denotes the smallest integer that is greater than or equal to $x$. □

Example In your calculus course, it is likely that, on at least one occasion, you had to solve a problem involving a function $T(t)$ that described the temperature at time $t$. In a real-world application, such a function would not be described by a formula, as it might be in a calculus course. Instead, it would be described by a set of points $(t_k, T(t_k))$, for $k = 1, 2, \ldots, n$, for some positive integer $n$, where $T(t_k)$, for each $k$, would be obtained by measuring the temperature at time $t_k$. In numerical analysis, one would typically try to construct a continuous function that, in some sense, fits the given data. Then the temperature at other times can be determined, at least approximately, by evaluating this continuous function at those times. The process of inferring the value of a function at a point $x$ based on its values at select points $x_1, \ldots, x_n$ is called interpolation when $x$ lies within the interval spanned by $x_1, \ldots, x_n$, whereas it is called extrapolation otherwise. Once such a continuous function is constructed based on the given data, one can use it to obtain approximate solutions to problems such as determining when the temperature reaches a certain value, such as the boiling point or freezing point, or computing the rate of change of the temperature at a particular point in time. □

Example Newton's Method for solving the equation $f(x) = 0$ is an example of an iterative method that generates a sequence of approximate solutions $\{x_n\}_{n=0}^\infty$ that, ideally, has a limit $x^*$ as $n \to \infty$ that represents the exact solution. Newton's Method generates this sequence by requiring that the initial approximation $x_0$ be chosen, and then computing subsequent approximations using the formula
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad n = 0, 1, 2, \ldots$$
We will see that, while the sequence does not always have a limit, it typically converges to the limit very rapidly when the limit does exist. This is
important because, in a practical application, it is of little use to know that a sequence of approximations has a limit that is the exact solution
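To make the Regula Falsi procedure from the Lecture 6 example above concrete, here is a short Python sketch. It is not part of the original notes; the function name `regula_falsi` and its interface are our own illustrative choices. It simply automates the bracket-preserving secant step described in the example.

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """Method of Regula Falsi (False Position) for solving f(x) = 0.

    Requires f(a) and f(b) to have opposite signs. Each step takes the
    intersection of the secant line through the bracketing points with
    y = 0, then updates the bracket so that a sign change is always
    maintained (unlike the plain Secant Method)."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    iterates = []
    for _ in range(max_iter):
        # Intersection of the secant line through (a, f(a)) and (b, f(b))
        # with the line y = 0
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        iterates.append(c)
        if abs(fc) < tol:
            break
        # Keep the endpoint at which f has the opposite sign of f(c)
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c, iterates

# Reproduce the Lecture 6 example: f(x) = x^2 - 2 with x0 = 1, x1 = 1.5.
# The first two iterates are x2 = 1.4 and x3 = 1.41379..., as computed
# in the example, and the limit is sqrt(2).
root, xs = regula_falsi(lambda x: x * x - 2, 1.0, 1.5)
```

Note that this variant always replaces the endpoint with the same sign as $f(c)$, which is exactly the bracket-maintenance rule the example uses when it computes $x_3$ from $x_1$ and $x_2$ rather than from $x_2$ alone.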

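The Newton-Cotes rules and the notion of degree of accuracy from Lecture 12 above can also be checked numerically. The following Python sketch is not from the original notes; the function names are our own, and `degree_of_accuracy` is a hypothetical helper that finds the largest $k$ for which a rule integrates the monomials $x^0, \ldots, x^k$ exactly on $[a, b]$.

```python
def midpoint(f, a, b):
    # Open one-node rule: rectangle of width (b - a), height f((a + b)/2)
    return (b - a) * f((a + b) / 2)

def trapezoid(f, a, b):
    # Closed two-node rule: trapezoid with heights f(a) and f(b)
    return (b - a) / 2 * (f(a) + f(b))

def simpson(f, a, b):
    # Closed three-node rule: integrates the quadratic interpolant
    # at a, (a + b)/2, and b
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def degree_of_accuracy(rule, a=0.0, b=1.0, kmax=10):
    # Largest k such that the rule integrates x^0, ..., x^k exactly,
    # using the antiderivative x^(k+1)/(k+1) for the exact value
    for k in range(kmax + 1):
        exact = (b ** (k + 1) - a ** (k + 1)) / (k + 1)
        if abs(rule(lambda x: x ** k, a, b) - exact) > 1e-12:
            return k - 1
    return kmax

print(degree_of_accuracy(midpoint))   # 1
print(degree_of_accuracy(trapezoid))  # 1
print(degree_of_accuracy(simpson))    # 3
```

The output illustrates the claim in the notes: the rules with an odd number of nodes (Midpoint, Simpson's) gain an extra degree over the generic interpolatory degree $n - 1$, while the Trapezoidal Rule, with an even number of nodes, does not.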