### Create a StudySoup account

#### Be part of our community, it's free to join!

Already have a StudySoup account? Login here

# INTRONUMANALYS&COMPUT MATH541

SDSU

GPA 3.64

### View Full Document

## 27

## 0

## Popular in Course

## Popular in Mathematics (M)

This 177 page Class Notes was uploaded by Miss Gladys Lubowitz on Tuesday October 20, 2015. The Class Notes belongs to MATH541 at San Diego State University taught by Staff in Fall. Since its upload, it has received 27 views. For similar materials see /class/225276/math541-san-diego-state-university in Mathematics (M) at San Diego State University.

## Popular in Mathematics (M)

## Reviews for INTRONUMANALYS&COMPUT

### What is Karma?

#### Karma is the currency of StudySoup.

#### You can buy or earn more Karma at anytime and redeem it for class notes, study guides, flashcards, and more!

Date Created: 10/20/15

IJIHQFquot RI Differentiation 39i 39 39 na 39 1 Numerical Integration Onealawe quotmerit s i in mic N M3 Iihw t rr di iimiaggts wmm Peter Blomgren blomgren peter gmail com Department of Mathematics and Statistics Dynamicai Systems Grou Computationai Sciences Research Center San Diego State University San Diego CA 9218277720 ht terminus5rjsuedu Fall 2009 Wiemmm Peter Blom F Numeric Outline 0 Numerical Differentiation 0 Ideas and Fundamental Tools 0 Moving Along e Richardson39s Extrapolation 0 A Nice Piece of Algebra Magic 0 Homework 6 Preliminary Version 9 Numerical Integration Quadrature O The Why and Introduction 0 Trapezoidal 84 Simpson39s Rules 0 NewtonCotes Formulas 0 Homework 6 Final Version Extrapolalio Numerical ferenli on Ideas and Fundamental Tools 3 39 m m i n Numerical In nadratui39e39 Numerical Differentiation The Big Picture The goal of numerical differentiation is to compute an accurate approximation to the derivatives of a function Given measurements 0 of the underlying function fx at the node values Xi0 our task is to estimate f x and later higher derivatives in the same nodes The strategy Fit a polynomial to a cleverly selected subset of the nodes and use the derivative of that polynomial as the approximation of the derivative Peter Blomgren Iardson39s Extrapo Num eri cal mm Numerical In DWng mu Eiiummmmn flaylb Ni ii me Denith assa Nimitz i The derivative of f at X0 is fXO 7 fX039 The obvious approximation is to fiX h small and compute fXO fX0 7 fX0 Cancellation and roundoff errors For small values of h fx0 h z fx0 so the difference may have very few significant digits in finite precision arithmetic sma er 7 not necessar be r ruw39nei39ically hi Peter Blomgren l Num eri cal erenlialion rap 1 Numerical I on QHadramre Mam m Iaine In the discussion on Numerical Differentiation and later Integration we will rely on our old friend nemesis the Taylor expansions Theowm Tagibis Wren W l Suppose f E C a7 b f 13 on 37 b andxo 6 37 b Then VX E a b 35X E minxox7 maXX0X With fx Pnx l Rnx Where pnX iWW my Rnx fintlljfgllltxi X0n1V k0 Pnx is the Taylor polynomial of degree n and Rnx is the remainder term truncation error Peter Blomgren Ideas and Fundamental Tools 4 39 Ricllal Numerical Inter Main Tools for Numerical Differentiation Our second tool for building Differentiation and Integration schemes are the Lagrange Coefficients Xin LnkX j0k Xk 7 X1 Recall Lnkx is the nth degree polynomial which is 1 in Xk and O in the other nodes 97 j 7 k Previously we have used the family Ln0x Ln1x Lnnx to build the Lagrange interpolating polynomial A good tool for discussing polynomial behavior but not necessarily for computing polynomial values cf Newton39s divided differences Now lets combine our tools and look at differentiation m Peter Blomgren ichardson39s Extrapolalio Numerical Di enlialion pc 1 Numerical Int auo Quadrature Getting an Error Estimate Taylor Expansion fX h a i 2 hfXo h72f 5x 7 2 fXO 4500 If f x is bounded ie if Xi lt M V500 E X07X0 h then we have Mihi X0 h 7 axo with an error less than N f X0 quot h 7 This is the approximation error RoundofF error N 6mm 10 16 not taken into account ichardson 39s Extrapolalio Peter Blomgrei Numerical erenlialion rap 391 Numerical I on CJnadraturel Using Higher Degree Polynomials to get Better Accuracy Suppose X07X17 7Xn are distinct points in an interval I and f E C 1I we can write quot7 X 7 X n ax 2 WM i W Wax l Lagrange lnterp Poly Em Term Formal differentiation of this expression gives n d Hquot X 7 Xk 7 i R70 f x a gamma dx i 1 n 1 fltn1gtsxl Note When we evaluate f xj at the node points the last term gives no contribution we don39t have to worry about it f 1 x Peter Blomgren ichardson39s Extrapolali Num Ricllai Numerical Inter Exercising the Product Rule for Differentiation ii l erixm mX7X1XX2X7XnX7XoXX2XXn 1 n n n12 H X Xk 10 k0kj Now if we let X X for some particular value of 6 only the product which skips that value ofj Z is non zero eg 1 n n 1 n 7 2 H X 7 Xk 7 H X 7 Xk n 1 F0 k0kj XXZ n 1 ak m Peter Blomgren ichardson39s Extrapolalio Ricllai Numerical Inter The n l 1 point formula for approximating f X Putting it all together yields what is known as the n l 1 point formula for approximating f xj X39 i X X39 i nJrlME l n X397X l fininiankuH n1 Lilk nj llxll kc Note The formula is most useful when the node points are equally spaced it can be computed once and stored ie Xk X0 kh Now we have to compute the derivatives of the Lagrange coefficients ie Lnkx We can no longer dodge this taskl m Peter Blomgren ichardson39s Extrapolalio Numerical Di enlialion pc 1 Numerical lnL auo Quadrature Example 3 point Formulas llll Building blocks 7 X7X1X7X2 X 7 2X7X17X2 LZ lel 7 X0 X1X0 X27 LZ Ol 7 X0 X1X0 X2 W WM 7 2X7X07X2 L21X 7 X1 7 X0X1 7 X2 7 X7X0X7X1 X 7 2X7X07X1 LN X2 7x1 L x2 XOXXZ 7x 2XJ397X17X2 2XJ397X07X2 flxo lX0 X1X0 X2l l flxl lX1 X0X1 X2l 2X1 X0 X1 1435 2 l le lX2 X0X2 Xlll l Formulas f Xj 6 X1 Xk 9 J Peter Blomgrel Numerical erenlialion rap 391 Numerical I loll Onadrat y Example 3 point Formulas lllll When the points are equally spaced 1 h2 Wm kmewmramHWn 1 h2 3 7 X1 4Xo fgtlt2l g7 51 Hm mmimmmmwgmm Peter Blomgren ichardson39s Extrapolali Numerical erenlialion rap 391 Numerical I loll Onadramrel Example 3 point Formulas lllll When the points are equally spaced 1 h2 mo 73am W1 7 MM 3150 m1 i 4 m1 7 5243151 27 6 1 h2 1 WM 7 W1 3fx21 f352 Use X0 as the reference point Xk X0 kh 1 h2 fXo l 3fX0 Xe h Xo 27 f350 1 h2 3 f x0 h g 7fxo fx0 27 7 K1 51 x0 2h 2quotx0 7 4m h 32quotx0 2h lnggz Peter Blomgren ichardson39s Extrapolali Numerical erenlialion rap 391 Numerical I loll CJnadrat i Example 3 point Formulas IIIIII mo 5 73 4m h e axe 2h1 ngso me h g Hm axe 2017 ngl x0 2h 2quotx0 7 4m h 3m 2h giakgz Make the substitution X0 h A X5 in the second equation Peter Blomgren ichardson39s Extrapolali Numerical erenlialion rap 391 Numerical I loll CJnadrat i Example 3 point Formulas IIIIII mo 7 5 73 4m h 7 axe 2h1 gm ma 7 7mg 7 h 7 x 7 h17flt3gt51 2 7 2h 7 g M 7 4m h 3m 2h1 QZWZ After the substitution X0 h 7gt X5 in the second equation Next make the substitution X0 27 7gt X0 in the third equation Peter Blomgren ichardson39s Extrapolali Numerical erenlialion rap 391 Numerical I loll CJnadrat i Example 3 point Formulas IIIIII f xo 7 g 73 4fxo h 7 axe 7 MM gm ma 7 2177 7mg 7 h 7 x 7 I017 flt3gt51 fxg 7 i M 7 2h 7 4m 7 h 7 3mm 7 h Ms 2h 2 After the substitution X0 h 7gt X5 in the second equation and X0 27 7gt x5r in the third equation Peter Blomgren ichardson39s Extrapolali Example 3 point Formulas IIIIII f xo 7 g 73 4fxo h 7 axe 7 MM g wso f 7 1 2 3 x0 7 E 7fx0 7 h fx0 h 7 gf 2 mt 7 g M 7 2h 7 421x 7 h 7 3am 7 gum After the substitution X0 h 7gt X3 in the second equation and X0 27 7gt x5r in the third equation Nute1 The third equation can be obtained from the first one by setting h 7i 7h Nute2 The error is smallest in the second equation Nute3 The second equation is a twosided approximation the first and third one sided approximations Nute4 We can drop the superscripts m l r a u l Moving Along F Numeric 3 point Formulas Illustration Centered Formula r a u 1 Moving Along Forward Formula 3 point Formulas Illustration Backward Formula f x0 7 2717fx0 7 2h 7 4fx0 7 h 3fx0 lgfmgg 5351 Nun 39 Rich Numerical Im 5 5 point Formulas If we want even better approximations we can go to 4 point 5 point 6 point etc formulas The most accurate smallest error term 5 point formula is fXO W f55 Sometimes eg for end point approximations like the clamped splines we need one sided formulas fXO 725fxo48fxoh736fxfrh2h16fxo3h73fxo4h 1 154 5351 5 point Formulas Illustration Centered Formula fXO W f55 Num eri cal art 1 Numerical In Higher Order Derivatives We can derive approximations for higher order derivatives in the same way Fit a kth degree polynomial to a cluster of points Xi7 fX1 and compute the appropriate derivative of the polynomial in the point of interest The standard centered approximation of the second derivative is given by fx0 fX0 h 7 2250 fX0 7 h O 72 Peter Blomgren Iardson39s Extrapo Num eri cal ferenli on quot mm C pct Numerical In nadratul We now have the tools to build high order accurate approximations to the derivative We will use these tools and similar techniques in building integration schemes in the following lectures Also these approximations are the backbone of finite difference methods for numerical solution of differential equations see Math 542 and Math 693b Next we develop a general tool for combining low order accurate approximations to derivatives integrals anything almost in order to hierarchically constructing higher order approximations Peter Blomgren D 9quot quot quot39 A Nice Piece of Alkehra Mamiequot ii Ii u v I n Ric Numerical InLe lloll Quadrature Richardson39s Extrapolation What it is A general method for generating high accuracy results using low order formulas Extrapolalio n Du enlmlion Ric 39s Extrapolation Numerical Inle lloll Quadratier A Nice Piece of Alkehra Mamiequot xquot l u v H Richardson39s Extrapolation What it is A general method for generating high accuracy results using low order formulas Applicable when The approximation technique has an error term of predictable form eg 00 M 7 VJh Z Ekhquot kj where M is the unknown value we are trying to approximate and the approximation which has an error 9hj Extrapolalio z Richardso Numerical In on Quadrature Richardson39s Extrapolation What it is A general method for generating high accuracy results using low order formulas Applicable when The approximation technique has an error term of predictable form eg 00 M 7 VJh Z Ekhquot kj where M is the unknown value we are trying to approximate and the approximation which has an error 9hj Procedure Use two approximations of the same order but with different h eg MM and Combine the two approximations in such a way that the error terms of order h cancel m Peter Blomgren Iardson39s Extrapo h 1 DH enlmlion c 39s Extrapolation A Nice Piece of Alkehra Mamiequot L a v i q Ri Numerical Inle lloll Quadrature Building High Accuracy Approximations Consider two first order approximations to M M 7 MM Z Ekhquot k1 m hk M7 May2 E 5k k1 Extrapolalio s 7 no Quadrature Building High Accuracy Approximations Consider two first order approximations to M M 7 MM 7 Z Ekhquot k1 and Do hk M 7 May2 7 Z Ek27i k1 If we let N2h 2N1h2 7 N1h then M 7 MM 7 2515 7 E1hn Emhk 2 k 7 7 0 1 52Eklt2fl71gti Hence MM is now a second order approximation to M where Peter Blomgren z Ri chardso Numerical In ion Quadrature Building High Accuracy Approximations We can play the game again and combine N2h with N2h2 to get a thirdorder accurate approximation etc 4N2h2 7 MM 3 N307 N2h2 N2h237 N207 Peter Blomgren Iardson39s Extrapo wenlmlion Extrapolation on Quadrature Building High Accuracy Approximations We can play the game again and combine N2h with N2h2 to get a thirdorder accurate approximation etc 4N2h2 7 MM 7 3 N307 N2h2 N2h237 N207 N4h N3h2 N3h27 N3 N5h N4h2 N4h2241N4h In general combining two jth order approximations to get a j 1st order approximation 550 Peter Blomgren ichardson39s Extrapolali h 1 DH enlmlion c 39s Extrapolation A Nice Piece of Alkehra Mamiequot L a v i q Ri Numerical Inle lloll Quadratier Building High Accuracy Approximations Let39s derive the general update formula Given M 7 MM EJhi 9 W1 h MiNjh2 Ej 0h Extrapolalio z Ri chardso Numerical In ion Quadrature Building High Accuracy Approximations Let39s derive the general update formula Given M 7 MM EJhi 9 W1 hi M IVJh2 E1 0W We let Nj1h aj VjW Bijlh2 Peter Blomgren Iardson39s Extrapo Numerical Differentiation A 39 xlrapo alion I ma Numerical Integration Onadratui39el Building High Accuracy Approximations Let39s derive the general update formula Given M N107 Ejhj o W M 7 AIM2 0 W We let Nj1h aj leh Bijlh2 However if we want Nj1h to approximate M we must have 09 l B 1 Therefore hi M Nj1h ajEjh 1 i 65951 0 0 Peter Blomgrei icliardson 39s Extrapolalio h 1 DH enlmlion c 39s Extrapolation A Nice Piece of Alkehra Mamiequot L a v i q Ri Numerical Inle lloll Quadratier Building High Accuracy Approximations Now M Nj1h Ejhj a1 1 i 659 O W We want to select 09 so that the expression in the bracket is zero Extrapolalio z Extrapolation ion madmane z Richardso Numerical In Building High Accuracy Approximations M Nj1h Ejhj a1 1 i 659 O W We want to select 09 so that the expression in the bracket is zero This gives 7 71 7 2i 721711 1 Wm I QJ H W t Peter Blomgren Iardson39s Extrapo I lumer Ricla Numerical Integration Onadramrei A Building High Accuracy Approximations M Nj1h Ejhj a1 1 i 659 O W We want to select 09 so that the expression in the bracket is zero Thisgives 7 71 7 2i 7217117 1 al 7 I QJ E W H Therefore Min2 7 W Nj1h Vjh2 2 71 Peter Blomgrei ichardson39s Extrapolalio I Jum al D Malian Richardso Extrapolation Numerical Integzauon Quadrature A Building High Accuracy Approximations The following table illustrates how we can use Richardson39s extrapolation to build a 5th order approximation using five 1st order approximations 0h 0012 0013 0039 0015 MW N1h2 N207 N1lh4 Nzlh2 Nslh N1h8 N2h4 N3h2 N407 N1lh16 Nzlh8 Nslh ll N4lh2 N507 T Measurements T Extrapolatiuns T Peter Blomgren Richardson39s Extrapolalio 1 1 DH enlmlion c 39s Extrapolation m A Nice Piece of Alkehra Mamiequot xquot I u v 1 Numerical Ime mquot Omenazure Example cf slide14 and slide2l The centered difference formula approximating f x0 can be expressed 7 00 Xh fxih 72 m 4 fi f mom 3 N2h error term Extrapolalio z Ri chardso Numerical In ion Quadrature Example cf slide14 and slide2l The centered difference formula approximating f x0 can be expressed 7 00 Xh fxih 72 m 4 fi f mom 3 N2h error term In order to eliminate the 72 part of the error we let our new approximation be N307 Vzh2 Peter Blomgren iardson39s Extrapo A a numz Driltnnlmlion A Nice Piece of m xlrapoa loquot M L n Numerical InL on cuminmm Example cf slide14 and slide2l The centered difference formula approximating f x0 can be expressed 7 X i h 7 X 7 h 72 m 4 7 2h 76f snow N2h error term fXo In order to eliminate the 72 part of the error we let our new approximation be N307 Vzh2 f h 2 if 7h 2 f h if 7h fxh27fX7h2 J U W 2h 2X iw X l 2h2 3 8fxh278fX7h2 7 fxh7fx7h 5h 5h fx7 h 7 8fx7 h2 8fx h2 7 fx m ichardson 39s Extrapolation Peter Blomgren I Jume I Differentiation 39 39 xlrapo alion A Nice Piece of 39c ii i1 1 Numerical Int loll Onadratui39ei Example cf slide14 and slide2l The centered difference formula approximating f x0 can be expressed fxh7fxih h2 7 7 i f X0 2h 6 N2h error term 7quot 5 0074 In order to eliminate the 72 part of the error we let our new approximation be N3h Mai2H f xh if x7 7 f X2h if x72 N32h fXh2 hfX h 2h 3 Ah srxhesrxeh 7 rx2herxezh ah ah fx 7 2h 7 8fx e h 8fx h 7 fx 2hi 535 Peter Blomgren ichardson 39s Extrapolation I Jum al D lmlion Richardso Extrapolation L A Numerical I g39a Ion Omarazure Example fx X f x 2X X2ex f 2 8e2 59112 X fX 170 158197 f 21 1f 20 180 196009 64566 Fwd Difference 2pt 190 241361 W 59384 Ctr Difference 3pt 200 295562 39 210 360128 60201 Ctr Difference 2 20 436811 V 230 527634 4593847602013 59111 Richardson f 18 78f 19 8f 21 if 22 W 59111 5pt Peter Blomgren Richardson39s Extrapolalio I Jumeri Dit39 nlmlion 39 Iards 39s Extrapolation i RIC quot 39 r V Numerical Inte Ion Quadrature H quot k 5 39 P quotquotquotquot V 39 quot Wrap up Homework 6 Due Friday 116 2009 12 noon We are going to use Richardson extrapolation in combination with some of the simpler integration schemes we will develop in order to generate general schemes for numerically computing integrals up to high order Note In order to use Richardson extrapolation we must know the form of the error hence finding error terms in our approximations turns out to have a very practical use Part 1 BF415 BF4127 BF429 Extrapolalio The Whyquot and Introduction r 4 i Riel un Numerical lnle e Integration Introduction The quotWhyquot After taking calculus I thought I could differentiate andor integrate every function Extrapolalio The Whyquot and Introduction r 4 i Riel un Numerical lnle e Integration Introduction The quotWhyquot After taking calculus I thought I could differentiate andor integrate every function Then came physics mechanical engineering etc Extrapolalio The Whyquot and Introduction l 4 r 39icm 1pclatlun Numerical Int Quadrature Integration Introduction The quotWhyquot After taking calculus I thought I could differentiate andor integrate every function Then came physics mechanical engineering etc The need for numerical integration was painfully obvious Extrapolalio The Whyquot and Introduction ii 39icm 1pclatlun Numerical Int Quadrature Integration Introduction The quotWhyquot After taking calculus I thought I could differentiate andor integrate every function Then came physics mechanical engineering etc The need for numerical integration was painfully obvious Sometimes most of the time the anti derivative is not available in closed form fxdx Fx 3 WM AntirDerivative Extrapolalio Riel Numerical lnle Numerical Quadrature The basic idea is to replace integration by clever summation b n fx dx a Z Eff 7 a i0 wherea X0ltX1ltltXnltb fx The coefficients a and the nodes X are to be selected Extrapolalio The Whyquot and Introduction ii i 39icm 1pclatlun Numerical Int Quadrature Building Integration iemes with Lagrange Polynomi Given the nodes X07X17 7Xn we can use the Lagrange interpolating polynomial n n1 X n Pnx Z Ln X7 with error Enx W HX7X39 i0 39 0 to obtain firmly Abpngwx abEnxdx W W The Approximation The Error Estimate Extrapolalio F39Icl Numerical Inle nx dx 1 153 0 a i Extrapolalio The Whyquot and Introduction with error given by b b n 1 X n Efa Enxdxa ng gdx and Introduction I lum Ricliai Numerical Inle Identifying the Coefficients b b n n b n Pnxdx Z LnxdxZf LnxdxZfa a a 0 0 x a I i0 3 Hence we write b n fx dx x 2 315 3 i0 with error given by b b n 1 X n Era Enxdxa ng ndx Note Can we change the order of integration f and summation E as we did above7 In this case where we are integrating a polynomial over a finite interval it im OK For technical details see a class on real analysis eg Math 5348 i t ha Extrapola Numerical Ime Quadrature limb 3 39pe u le Let a X0 lt X1 b and use the linear interpolating polynomial P1x f0 1 X0X1 X1 X0 I lum lmlion 39ic trap t39un Numerical lnlegrauon Quadrature Example 1 Trapezoidal Rule Then b x1 7 7 for w W 9 x0 XOTXI XI XO f gxx 7 x0x 7 x1 dx The error term use the Weighted Mean Value Theorem XX1f5XXX0X 7x1 dx f X1 Xixollx 7x1 dx 0 3 X1 3 FE Xi mxz X0X1X2 if5 3 2 X0 6 wherehx17X0bia ichardson 39s Extrapolali Peter Blomgren Di Rich pcIatiun Numerical Im anon Quadrature Example 1 Trapezoidal Rule IIIIII Hence b X 7 X12 X 7 X02 X1 ha a x dx 7 f0 2X0 M f1 2 0on 7 Ef E 7 mea m ab fxdx h 7 lgfKE7 h b7 a Peter Blomgrel ichardson39s Extrapolalio I Jumer Di Alion 39 ion re 39ch pcIRF Numerical Im anon Quadram Simpson39s Rule suboptimal error bound Let X0 3 X1 521 X2 b let h and use the quadratic interpolating polynomial x0 7 x1x0 7 X2 1X1 X0X1 X2 X X0X X1 X2 X0X2 X1 X2 X XOXX 2X1 X2 f3 x dx X0 fXdX 2 fxoxi X1X ix X1 X7 Xox 7x2 XZ dx ab x dx h 0h4f35 Peter Blomgrel ichardson39s Extrapolalio I I um whim Irapezoulal fa Smut5 0quot Numerical Int Quadrature Example 2b Simpson39s Rule optimal error bound The optimal error bound for Simpson39s rule can be obtained by Taylor expanding fx about the mid point X1 m 4 x 7 rm Home 7x1 ya 7 x1 fthx 7 x113 We 7 x1 Extrapolalio I Jum 39 Ricllai Numerical lnle Example 2b Simpson39s Rule optimal error bound The optimal error bound for Simpson39s rule can be obtained by Taylor expanding fx about the mid point X1 4 x 7 x11 new 7 x1 L x 7 x1 L 1 x 7 x113 fl w J x 7 x1 2 6 24 Then formally integrating this expression H W A b mm wow 7x1 f E x 7 x1 if 6 x 7 x113 f if x 7x1 ax Ex clat39 39 Riclm p Numerical Inlegr on Quadrature Example 2b Simpson39s Rule optimal error bound The optimal error bound for Simpson39s rule can be obtained by Taylor expanding fx about the mid point X1 W A ftxfX1fX1xix1 f xljtxixnh thximh Sylvia Then formally integrating this expression W A b mm rrxntx 7x1 yuan fthxim f ifwrx 7x114 ax After use of the weighted mean value theorem and ththhe approximation f x1 xt 7 2fx1 l fx2 7 f4 and a whole lot of algebra see BF pp 189 190 we end up with I Jumz mimequot 39lcml39t pct atiun Numerical Integration Quadrature The optimal error bound for Simpson39s rule can be obtained by Taylor expanding fx about the mid point X1 4 x rm new 7x1 uiqu we 7x113 f Drxixu 2 6 24 Then formally integrating this expression W A blrx1rx1gtltxixn gxljtxixuh if xljtxixuh gflxjjrxixu l ax After use of the weighted mean value theorem and the the approximation f x1 xi 7 2fx1 l fX2 7 f4 and a whole lot of algebra see BF pp 189 190 we end up with f x dx h l W t 3m ml 7 guns m Peter Blomgren Iardson39s Extrapo F39Icl Numerical Inle Example 2 Simpson39s n 9 Rule Abfxdxh fX0 4fX1 fX2 3 0015f 5 fX px 7 Simpson s Rule Extrapolalio I Jum 39 Ricllai Numerical lnle Integration Examples fX a b f fX dX Tra pezoidal Error Simpson Error X 0 1 12 0 5 0 0 5 0 X2 0 1 13 0 5 0 16667 0 33333 0 X3 0 1 14 0 5 0 25000 0 25000 0 Xil 0 1 15 0 5 0 30000 0 20833 0 0083333 ex 0 1 e 7 1 1 8591 0 14086 1 7189 0 0005793 The Trapezoidal rule gives exact solutions for linear functions The error terms contains a second derivative Simpson39s rule gives exact solutions for polynomials of degree less than 4 The error term contains a fourth derivative Thrzmmmik39 swam 5 l mlw lila lmbh Dawes eff kd uwmgz The Degree of Accuracy or precision of a quadrature formula is the largest positive integer n such that the formula is exact for Xk Vk01n With this definition Scheme Degree of Accuracy Tra pezoidal 1 Simpson39s 3 Trapezoidal and Simpson39s are examples of a class of methods known as Newton Cotes formulas Nume al le R E auadla me NewtonCotes Formulas Two Types Closed The 7 1 point closed NCF uses nodes x X0 ih 01 l n wherexo ax band h bian It is called closed since the endpoints are included as nodes Lune enr Iztlon mu 5 Exuapolanon 6 memaquot Numemal In non umdmnue NewtonCotes Formulas Two Types Open The n 1 point open NCF uses nodes x X0 ih I017nwherehbian2and X0ah Xn b7 h We label x71 a Xn1 b Xub X I Jum 39 Ricllai Numerical lnle Closed Newton Cotes Form u las The approximation is b n fx dx x Z afX7 a i0 where X quot Xixj39 X wand39 Note The Lagrange polynomial Lnx models a function which takes the value 0 at all Xj j i and 1 at X Hence the coefficient a captures the integral of a function which is 1 in X and zero in the other node points m e rllhnuii gt m Suppose that 2270 afx denotes the n 1 point closed Newton Cotes formula With X0 a Xn an b 7 an Then there existsE 6 a7 b for which Ab fxdx 7 germ rm 7 1 t 7 ndt ifn is even and f E C 2a7 b and b n hn2fn1 n f d 7 f 7 7 d a xx gs x n1l 0tt 1 t nt ifn is odd and f E C 1a7 b Note that when n is an even integer the degree of precision is n 1 When n is odd the degree of precision is only n Peter Blo Closed Newton Cotes Formulas Examples n 2 Simpson s Rule h h5 g laxo W1 ml 7 WW n 3 Simpson s g Rule i8 lam 3fx1 3fx2 Ml 27 5 n 4 Boole s Rule 72h 7f l 32f 12f l 32f l 7f 7 78m f6 45 l X0 X1 X2 X3 04 945 5 5351 Peter Blomgren Richardson39s Extrapolalio n NewlonyColes Formulas e F39Icl Numerical Inle Open Newton Cotes Form u Ias The approximation is b Xn1 a xdxX fxdxzEganHm7 71 where Xn1 Xn 7 a LniX dX H dx x71 X0 j0 I J 1i Extrapolalio e rllhnuii gt m exists 5 6 a7 b for Which Suppose that 2270 afx denotes the n 1 point open Newton Cotes formula With X71 a Xn1 b and h b 7 an 2 Then there b 7 n hn3fn2 n1 2 7 7 a fxdx 7 germ n W A t t 1 t ndt ifn is even and f E C 2a7 b and b n hn2fn1 n1 f d f s d a xx Ea x n1l 71 tt 1 t n t ifn is odd and f E C 1a7 b Note that when n is an even integer the degree of precision is n 1 When n is odd the degree of precision is only n Peter Blo h3 n 0 2W0 gf KE n 1 3 fx0 fx1 3Thsf n 2 2w 7 fx12fx2 fst 5h 95h5 n 3 llfx0 fx1 fx211fx3 m1405 53g Peter Blomgren Richardson39s Extrapolalio Rid n Numerical lute e NewlonyColes Formulas Divide and Conquer Say you want to compute 0100 fx dx Is it a Good IdeaTIVI to directly apply your favorite Newton Cotes formula to this integral Extrapolalio Rid n Numerical lute e NewlonyColes Formulas Divide and Conquer Say you want to compute 0100 fx dx Is it a Good IdeaTIVI to directly apply your favorite Newton Cotes formula to this integral No Extrapolalio nlmlion 39C pct on Numerical Integration Quadrature Divide and Conquer Say you want to compute 0100 fx dx Is it a Good IdeaTIVI to directly apply your favorite Newton Cotes formula to this integral No With the closed 5 point NCF we have h 25 and h590 105 so even with a bound on f6 the error will be large Peter Blomgren Iardson39s Extrapo i l NewlonyCol Forum 4 y Divide and Conquer Say you want to compute 0100 fx dx Is it a Good IdeaTIVI to directly apply your favorite Newton Cotes formula to this integral No With the closed 5 point NCF we have h 25 and h590 105 so even with a bound on f6 the error will be large Better Apply the closed 5 point NCF to the integrals 4i1 fxdx7 i0124 4i then sum Composite Numerical Integrationquot next time m Peter Blomgren mmlion an 39IC pct Numerical Integration Quadrature Ho we 5 7 Fin Version Homework 6 Due Friday 1162009 12 noon Part 1 BF415 BF4127 BF429 Part 2 BF431ab BF435ab Peter Blomgren Iardson39s Extrapo Numerical Analysis and Computing Lecture Notes 11 Approximation Theory Least Squares Approximation amp Orthogonal Polynomials Peter Blomgren ltblomgrenpeter gmail comgt Department of Mathematics and Statistics Dynamical Systems roup Computational Science Rsearch Center San Diego State University San Diego CA 921827720 httpterminussdsuedu Fall 2009 Least Squares amp Orthogonal Polynomials Picking Up Where We Left Off 539 129 Discrete Least Squares l Outline 0 Discrete Least Squares Approximation 0 Quick Review 0 Example 6 Continuous Least Squares Approximation 0 Introduction Normal Equations 0 Matrix Properties 9 Orthogonal Polynomials 0 Linear Independence Weight Functions Inner Products 0 Least Squares Redux 0 Orthogonal Functions 535 129 Least Squares amp Orthogonal Polynomials Picking Up Where We Left Off Discrete Least Squares ll The Idea and f 1f1 Given the data set x where x X0X1 XnT fT we want to fit a simple model usually a low degree polynomial pmX to this data We seek the polynomial of degree m which minimizes the residual r0 Z leXi fXil2 0 Least Squares amp Orthogonal Polynomials m 329 We find the polynomial by differentiating the sum with respect to the coefficients of pmX If we are fitting a fourth degree polynomial p4X a0 81X 82X2 33X3 a4X4 we must compute the partial derivatives wrt a0a1 a2a3 34 In order to achieve a minimum we must set all these partial derivatives to zero In this case we get 5 equations for the 5 unknowns the system is known as the normal equations 535 429 Least Squares amp Orthogonal Polynomials The Normal Equations Second Derivation Discrete Least Squares A Simple Powerful Method Last time we showed that the normal equations can be found with purely s a Linear Algebra argument Given the data points and the model here glven the data set x70 Where X X07X17 39 39 39 7X and p400 we write down the overdetermined system f 1 071 177fn we can quickly find the best polynomial fit for any specified polynomial degree 30 a1Xo azx 33x8 a4X6 fo 2 3 4 7 30 T alxl T W12 T 33X13 T 34X 7 1 Notation Let x1 be the vector X yxiy 9d 30 a1X2 32X2 33X2 a4X2 1 2 Eg to compute the best fitting polynomial of degree 4 a0 a1Xn 32an 33X 34X fn p4X a0 a1X 82X2 33X3 a4X4 define We can write this as a matrix vector problem xa f l l l l l where the Vandermonde matrix X is tall and skinny By multiplying X 1 i i2 i3 i4 7 and compute 5 XTX 1XTf both the left and right hand sides by XT the transpose of X we get a l l Not like this ll ii 39 square system we recover the normal equations See matth T T X Xa X f m m Least Squares amp Orthogonal Polynomials 7 529 Least Squares amp Orthogonal Polynomials 7 629 Example Fitting pX i 071727374 Models Introduction Defining the Problem Up until now Discrete Least Squares Approximation applied to a collection of data Figure We reVISIt the ex ample from last time and fit polynomials up to degree four to the given data The figure shows the best p0x p1x and p2x fits Now Least Squares Approximation of Functions We consider problems of this type Below the errors give us dues when to stop Underiymgfumonfxgt1WWW Suppose f E C a7 b and we have the class 73 fggdma which is the set of a polynomials of degree at most ggga j s tm n Find the pX E 7 which minimizes Model Sumofsquareserror I p0 x 2054 Table Clearly in this example there isvery b p1 x 5238 little to gain in terms of the leastsquares pX 7 fX2 dX39 p2 X error by going beyond lst or 2nd degree a P3 X P4 x 5179 m de39539 m Least Squares amp Orthogonal Polynomials 7 729 Least Squares amp Orthogonal Polynomials 7 329 Finding the Normal Equations lf pX E 7 we write pX 20 akxl The sum of squares error as function of the coefficients 5aoa1an is b n 2 E5 akxk 7 fX dx a 10 Differentiating with respect to aj j O1 n gives 355 055 7 b b QZak X1kdx72 XJfXdX 68 k0 a a The Normal Equations The n l 1 by n l 1 system of equations is n b b Zak XJdeX X fxdx7 j01n k0 a 3 Some notation let where gx is the complex conjugate of gx everything we do in this class is real so it has no effect This is known as an inner product on the interval 37 b But if you At the minimum we require a 0 which gives us a system of want you can think of it as a notational shorthand for the integral a equations for the coefficients at the normal equations 539 m Least Squares 32 Orthogonal Polynomials 7 929 Least Squares 32 Orthogonal Polynomials 7 1029 The Normal Equations lnner Product Notation I More Notation Defining the Discrete lnner Product In inner product notation our normal equations n b b If we have two vectors a lekdx XijdX 3901 gka a l 1 7 VV07V177VN w w0 W1WN become n k we can define the discrete inner product 2w x gt ltxa x J 01 k0 N 7 Recall the Discrete Normal Equations V7 W T Z V W39 7 i0 n N N Z ak X k Exffh j 0717 m where agaln WIquot ls the complex conjugate of w k0 0 0 Equipped with this notation we revisit the Normal Equations Hmmm looks quite similar m 5353 Least Squares 32 Orthogonal Polynomials 7 1129 Least Squares 32 Orthogonal Polynomials 7 1229 The Normal Equations Inner Product Notation ll Discrete Normal Equations in 2 Notation ilarixllklixlfn 101n i0 k0 i0 Discrete Normal Equations in Inner Product Notation H Zaklxjjkl if fl j01n 170 Continuous Normal Equations in Inner Product Notation H Zarltx xkgt ltx fxgt j 01n k0 Hey It s really the same problem The only thing that changed is the inner product we went from summation to integration Least Squares amp Orthogonal Polynomials The Condition Number of a Matrix The condition number of a matrix is the ratio of the largest eigenvalue and the smallest eigenvalue If A is an n gtlt n matrix and its eigenvalues are lAll l2l lAn then the condition number is lnl l1l cond A The condition number is one important factor determining the growth of the numerical roundoff error in a computation m 7 1329 We can interpret the condition number as a separation of scales If we compute with sixteen digits of precision emach x 10 the best we can expect from our computations even if we do everything right is accuracy condA emach Least Squares amp Orthogonal Polynomials 5351 7 1529 Normal Equations for the Continuous Problem Matrices The bottom line is that the polynomial pX that minimizes 2 px 7 fx dx is given by the solution of the linear system X3 where XiJ ltXi7 Xjgt7 bi ltXi7 Xgt39 ij1 7 aij1 i j 1 A matrix with these entries is known as a Hilbert Matrix classical examples for demonstrating how numerical solutions run into difficulties due to propagation of roundoff errors We can compute xi x1 explicitly m 7 1429 We need some new language and tools Least Squares amp Orthogonal Polynomials The Condition Number for Our Example lu lu Condition Number 3 35 uadraue Best Fit Polynomial Degree Figure Ponder yet again the example of fitting polynomials to the data RIGHT The plot on the left shows the condition numbers for 0th through 4th degree polynomial problems Note that for the 5 by 5 system Hilbert matrix corresponding to the 4th degree problem the condition number is already N 10 Least Squares amp Orthogonal Polynomials 7 1629 Linearly Independent Functions Linearly Independent Functions Polynomials Definition Linearly Independent Functions The set of functions 0X7ltlgt1x7 7 ltlgtX is said to be linearly independent on a7 b if whenever N Z cltlgtX O7 VX 6 a7 b7 10 then cO7 Vi0717 linearly dependent 7 n Otherwise the set is said to be Theorem If jX is a polynomial of degree j then the set ltIgtOX7 ltIgt1X7 7 ltlgtX is linearly independent on any interval a7 b Theorem If jX is a polynomial of degree j then the set ltIgtOX7 ltlgt1X7 7ltlgtnX is linearly independent on any interval a7 b Proof Suppose c E R i 0717 7n and PX 20 cltlgtX O VX 6 a7 b Since PX vanishes on a7 b it must be the zero polynomial ie the coefficients of all the powers of X must be zero In particular the coefficient of X is zero 5 cn 0 hence PX 201C39 ix By repeating the same argument we find c O i 0717 7n 5 0X7ltlgt1x7 7 ltlgtX is linearly independent E E Least Squares amp Orthogonal Polynomials 7 1729 Least Squares amp Orthogonal Polynomials 7 1329 More Definitions and Notation Weight Function Weight Function Inner Product Theorem A weight function will allow us to assign different degrees of lfltlgtoX7 ltlgt1X7 7 ltlgtnX is a collection Oflneary independent importance to different parts of the interval Eg with polynomials in 73 then any pX E 73 can be written uniquey as WX 1i1 7 X2 on 7171 we are assigning more weight away a linear combination ofltlgt0X7 ltlgt1X7 7ltlgtnX from the center of the interval Inner Product with a weight function Definition Weight Function b An integrable function W is called a weight function on the interval ltfX7 gXgtWX fXgX WXdX a7 b if WX 2 O VX 6 a7 b but WX s O on any subintervaI of a a7 b pl m Least Squares amp Orthogonal Polynomials 7 1929 Least Squares amp Orthogonal Polynomials 7 2029 Revisiting Least Squares Approximation with New Notation The Normal Equations Revisited for the nth Time Suppose 0X lt1gt1X lt1gtX is a set of linearly independent functions on a7 b WX a weight function on a7 b and n f E C b X 5 7 l Zoox dawns W dawns l o 1 n We are now looking for the linear combination k0 n pX Zak kx What has changed k0 Xk 7 kX New basis functions which minimizes the sum of squares error lt0 ogt 7 lt0 ogtWX New inner product b E5 pX 7 fX2 WXdx Q Is he ever going to get to the point Why are we 3 doing this When we differentiate with respect to ak WX is a constant so we are g ihg to select the baSiS fUthths kX 50 that the the system of normal equations can be written 993 normal equat39Ohs are easy to some m Least Squares amp Orthogonal Polynomials 7 2129 Least Squares amp Orthogonal Polynomials 7 2229 Orthogonal Functions The Payoff No Matrix lnversion Needed Theorem Ifltlgt0X lt1gt1X dgtX is a set of orthogonal functions on an Definition Orthogonal Set of Functions interval a7 b with respect to the weight function WX then the 0007 D1007 39 t t 7 nX is said to be an orthogonal set of least squares approximation to fX on a7 b with respect to WX ls functions on a7 b with respect to the weight function WX if PX Z ak kX7 k0 0 when i j lt1gt lt1gt 7 lt X7 1XgtWX aiy When 11 Where for each k O7 17 n If in addition a 1 i 01 n the set is said to be 3k lt kX7 Xgtwx orthonormal lt kX7 kXgtwx39 We can find the coefficients without solving XTXE XTb m Where do we get a set of orthogonal functions 535 Least Squares amp Orthogonal Polynomials 7 2329 Least Squares amp Orthogonal Polynomials 7 2429 Building Orthogonal Sets of Functions The Gram Schmidt Process Example Legendre Polynomials 1 of 2 Theorem Gram Schmidt Orthogonalization The set of Legendre Polynomials PnX is orthogonal on 7171 The set of polynomials ltlgt0X7 ltlgt1X7 7ltlgtX defined in the With respect to the Weight function WX 139 following way is orthogonal on a7 b with respect to wX P0X 17 P1X X 7 b1 0 1 Clgt0x 17 Clgt1x X 7 b139 lgt07 where 1 where 71 X dX ltX lgt0X7 0XgtWX b1 17d 1 lt 0X7 0Xgtwx 7 f71 X for k 2 2 ie P1x x kX X bk k71X Ck k72Xr f711X3 dX f711X2 dX where b21 2 1 137 f71 X2 dX f71 1 d b ltX k71Xr qgtk71Xgtwx C ltX k71Xr k72Xgtwx k mam mixlw k mam r72xgtwx 39 5 P200 7 x2 7 13 550 Least Squares 32 Orthogonal Polynomials 7 2529 Least Squares 32 Orthogonal Polynomials 7 2629 Example Legendre Polynomials 2 of 2 Example Laguerre Polynomials The rSt Six Legendre POlynomlals are The set of Laguerre Polynomials LnX is orthogonal on 07 oo POX 1 with respect to the weight function wX e x P1X X L0X 1 P2X X2 7 13 P3X X373X5 b1 ltX71gte x 1 P4X x4 7 6X27 335 lt1 1gte7x P5X X5 7 10X39 5X21 L100 X 7 1 We encountered the Legendre polynomials in the context of numerical integration It turns out that the roots of the Legendre b2 ltXX 7 ill X 7 1gte x 3 C2 ltXX 7 ill 1gte x 1 polynomials are used as the nodes in Gaussian quadrature ltX 17 X 1gte x lt17 1gte x Now we have the machinery to manufacture Legendre polynomials L200 7 x 7 3x 7 1 7 1 7 x2 7 4x 7 2 of any degree m m Least Squares 32 Orthogonal Polynomials 7 2729 Least Squares 32 Orthogonal Polynomials 7 2329 Homework 8 Due Friday 1242009 1234pm Part 1 BF 818 Is this a good prediction model WhyWhy not BF8 1 12 Part 2 BF8 2 1 m Least Squares amp Orthogonal Polynomials 7 2929 Numerical Analysis and Computing Lecture Notes 04 Solutions of Equations in One Variable Interpolation and Polynomial Approximation Accelerating Convergence Zeros of Polynomials Deflation Muller s Method Lagrange Polynomials Neville s Method Peter Blomgren ltblomgrenpeter gmail comgt Department of Mathematics and Statistics D namlcal Systems roup Computational Science Raearch Center San Diego State University San Diego CA 921827720 httpterminussdsuedu 4 Solutions of Equations in One Variable Introduction 539 7 157 Outline o Accelerating Convergence 0 Review 0 Aitken s A2 Method 0 Steffensen s Method 9 Zeros of Polynomials 0 Fundamentals 0 Horner s Method 9 Deflation Miiller s Method 0 Deflation Finding All the Zeros of a Polynomial 0 Miiller s Method Finding Complex Roots a Polynomial Approximation 0 Fundamentals 0 Moving Beyond Taylor Polynomials 0 Lagrange lnterpolating Polynomials 0 Neville s Method 535 157 4 Solutions of Equations in One Variable Recall Convergence of 3 Sequence It is rare to have the luxury of quadratic convergencequot Burden Faires p83 There are a number of methods for squeezing faster convergence out of an already computed sequence of numbers We here explore one method which seems the have been around since the beginning of numerical analysis Aitken s A2 method It can be used to accelerate convergence of a sequence that is linearly convergent regardless of its origin or application A review of modern extrapolation methods can be found in quotPractical Extrapolation Methods Theory and Applicationsquot Avram Sidi Number 10 in Cambridge Monographs on Applied and Compu tational Mathematics Cambridge University Press June 2003 ISBN 0 521 66159 5 4 Solutions of Equations in One Variable 5351 7 357 Definition Suppose the sequence pn 0 converges to p with p i p for all n If positive constants A and a exists with m an1 Pl A new ipn A pia then pn f0 converges to p of order a with asymptotic error constant A Linear convergence means that a 1 and W lt 1 535 457 4 Solutions of Equations in One Variable Aitken s A2 Method lll Assume pn 0 is a linearly convergent sequence with limit p Further assume we are far out into the tail of the sequence n large and the signs of the successive errors agree ie signpn 7 p signpn1 7 p signpn2 7 p and that w m m m A the asymptotic limit pn1 7 p pn 7 p This would indicate Pn1 7 P2 Pn2 7 PPn P P7341 2Pn1l3 P2 Pn2Pn Pn2 PnP P2 Aitken s A2 Method We solve for p and get A little bit of algebraic manipulation put this into the classical Aitken form pnppn Aitken s A2 Method is based on the assumption that the 3 we Pn2 2Pn1 Pn 7 Pn1 Pn2 Pn2 2Pn1 Pn compute from pn2 pn1 and p is a better approximation to the real limit p The analysis needed to prove this is beyond the scope of this class see We solve for p and get 993 eg Sidi39s book 4 Solutions of Equations in One Variable 7 557 4 Solutions of Equations in One Variable 7 657 Aitken s A2 Method The Recipe Aitken s A2 Method Example Consider the sequence p im where the sequence is generated by the fixed oint iteration cos 0 Given a sequence finite pnnN0 or Infinite qn 1 0 sequence p pn pnl p0 A which converges linearly to some limit lterat39on pquot pquot 39 O 0000000000000000 0685073357326045 De ne the new sequences 1 1000000000000000 07 28010361467617 2 0540302305868140 0733665164585231 A pn1 7 pquot2 3 0857553215846393 0736906294340474 Pn Pn 7 7 n 0717 N 7 2 4 0654289790497779 0738050421371664 Pn2 Pn1 PH 5 07 93480358742566 0738636096881655 or 6 07 01368773622757 073 8876582817136 7 07 63959682900654 073 8992243027034 q i q 2 8 07 22102425026708 07390 42511328159 3 q 1 7 7 0717 700 9 07 50417761763761 07390 65949599941 qn2 7 29741 an 10 073 1404042422510 07390 76383318956 11 07 44237354900557 073908 1177259563 12 073 5604740436347 073908 3333909684quot Note Bold digits are correct n needs p13 and g additionally needs p14 4 Solutions of Equations in One Variable 7 757 4 Solutions of Equations in One Variable 7 357 Faster Convergence for Aitken Sequencesquot Theorem Convergence of Aitken AZ Sequences Suppose pn f0 is a sequence that converges lineary to the limit p and for n large enough we have p 7 pp1 7 p gt 0 Then the Aitken acceerated sequence pn io converges fast to p in the sense that ml lqo I700 pa 7 p We can combine Aitken s method with fixed point iteration in order to get a fixed point iteration on steroidsquot 539 i 9 57 4 Solutions of Equations in One Variable Steffensen39s Method The Quadratic g g A Waltz Quadratic Convergence Algorithm Steffensen39s Method lnitial approximation P0 tolerance TOL maximum number of iterations N0 Output Approximate solution p or failure message 1 Set i 1 2 While i S No do 3 6 y tmdmtmdmt P P0 P1 P02P2 2P1 P0 Input 4 lflp7p0l lt TOL then 4a output p 4b stop program 5 Set i i 1 6 Set p0 p 7 Output Failure after N0 iterationsquot R 4 Solutions of Equations in One Variable 7 1157 Steffensen39s Method Fixed Point Iteration on Steroids Suppose we have a fixed point iteration P07 p1 gpo p2 gp1 Once we have p0 p1 and p2 we can compute I30 po 7 P1 P02 p2 7 2P1 P0 At this point we restart the fixed point iteration with p0 p0 eg p3 P07 p4 093 p5 094 and compute A 7 p3 7 m 7 Pa2 P5 2P4 P3 5351 4 Solutions of Equations in One Variable 7 1057 U teffensen s Method Potential Breakage 3 If at some point pg 7 2p1 p0 O which appears in the denominator then we stop and select the current value of p2 as our approximate answer Both Newton39s and Steffensen39s methods give quadratic convergence ln Newton s method we compute one function value and one derivative in each iteration ln Steffensen39s method we have two function evaluations and a more complicated algebraic expression in each iteration but no derivative It looks like we got something for almost nothing However in order the guarantee quadratic convergence for Steffensen39s method the fixed point function g must be 3 times continuously differentiable eg f E C3a b see theorem 214 in Burden Faires Newton39s method quotonlyquot requires f E C2a b BF Theorem 25 m 4 Solutions of Equations in One Variable 7 1257 Zeros of Polynomials Fundamentals Definition Degree of a Polynomial A polynomial of degree n has the form n 71 Theorem The Fundamental Theorem of Algebra PX anx l anTlx l I I I l 3le 807 aquot i 0 If PX is a polynomial of degree n 2 1 With real or complex coef cients then PX O has at least one possibly complex root where the a s are constants either real or complex called the coefficients of P Why look at Polynomials we ll be looking at the PrOblem The proof is surprisingly difficult and requires understanding of PX O l e x O for a Speclal Class of functlons39 complex analysis We leave it as an exercise for the motivated Polynomials are the basis for many approximation methods hence student being able to solve polynomial equations fast is valuable We d like to use Newton s method so we need to compute PX and PX as efficiently as possible 993 m 4 Solutions of Equations in One Variable 7 1357 4 Solutions of Equations in One Variable 7 1457 Key Consequences of the Fundamental Theorem of Algebra 1 of 2 Key Consequences of the Fundamental Theorem of Algebra 2 of 2 Corollary If PX is a polynomial of degree n 2 1 with real or complex coefficients then there exists unique constants X1 X2 Xk Corollary POSSlbly COW Plexl and unique POSlthe integers m139 m2 quotquot mk Let PX and QX be polynomials of degree at most n If X1 X2 SUCh that Zi1 mi n and Xk with k gt n are distinct numbers with PX QX for PX i a X 7 Xlm1X 7 X2m2 I I I X 7 Xkym i 127 k then PX QX for all values of X 7 n The collection of zeros is unique If two polynomials of degree n agree at at least n 1 points then they must be the same m are the multiplicities of the individual zeros A polynomial of degree n has exactly n zeros counting multiplicity nu 5393 4 Solutions of Equations in One Variable 7 1557 4 Solutions of Equations in One Variable 7 1657 Horner39s Method Evaluating Polynomials Quickly 1 of 3 Horner39s Method Evaluating Polynomials Quickly 2 of 3 Let pX 3an aquot71Xn71 81X aO Huh Where did the expression come from Consider If we are looking to evaluate PX0 for any X0 let PX aan a1Xquot 1 a1X a0 anx k1 an71Xn72 a1X ao aan 2 a1Xquot 3 X a1X a0 bnan7 bkakbk1X07 k 17 2771707 then b0 PX0 We have only used n multiplications and n additions axa xXa Xa At the same time we have computed the decomp05ition WW 1 0 n71 bkl PX X 7 X0 QX b07 where quot1 k Horner39s method is Simply the computation of this parenthe5ized QX Z bk1X expression from the inside out k1 m m 4 Solutions of Equations in One Variable 7 1757 4 Solutions of Equations in One Variable 7 1357 Horner39s Method Evaluating Polynomials Quickly 3 of 3 Example1 Horner s Method For PX X4 7 X3 X2 X 7 1 compute P5 Now if we need to compute PXo we have X0 5 a4 1 as 71 82 1 al 1 80 71 b4X0 5 b3X0 20 ngo 105 b1X0 P X X 7 XoQ X 000 QXo b4 1 b3 4 b2 21 b1 106 be 529 xx0 xxo Which we can compute again using Horner39s method in n 7 1 Hence P5 529i and multiplications and n 7 1 additions 3 2 PX X 7 5X 4X 21X 106 529 Proof We really ought to prove that Horner39s method works It basically boils down to lots of algebra which shows that the Similarly we get P5 Q5 436 coefficients of PX and X 7 X0QX b0 are the same X0 5 as 1 32 4 al 21 30 106 A couple of examples may be more instructive b3X0 5 ngo 45 b1X0 330 b31 b29 b166 bo436 5351 4 Solutions of Equations in One Variable 7 1957 4 Solutions of Equations in One Variable 7 2057 Algorithm Horner39s Method FlrSt Version Homework 3 Due 1022009 12 noon Algorithm Horner39s Method Degree n coefficients ao al y PX0 z PXo 1 Setyazan Input i an X0 Output 2 Forjn717n72771 SetyXoyajzony 3 SetyX0ya0 4 Output y7 z 5 End program Part 1 Implement Steffensen39s method in matlab or other language Apply to gX 1 sinX2 p0 1 and compute the first 5 iterates R m 4 Solutions of Equations in One Variable 7 2157 4 Solutions of Equations in One Variable 7 2257 Deflation Finding All the Zeros of a Polynomial Deflation Finding All the Zeros of a Polynomial If we are solving our current favorite problem After some number of iterations of Newton s method we have PX 07 PX a polynomial of degree n 2 2 Q1X x e 2Q2X b3 7 195 o and we are using Horner39s method of computing PX and PXj then after N iterations XN is an approximation to one of the roots f P00 is an nthdegree Poynomia with n rea roots we can appy 0f PX 0 this procedure n 7 2 times to find n 7 2 approximate zeros of We have PX h E R172 and a quadratic factor Qn2x PX X XNQX boy 190 m 0 At this point we can solve Qn2x 0 using the quadratic Let h XN be the first root and Q1X QX formUla39 and We have n was Of PX 039 We can now find a second root by applying Newton s method to This Procedure is called Deflatlon39 Q1X m 5393 4 Solutions of Equations in One Variable 7 2357 4 Solutions of Equations in One Variable 7 2457 Quality of Deflation Improving the Accuracy of Deflation Now the big question is are the approximate roots f1 f2 E good approximations of the roots of PX Unfortunately sometimes no In each step we solve the equation to some tolerance ie i193 lt to Even though we may solve to a tight tolerance 10 8 the errors accumulate and the inaccuracies increase iteration by iteration Question Is deflation therefore useless 4 Solutions of Equations in One Variable m 7 2557 The problem with deflation is that the zeros of QkX are not good representatives of the zeros of PX especially for high k39s As k increases the quality of the root decreases Maybe there is a way to get all the zeros with the same quality The idea is quite simple in each step of deflation instead ofjust accepting k as a root of PX we re run Newton s method on the full polynomial PX with k as the starting point a couple of Newton iterations should quickly converge to the root of the full polynomial m 4 Solutions of Equations in One Variable 7 2657 Improved Deflation Algorithm Outline Deflation amp Improvement Wilkinson Polynomials Algorithm Outline Improved Deflation 1 Apply Newton s method to PX 7 f1 Q1x 2 For k23n72 do 3 4 3 Apply Newton s method to Qk1 a f Q x 4 Apply Newton s method to Px with 7 Apply Horner39s method to Qk1x with x fk fk f as the initial point a QkX 5 Use the quadratic formula on Qn2x to get the two remaining roots Note quotInsidequot Newton s method the evaluations of polynomi als and their derivatives are also performed using Horner39s method 4 Solutions of Equations in One Variable 5351 7 2757 The Wilkinson Polynomials I39l PWX HX k k1 have the roots 127 n but provide surprisingly tough numerical root finding problems Additional details in Math 543 In the next few slides we show the results of Deflation and Improved Deflation applied to Wilkinson polynomials of degree 9 10 12 and 13 5353 4 Solutions of Equations in One Variable 7 2857 Deflation amp Improvement PgVX and PllgX Deflation amp Improvement Pll X and PllgX Relative Error of the Computed Roots i Re39a ive Emquot h 39quotP d R S Re39a ive Emquot h 39quotP d R S Relative Error ortne Computed Roots Figure LEFT The result of the two algorithms on the Wilkinson polynomial of degree Figure LEFT The result of the two algorithms on the Wilkinson polynomial of degree 9 in this case the roots are computed so that bok lt 1076 RIGHT The result of the two algorithms on the Wilkinson polynomial of degree 10 in this case the roots are computed so that bgkl lt 10quot In both cases the lower line corresponds to improved 12 in this case the roots are computed so that bgkl lt 1074 RIGHT The result of the two algorithms on the Wilkinson polynomial ofdegree 13 in this case the roots are computed so that bgkl lt 10 3 In both cases the lower line corresponds to improved deflation and we see that we get an improvement in the relative error of several orderm deflation and we see that we get an improvement in the relative error of several orderm of magnitude of magnitude 4 Solutions of Equations in One Variable 7 2957 4 Solutions of Equations in One Variable 7 3057 What About Complex Roots ML39iller s Method One interesting annoying feature of polynomials With real Miiller s method is an extension of the Secant method coef c39entzs 395 that they may haw omplex rOOts e g Recall that the secant method uses two points Xk and Xk1 and X T 1 has the roots 7 7 I39 Where by defmltlon the function values in those two points 1Xk and fXk1 The I V Tl zero crossing of the linear interpolant the secant line is used as If the initial approximation given to Newton s method is real all the next iterate Xk1 the successwe lterates Will be real which means we Will not find M llerls method takes the next logical step it uses three points complex roots39 Xk Xk1 and X192 the function values in those points 1Xk One way to overcome this is to start with a complex initial 1st1 and 1Xk72 a second degree polynomial fitting these approximation and do all the computations in complex arithmetic three POints is found and its zerocrossng is the neXt iterate Xk1 Another solution is Miiller s Method Next slide fX X4 7 3X3 7 1 Xkez 15 Xkei 25 Xk 35 m 5393 4 Solutions of Equations in One Variable 7 3157 4 Solutions of Equations in One Variable 7 3257 Miiller s Method Illustration fX X4 7 3X3 7 1 ML39iller s Method Fitting the Quadratic Polynomial We consider the quadratic polynomial mX aX 7 Xk2 bX 7 Xk c at the three fitting points we get Xk72 3Xk72 7 X02 bXlt72 7 Xk C 7Xk71 3Xk71 7 X02 9in1 7 Xk c 1Xk c We can solve for a b and c Xk71 7 Xkfxk72 7 fXk 7 X192 7 Xkfxk71 7 fXk Xk72 7 XkXk71 7 XkXIlt72 7 Xk71 a b X192 7 Xk2fxk71 7 fXk 7 X191 7 Xk2fxk72 7 fXk Xk72 7 XkXk71 7 XkXk72 7 Xk71 C fXk 0 1 2 3 4 m m 4 Solutions of Equations in One Variable 7 3357 4 Solutions of Equations in One Variable 7 3457 Miiller s Method Identifying the Zero ML39iller s Method Algorithm AI 39th Mquot 39 Mth d We now have a quadratic equation for X 7Xk which gives us two go m U ers e O I I Possibilities for Xk1 Input X0 X1 X2 tolerance to max Iterations N0 Output Approximate solution p or failure message Xk 1 iXk QC 1 Set hl X1 7 X0 hz X2 7 X1 61 fx1 7 fx0h1 l bixb2 74ac 52 lflle flxlllhzv d52 51h2h1v13 2 Whilej Ill0 do 3 7 ln Muller 5 method we select 3 b62h2d D b274fX2d complex Xk1Xk7 2C 4 Ifib7DiltibDithensetEbDelsesetEb7D bsignbvb2 74ac 5 Set h72fxzE pX2h 6 If ihi lt fol then output p stop program we are maximizing the absolute size of the denominator hence 7 Set X0 X1 X1 X2 X2 p hl X1 7 X0 we select the root closest to Xk hz X2 7 X1 61 fX1 7 XOHhL 62 02 7 01th Note that if b2 7 4ac lt 0 then we automatically get complex roots d 62 7 llhz l ml 1 1 l 1 8 output Muller39s Method failed after N0 iterations 5 4 Solutions of Equations in One Variable 7 3557 4 Solutions of Equations in One Variable 7 3657 Now We Know quotEverythingquot About Solving fX 0 Recap continued Let s recap Things to remember The relation between root finding fX O and fixed point gX X Key algorithms for root finding Bisection Secant Method and Newton s Method Know what they are the updates how to start one or two points bracketing or not bracketing the root can the method break can breakage be fixed Convergence properties Also know the mechanics of the Regula Falsi method and understand why it can run into trouble Fixed point iteration Under what conditions do FP iteration converge for all starting values in the interval Basic error analysis order a asymptotic error constant A Which one has the most impact on convergence Convergence rate for general fixed point iterations Multiplicity of zeros What does it mean How do we use this knowledge to quothelpquot Newton39s method when we39re looking for a zero of high multiplicity Convergence acceleration Aitken s AZ method Steffensen39s Method Zeros of polynomials Horner39s method Deflation with improvement Muller s method 539 m 4 Solutions of Equations in One Variable 7 3757 4 Solutions of Equations in One Variable 7 3357 Homework 3 Due 1022009 12 noon Final Version New Favorite Problem Interpolation and Polynomial Approximation Part 1 lmplement Steffensen39s method in matlab or other language Apply to gX 1 sinX2 p0 1 and compute the first 5 iterates Part 2 lmplement ML39iller s method in matlab or other language Use to solve BF267bce but go up to accuracy 10 8 Turn in your code for ML39iller s method and the iteration tables Xk7 fXk 0 for all methods nl 5353 4 Solutions of Equations in One Variable 7 3957 4 Solutions of Equations in One Variable 7 4057 Weierstrass Approximation Theorem Illustrated Weierstrass Approximation Theorem The following theorem is the basis for polynomial approximation Theorem Weierstrass Approximation Theorem Suppose f E Ca7 b Then V5 gt O 3 a polynomial PX lfX 7 PXl lt e VX 6 a7 b Note The bound is uniform i39e valid for all X in the interval Note The theorem says nothing about how to find the polyno mial or about its order 0 2 4 6 8 10 Figure Weierstrass approximation Theorem guarantees that we maybe with sub stantial work can find a polynomial which fits into the tube around the function 1 no matter how thin we make the tube 539 m 4 Solutions of Equations in One Variable 7 4157 4 Solutions of Equations in One Variable 7 4257 Candidates the Taylor Polynomials Taylor Approximation of ex on the lnterval O7 3 Natural Question Are our old friends the Taylor Polynomials good candidates for polynomial interpolation Answer No The Taylor expansion works very hard to be accurate in the neighborhood of one point But we want to fit data at many points in an extended interval Next slide The approximation is great near the expansion point X0 0 but get progressively worse at we get further away from the point even for the higher degree approximations n u 5393 4 Solutions of Equations in One Variable 7 4357 4 Solutions of Equations in One Variable 7 4457 Lookahead Polynomial Approximation Interpolation Lagrange Polynomials Idea Instead of working hard at one point we will prescribe Clearly Taylor polynomials are not well suited for approximating a a number of points through which the polynomial must pass function over an extended interval We are going to look at the following As warm up we will define a function that passes through the o Lagrange polynomials Neville s Method This Lecture P0ints X07 f00 and X17 fX1 FlrStx lets de ne o Newton s divided differences LOX X TX1 7 L1X X T X0 7 X0 X1 X1 X0 Hermite lnterPOlatlon39 and then define the interpolating polynomial 9 Cubic splines Piecewise polynomial approximation PX LoXfXo L1XfX17 o Parametric curves then PXO fXO7 and PX1 X0 0 B zier curves used in eg computer graphics Px is the unique linear polynomial passing through 539 X07 fXOD arid X17fX1 m 4 Solutions of Equations in One Variable 7 4557 4 Solutions of Equations in One Variable 7 4657 An n degree polynomial passing through n 1 points Example of Ln7kx We are going to construct a polynomial passing through the points X07fXoi X17fX1i X27fX2i me fgt90 We define Ln7kx the Lagrange coefficients n X Xi X Xo X inl X Xk1 X an Lquotkx H 7 i0 k Xk Xi Xk Xo Xk inl Xk Xk1 Xk Xn which have the properties LnkXlt 1 LnkXl39 07 Vi Se k 0 1 2 3 4 5 6 This is L673X for the points X i7 139 O7 76 m 5353 4 Solutions of Equations in One Variable 7 4757 4 Solutions of Equations in One Variable 7 4857 The nth Lagrange lnterpolating Polynomial Error bound for the Lagrange interpolating polynomial We use Ln7kX k 07 7n as building blocks for the Lagrange interpolating polynomial 1 PM ZfW mkMr keo which has the property POW fXl7 Vl077n This is the unique polynomial passing through the points XirfXi i 07 7n m 4 Solutions of Equations in One Variable 7 4957 Suppose X7 i O7 7 n are distinct numbers in the interval a7 b and f E C 1a7 b Then VX 6 a7 b 3 X 6 a7 b so that no X n X PLagrangeX 7 X7 where PLagangeX is the n ih Lagrange interpolating polynomial Compare with the error formula for Taylor polynomials fltn1gtlslxll n 1 X Xoln l 7X PTaylorX l Problem Applying the error term may be difficult m 4 Solutions of Equations in One Variable 7 5057 The Lagrange and Taylor Error Terms Practical Problems Just to get a feeling for the non constant part of the error terms in the Lagrange and Taylor approximations we plot those parts on the interval 074 with interpolation points X i i 0717 74 Figure LEFT The nonrconstant error terms for the Lagrange interpolation oscillates in the interval 744 the value zero at the node point xk and RIGHT the nonrconstant error term for the Taylor m extrapolation grows in the Interval 0 1024 4 Solutions of Equations in One Variable 7 5157 Applying estimating the error term is difficult The degree of the polynomial needed for some desired accuracy is not known until after cumbersome calculations checking the error term If we want to increase the degree of the polynomial to eg n 1 the previous calculations are not of any help Building block for a fix Let f be a function defined at X07 7X and suppose that m17 m27 7 mk are k lt n distinct integers with O S m S n Vi The Lagrange polynomial that agrees with fX the k points Xm17Xm27 7ka is denoted Pm17m277mkx Note m17m277mk C 07177n 5393 4 Solutions of Equations in One Variable 7 5257 Increasing the degree of the Lagrange Interpolation Theorem Let f be defined at X07X17 7Xk and X and Xj be two distinct points in this set then PX X XjP0j71j1akX X XiP0i71i1kX Xquot 7 is the kth Lagrange polynomial that interpolates f at the k 1 points X07 7Xk m Recursive Generation of Higher Degree Lagrange Interpolating Polynomials X0 P0 X1 P1 P01 X2 P2 P12 P012 X3 P3 P25 P123 P0123 X4 P4 P34 P234 P1234 P01234 m 4 Solutions of Equations in One Variable 7 5357 4 Solutions of Equations in One Variable 7 5457 Neville s Method Algorithm Neville s Method Iterated Interpolation The notation in the previous table gets cumbersome We introduce the notation QLastYDegree PLashDegregw last the table Algor39thm39 NeV39IIe s MethOd becomes To evaluate the polynomial that interpolates the n 1 points X0 Q09 X7 fX i 07 7 n at the point X X1 Ql QM 1 Initialize Qho fX X2 Q20 Q21 Q22 2 X3 Q30 Q31 Q32 Q33 FOR I 1 n X4 Q40 Q41 Q42 Q43 Q44 FDR j 1 I Compare with the old notation QIj Xi Xi j X0 P0 END X1 P1 P01 END X2 P2 P12 P012 X3 P3 P23 P123 Po123 3 Output the Q table X4 P4 P34 P234 P1234 P01234 Drl 03 4 Solutions of Equations in One Variable 7 5557 4 Solutions of Equations in One Variable 7 5657 Homework 3 Due 1022009 12 noon Final Version Part 1 Implement Steffensen39s method in matlab or other language Apply to gX 1 sinX2 p0 1 and compute the first 5 iterates Part 2 Implement Muller s method in matlab or other language Use to solve BF267bce but go up to accuracy 10 8 Turn in your code for Muller s method and the iteration tables Xk7 fXk 0 for all methods HW 4 Part 1 Homework 4 Due 1092009 lmplement Neville s method 99 4 Solutions of Equations in One Variable 7 5757 Numerical Analysis and Computing Lecture Notes 07 Numerical Differentiation and Integration Differentiation Richardson s Extrapolation Integration Peter Blomgren ltblomgrenpeter gmail com Department of Mathematics and Statistics Dynamical Systems roup Computational Science Raearch Center San Diego State University San Diego CA 921827720 httpterminussdsuedu Outline 9 Numerical Differentiation 0 Ideas and Fundamental Tools 0 Moving Along a Richardson39s Extrapolation 0 A Nice Piece of Algebra Magic 0 Homework 6 Preliminary Version 9 Numerical Integration Quadrature 0 The Why and Introduction 0 Trapezoidal amp Simpson39s Rules 0 Newton Cotes Formulas O Homework 6 Final Version Richardson39s Extrapolation f fgtltdgtlt 7 151 Richardson39s Extrapolation f fgtltdgtlt 7 251 Numerical Differentiation The Big Picture Numerical Differentiation Definition Derivative as a limit The goal of numerical differentiation is to compute an accurate The derlvatlve Of f at X0 395 approximation to the derivatives of a function XO h XO fXo imof A Given measurements 0 of the underlying function fX at the node values Xi0 our task is to estimate fx and later higher derivatives in the same nodes The obvious approximation is to fix h small and compute The strategy Fit a polynomial to a cleverly selected subset of the fXO m M nodes and use the derivative of that polynomial as h the approximation Of the derivative Problems Cancellation and roundoff errors For small values Richardson39s Extrapolation ffxdx m 351 of h fX0 h m fX0 so the difference may have very few significant digits in finite precision arithmetic gt smaller h not necessarily better numerically 535 7451 2 Richardson39s Extrapolation f fgtltdgtlt Main Tools for Numerical Differentiation lof2 Main Tools for Numerical Differentiation 2 of 2 In the discussion on Numerical Differentiation and later Integration we will rely on our old friend nemesis the Taylor expansions Theorem Taylor s Theorem Suppose f E Cquota7 b f 13 on a7 b and X0 6 a7 b Then VX 6 a7 b 35X E minX0X7 maxX0X with f X PnX RnX where quot k quot1 Px if X x 7 w Rnx f n g x 7 X0quot1 PX is the Taylor polynomial of degree n and RX is the remainder term truncation error Our second tool for building Differentiation and Integration schemes are the Lagrange Coefficients Recall Ln7kX is the nth degree polynomial which is 1 in Xk and O in the other nodes Xj7 j i k Previously we have used the family Ln70X Ln71X Ln7X to build the Lagrange interpolating polynomial A good tool for discussing polynomial behavior but not necessarily for computing polynomial values cf Newton39s divided differences Now lets combine our tools and look at differentiation AR m Richardson39s Extrapolation f fxdgtlt 7 551 Richardson39s Extrapolation f fgtltdgtlt 7 651 Getting an Error Estimate Taylor Expansion Using Higher Degree Polynomials to get Better Accuracy Suppose X0X17 7X are distinct points in an interval I and f E C 1I we can write f X h 7 f X W fXo hfXo hgf x fxo n H X Xk k 7 fXo gf 5x fx k axonx W wkax 0 t 1 If f X is bounded le Lagrange Interpi Poly Error Term f X lt M7 V500 E XOVXO h Formal differentiation of this expre55lon glves h h f f L d Hid 9 fltn1gtlsl X X X 7 X t en we ave 70 k quot k dX n 1 fX0h7fX0 Mlhl T m f X0 N f With an error less than HZOX 7Xk i fn1EX 39 n 1 dX Thls 395 the apprOleatlon error39 Note When we evaluate f ltj at the node points the last term Roundoff error emach x 10 not taken into account m gives no contribution 2 we don39t have to worry about it Richardson39s Extrapolation f fxdgtlt 7 751 2 Richardson39s Extrapolation f fxdgtlt 7 351 Exercising the Product Rule for Differentiation a l iriing 1 n 1 1 n n W H Hr 10 kok j Now if we let X X for some particular value of 6 only the product which skips that value ofj E is non zero eg 1 n n W H Hr 10 kok j Example 3 point Formulas llll Building blocks X 7 MW 7 X2 L2ox 7 my 7 W L21X 7 X1 7 x0X1 X27 i w L22X X2 7 X0X2 X07 Formulas fXj too i fX2 X7X1X7X2X7Xnx7xox7xzX7Xnml 1 I1 x Homoq 7 Xk 539 Richardson39s Extrapolation ffxdx 2X1 7 X1 7 X2 X0 X1X0 7 X2 2X1 7 X0 7 X1 X2 7 X0X2 7 X1 Richardson39s Extrapolation ffxdx The n 1 point formula for approximating fXj Putting it all together yields what is known as the n 1 point formula for approximating fXj2 WW5 f X39 f X L X39 7 X 7X J g k nk J n J k J Note The formula is most useful when the node points are equally spaced it can be computed once and stored ie Xk X0 kh Now we have to compute the derivatives of the Lagrange coefficients ie LnykX We can no longer dodge this task m Richardson39s Extrapolation f fgtltdgtlt 7 1051 Example 3 point Formulas lllll When the points are equally spaced 1 h2 f xo 73fxo W1 7 fX2 gram 1 h2 3 f X1 7 7fXo fX2l 7 gf 51 1 h2 3 f ltle fxo 7 W1 3fX2 f 52 Use X0 as the reference point Xk X0 l kh 1 h2 f xo 73fxo 4fxo h a We 2h Ftwo 1 h2 r x0 h E 400 fx0 2h 7 Erma 1 h2 r x0 2h fx0 7 4fx0 h 3fx0 2h Fl 352 2g Richardson39s Extrapolation f fgtltdgtlt 7 1251 Example 3 point Formulas IIIIII 1 h2 f x0 73fx0 4fx0 h 7 fX0 2h 714350 96 1 hz 3 f X0 E I flxo h quot0 II ff 51 1 I72 3 f x0 7 g m 7 2h 7 X0 7 h 3m l at 52 After the substitution X0 7l7 h 7 X3 in the second equation and X0 7l7 2h 7 X in the third equation Note1 The third equation can be obtained from the first one by setting h 7gt 7h Note2 The error is smallest in the second equation Note3 The second equation is a twosided approximation the first and third one si ed approximations Note4 We can drop the superscripts f m Richardson39s Extrapolation ffxdx 7 1351 3 point Formulas Illustration Forward Formula 3 point Formulas Illustration Centered Formula fxo 7 7fxo 7 h x0 h 7 2f351 m Richardson39s Extrapolation f fgtltdgtlt 7 1451 3 point Formulas Illustration Backward Formula 2 fxo 7 217h73fxo 4fx0 h 7 x0 2h gram 2 Richardson39s Extrapolation ffxdx 7 1551 fxo 7 2717 fx0 7 2h 7 4fxo 7 h 3fxo hgfmaz gas 7 Richardson39s Extrapolation f fgtltdgtlt 7 1651 5 point Formulas 5 point Formulas Illustration Centered Formula If we want even better approximations we can go to 4 point 5 point 6 point etc formulas The most accurate smallest error term 5 point formula is fXO W f55 Sometimes eg for end point approximations like the clamped splines we need one sided formulas fXO 725fx048fx0h736fxi12h16fx03h73fx04h i 54 1435 fXO W f55 539 Richardson39s Extrapolation f fgtltdgtlt 7 1751 Richardson39s Extrapolation f fgtltdgtlt 7 1351 Higher Order Derivatives Wrapping Up Numerical Differentiation We now have the tools to build high order accurate approximations We can derive approximations for higher order derivatives in the t0 the derivative same way Fit a kth degree polynomial to a cluster of points Xi7 fXrilIlt1i and compute the appropriate derivative of the We Will use these tools and Similar techniques in building Polynomial in the POint of interest integration schemes in the followmg lectures The standard centered approximation of the second derivative is Also these aPPrOleatlons are the baCkbone Of nite difference given by methods for numerical solution of differential equations see Math 542 and Math 693b f X h 7 2f X f X 7 h fX0 W Oh2 Next we develop a general tool for combining low order accurate approximations to derivatives integrals anything almost in order to hierarchically constructing higher order approximations m 5393 2 Richardson39s Extrapolation f fgtltdgtlt 7 1951 25 Richardson39s Extrapolation f fgtltdgtlt 7 2051 Richardson s Extrapolation Building High Accuracy Approximations lV What it is A general method for generating high accuracy results using low order formulas Applicable when The approximation technique has an error term of predictable form eg 00 M 7 VJh Z Ekhk k7 where M is the unknown value we are trying to approximate and the approximation which has an error Ohj Consider two first order approximations to M M 7 N1h Z Ekhquot k1 and w hk M 7 N1h2 Z Ek27k k1 If we let N2h 2N1h2 7 N1h then H M 7 N2h 7 2E 7 E1hZEEhk k2 Procedure Use two approximations of the same order but with 0 different h eg Njh and Combine the two Where approximations in such a way that the error terms of ES Ek lt klil 7 1gt order h cancel 2 99 Hence N2h is now a second order approximation to M m Richardson39s Extrapolation f fgtltdgtlt 7 2151 Richardson39s Extrapolation f fgtltdgtlt 7 2251 Building High Accuracy Approximations llV Building High Accuracy Approximations IllV We can play the game again and combine N2h with N2h2 to get a Let s derive the general update formula Given third order accurate approximation etc M 7 NJh Ejhi 0 W1 4Nh27Nh Nh27Nh N3h42 2 N2h2 2 3 2 h hm MMh2 LilTm N h 2 7 N h N40 N3h2 M We let mi 7 Ami2 Nj1h aJNJh two2 In geilertal Zombining 39twot39jth order approximations to get a However if we want Nj1h to approximate My we must have J s or er apprOXIma ion aj j 1 Therefore Nhz A Nlh Nj1h lehZ J h 2171 M7N h Eh 17 E7 Cir1 l m 11 0 11 0 1 121 l m Richardson39s Extrapolation f fgtltdgtlt 7 2351 2 Richardson39s Extrapolation f fgtltdgtlt 7 2451 Building High Accuracy Approximations lVV Now 1 M 7 Vj1h Ejh la 1 7 04 0W We want to select 09 so that the expression in the bracket is zero This gives 71 2139 217 1 1 1 WE l ai W12J71 Therefore Nj1h two2 Building High Accuracy Approximations VV The following table illustrates how we can use Richardson39s extrapolation to build a 5th order approximation using five lst order approximations 0h 0012 0013 0014 9015 N107 N1h2 N207 N1h4 N2h2 N3h N1h8 N2h4 N3h2 N407 N1h16 N2h8 N3h4 N4h2 N507 T Measurements T Extrapolations T 539 53539 Richardson39s Extrapolation f fxdgtlt 7 2551 Richardson39s Extrapolation f fgtltdgtlt 7 2651 Example cf slide13 and slide17 Example fX XZGX The centered difference formula approximating f x0 can be expressed fx h 7 fx7 h h2 fx 2xX2eX f Xo 7 if 5 0h4 i 2 7 2h 6 X 0 f 2 7 8e 7 59112 N2h error term 1 7O 15 8197 f I I w i 64 566 Fwd Difference 2 7 Pt In order to eliminate the h2 part of the error we let our new 13980 196009 0 1 aPPlemath be 13990 24391361 w 59384 Ctr Difference 3pt N h 2 N h 200 295562 0 2 N3h N2h2 210 360128 60201 Ctr Difference 220 436811 f h f h fxh7fx7h7fx2h7fx72h 230 521634 459384 602013 59111 Richardson 7 X i X 2n 4n N32h 7 2n 3 r1s7sr19sr217r22 59 111 5Pt 7 sfxh7sfx7h 7 fx2h7fx72h 12 39 39 6h 6h 1hfx 7 2h 7 8fx 7 h 8fx h 7 fx 2h 93 535 Richardson39s Extrapolation f fxdgtlt 7 2751 2 Richardson39s Extrapolation f fxdgtlt 7 2351 Wrap up Homework 6 Due Friday 1162009 12 noon We are going to use Richardson extrapolation in combination with some of the simpler integration schemes we will develop in order to generate general schemes for numerically computing integrals up to high order Note In order to use Richardson extrapolation we must know the form of the error hence finding error terms in our approximations turns out to have a very practical use Integration lntroduction The quotWhyquot After taking calculus I thought I could differentiate andor integrate every function Then came physics mechanical engineering etc The need for numerical integration was painfully obvious Sometimes most of the time the anti derivative is not available in closed form Part 1 x dX FX c BF 415 Iv I BF4 1 AntkDexlvatlve BF4 2 9 539 53539 Richardson39s Extrapolation f fxdgtlt 7 2951 Richardson39s Extrapolation f fgtltdgtlt 7 3051 Numerical Quadrature Building Integration Schemes with Lagrange Polynomials Given the nodes X0X1 Xn we can use the Lagrange The basic idea is to replace integration by clever summation interpolating polynomial b i quot fltn1gts 1 quot fX dX 7 af P X X fL 39X With error E X xix a 0 n I nl 7 n n I l0 l0 where a 3 X0 lt X1 lt lt X S b f fX to Obtain b b b H The coefficients a and the nodes X are to be selected fx dx PX dX EX dx 2 a a hV l The Approximation The Error Estimate 5331 535 Richardson39s Extrapolation f fxdgtlt 7 3151 2 Richardson39s Extrapolation f fxdgtlt 7 3251 Identifying the Coefficients b b n n b n PX dX Z LnHX dX Z 25 Lnx dX 2 ag a a 0 0 a 70 3i Hence we write b n fX dx x Zam a 0 with error given by b b n1 X n Ef EXdx gvix dx Example 1 Trapezoidal Rule llll Let a X0 lt X1 b and use the linear interpolating polynomial P1XfbiX1f1X7XO 7X1 X1 X0 Note Can we change the order of integration f and summation E as we did above In this case where we are integrating a polynomial over a finite interval it i OK For technical details see a class on real analysis eg Math 5348 m m Richardson39s Extrapolation ffxdx 7 3351 Richardson39s Extrapolation f fxdx 7 3451 Example 1 Trapezoidal Rule lllll Example 1 Trapezoidal Rule llllll Thequot Hence b X1 X 7 X1 X 7 X0 b 2 2 X1 3 wa fo X X X X dx poo fol wax loll M H finE a X3 X1 0 1 1 0 a 2X0 7 X1 2X1 7 X0 X0 12 7 f sxx a xoxx 7 x1 dx X 7 X h3 2 Xe M lfo a 7 4W5 2 12 The error term use the Weighted Mean Value Theorem x1 X1 b 3 f f h f sxx a xoxx 7 x1 dx We x a xoxx 7x1 dx ax dx h M 7 7W5 h b a a x0 x0 a 2 12 3 X1 3 7w Xi mxz X0X1X2 iifK il 3 2 X0 6 where hX1 7X0 bia quot quot S353 Richardson39s Extrapolation ffxdx 7 3551 Richardson39s Extrapolation f fxdx 73651 Example Za Simpson s Rule suboptimal error bound Let X0 a X1 7 X2 b let h and use the quadratic interpolating polynomial X 7 MW 7 X2 X 7 XoX 7 X2 b X2 W x lflxolm i X fag X X0 X1 dX X2 7 X0X2 X1 ngg ig g hmgvam WuwxhF i g i qomnman Example Qb Simpson s Rule optimal error bound The optimal error bound for Simpson s rule can be obtained by Taylor expanding fX about the mid point X1 X W X A X M xii mm 7 x1 7 2 1x 7 x1 7 5 7 x03 if 5 x 7 x04 Then formally integrating this expression fgtlt1X 7 X02 fHX1 3 x x 2 6 1 4 X fab lfx1fx1x7x1 7 5 Mimi ax After use of the weighted mean value theorem and thezthe approximation f X1 h712fxo 7 2fX1 fX2 7 f4 and a whole lot of algebra see BF pp 189 190 we end up with X2 X X fX04fX1fX2 7L5 X0fd h 3 90f 5 99 Richardson39s Extrapolation f fgtltdgtlt 7 3751 Richardson39s Extrapolation f fgtltdgtlt 7 3351 Example 2 Simpson s Rule Integration Examples b X0 4fX1 fX2 5 4 dX h l fX ab 7 fxdx Trapezoidal Error Simpson Error a x 01 12 0 05 0 x2 01 13 05 016667 033333 0 x3 01 14 05 025000 025000 0 x4 01 15 05 030000 020833 00083333 ex 01 6 7 1 18591 014086 17189 00005793 x XVSimpsm sRule The Trapezoidal rule gives exact solutions for linear functions The error terms contains a second derivative Simpson s rule gives exact solutions for polynomials of degree less than 4 The error term contains a fourth derivative 5331 535 Richardson39s Extrapolation f fgtltdgtlt 7 3951 f X Richardson39s Extrapolation f fgtltdgtlt 7 4051 Degree of Accuracy Precision Definition Degree of Accuracy The Degree of Accuracy or precision of a quadrature formula is the largest positive integer n such that the formula is exact for Xk k01n Newton Cotes Formulas Two Types Closed The n 1 point closed NCF uses nodes X X0 1h 139 01 7n whereXO a X band h bian It is called closed since the endpoints are included as nodes Closed p With this definition I Scheme Degree of Accuracy Trapezoidal 1 Simpson s 3 Trapezoidal and Simpson s are examples of a class of methods l i7 i l 7 Xll Ii 392 39n l le 7 39 known as Newton Cotes formulas m 53539 Richardson39s Extrapolation f fgtltdgtlt 7 4151 Richardson39s Extrapolation f fgtltdgtlt 7 4251 Newton Cotes Formulas Two Types Open Closed Newton Cotes Formulas Open The n 1 point open NCF uses nodes X X0 ih iO1nwhere hbian2 and X0 ah Xn b 7 h We label X11 a Xn1 b The approximation is b n fX dxm Zai xi a 0 where n Xquot Xquot X 7 X a L 739XdX ijdx I X quotI x w 1 Note The Lagrange polynomial Ln739X models a function which takes the value 0 at all Xj j i i and 1 at X Hence the l l l u39i Nu xi x2 In xiib x coef CIent a captures the Integral of a function which is 1 nu in X and zero in the other node points 535 Richardson39s Extrapolation f fgtltdgtlt 7 4351 2 Richardson39s Extrapolation f fgtltdgtlt 7 4451 Closed Newton Cotes Formulas Error Theorem Suppose that 20 afX denotes the n l 1 point closed Newton Cotes formula With X0 a Xquot b and h b 7 an Then there exists 5 6 a7 b for which 17 7 quot hn3fn25 A fxdx 7 afx W quott2t7 1t7 ndt 0 ifn is even and f E Cquot2a7 b and hn2fn15 b n A fxdx gamer H 1 quottt71t7ndt 0 Closed Newton Cotes Formulas Examples n 2 Simpson s Rule h h5 4 7 fXo 4fX1 fX2 if 5 3 90 n 3 Simpson s g Rule 3h 375 g fxo 3fx1 3fx2 fXs a Erma n 4 Boole s Rule ifn is odd and f 6 CW1 a b 2h 8h7 l l E mm 7 32 7 12mg 7 32fX3 7 7fX4 7 ning N t th t h 39 39 t th d f 39 39 39 1 o e a w en n Is an even ln eger e egree o preCI5lon ls n When n ls odd the degree of preCI5lon ls only n Richardson39s Extrapolation f fgtltdgtlt 7 4551 Richardson39s Extrapolation f fgtltdgtlt 7 4651 Open Newton Cotes Formulas Open Newton Cotes Formulas Error Theorem Su ose that Z afX denotes the n 1 point open Newton Cotes Th t pp 0 e approx39ma Ion ls formula With X71 a Xn1 b and h b 7 an l 2 Then there b Xn n existsE 6 a7 b for Which a x dx 7 X x dX m Zarnxr W wm 2 1 0 fxdx 2am n 2 I t t7 1 t7 ndt where ifn is even and f E Cquot2a7 b and Xn1 xquot 7 X 7 X a Ln739X dX 7 dx 7 7 quot hquot2fquot15 quot1 X71 X0 111 XI 7 X1 fxdx7 ganxw W 71 tt 7 1 t 7 ndt j 7b i ifn is odd and f E Cr39l1a7 b w Note that when n is an even integer the degree of precision is n l 1 h I When n is odd the degree of precision is only n m Richardson39s Extrapolation f fgtltdgtlt 7 4751 2 Richardson39s Extrapolation f fgtltdgtlt 7 4351 Open Newton Cotes Formulas Examples Divide and Conquer Say you want to compute hs 100 n 0 2hfx0 3l g 0 f X dX Is it a Good ldeaTM to directly apply your favorite Newton Cotes 3h 3h3 formula to this integral 1 7 f f if n 2 l x0 ml 4 s No 5 With the closed 5 point NCF we have h 25 and h590 105 so n 2 2fx0 7 fx1 2fX2 414h 744 even with a bound on f6 the error will be large 3 45 Better Apply the closed 5 point NCF to the integrals 4i1 5h 9575 n 3 1mm fX1 xz 11mg rms fx dxy I 0717 724 I m then sum Composite Numerical Integration next time m Richardson39s Extrapolation f fgtltdgtlt 7 4951 Richardson39s Extrapolation f fgtltdgtlt 7 5051 Homework 6 Due Friday 1162009 12 noon Part 1 BF4 1 5 BF4 1 27 BF4 2 9 Part 2 BF 431 ab BF 435 ab m 2 Richardson39s Extrapolation ffxdx 7 5151 View quot1 mi if Peter Blomgren blomgren peter gmail com Department of Mathematics and Statistics ystems Group Computationai Sciences Research Center San Diego State University San Diego CA 9218277720 Dynamica s httpterminussc39 uedu Fall 2009 w My man 2m mum mm Peter Blom Outline 0 Approximation Theory Discrete Least Squares 0 Introduction 0 Discrete Least Squares 0 Discrete Least Squares 0 A Simple Powerful Approach 9 Discrete Least Squares 0 Application Cricket Thermometer Peter Blon ren rele Least Squa Approximation Theory Discrete Least Squares DisI Leash ares Sometimes we get a lot of data many observations and want to e Introduction Introduction Matching a Few Parameters to a Lot of Data fit it to a simple mod 6 47 27 39 0 Approximation Approximation Theory Discrete Least Squares Introduction Disci Leash ares Introduction Matching a Few Parameters to a Lot of Data Sometimes we get a lot of data many observations and want to fit it to a simple model i i i i 6 no a Approximation Approximation Theory Discrete Least Squares Introduction Disci Leash ares Introduction Matching a Few Parameters to a Lot of Data Sometimes we get a lot of data many observations and want to fit it to a simple model Approximation Approximation Theory Discrete Least Squares Introduction Disci Leash ares Introduction Matching a Few Parameters to a Lot of Data Sometimes we get a lot of data many observations and want to fit it to a simple model 6 n 2 o u 0 i i i i i i i i i 1 2 3 4 Measured Data Quadratic BestFiL Approximation Approximation Theory Discrete Least Squares Introduction DisI Leash ares Introduction Matching a Few Parameters to a Lot of Data Sometimes we get a lot of data many observations and want to fit it to a simple model 6 n Approximation Approximation Theory Discrete Least Squares Introduction DisI Leash ares Introduction Matching a Few Parameters to a Lot of Data Sometimes we get a lot of data many observations and want to fit it to a simple model 6 Undalymg function fx 1 x xwzs Measured Data Average LinearBesLFii Quadratic BestFiL PDFriink co e Approximation IS Approximation Theory Dis Le introduction Why a Low Dimensional Model Low dimensional models eg low degree polynomials are easy to work with and are quite well behaved high degree polynomials can be quite oscillatory All measurements are noisy to some degree Often we want to use a large number of measurements in order to average out random noise Approximation Theory looks at two problems 1 Given a data set find the best fit for a model Le in a class of functions find the one that best represents the data 2 Find a simpler model approximating a given function Peter Blomgren Discrete Least Squares Approximation Approximation Theory rele Least Squares Introduction 25 Leash 3995 l ir i Lax nan D Interpolation A Bad Idea We can probably agree that trying to interpolate this data set l i l i l i l 6 39 I u quot39 l 39 a 39u u 4 o I 39I o 27 39o 39 7 39 39 0 i l i l i l i l i 1 2 3 4 with a 50th degree polynomial is not the best idea in the world Even fitting a cubic spline to this data gives wild oscillations I tried and it was not pretty Discrete Least Squares Approxim Approximation Theory Discrete Least Squares Introduction Di Leash ares Defining Best Fit the Residual We are going to relax the requirement that the approximating function must pass through all the data points Now we need a measurement of how well our approximation fits the data A definition of best fit Approximation Approximation Theory Dis Le introduction IS Defining Best Fit the Residual We are going to relax the requirement that the approximating function must pass through all the data points Now we need a measurement of how well our approximation fits the data A definition of best fit If fX are the measured function values and aX are the values of our approximating functions we can define a function rX fX 7 aX which measures the deviation residual at X Notice that F rx07 rx17 rxnT is a vector Notation From now on f fx 3 aX and r rX Further f f0 fnT a 30317anT and F r07 r17 rnT Peter Blomgren Discrete Least Squares Approximation Approximation Theory Discrete Least Squares Introduction DisI Leash ares i L What is the Size of the Residual There are many possible choices 5g 0 The abssum of the deviations gt El iiFiil iiFiiz o The largest of the deviations E00 giagnirii 42gt E00 iiriioo In most cases the sum of the squares version is the easiest to work with From now on we will focus on this choice I Approximation Approximation Theory Discrete Least Squares Disci Leash ares i Discrete Least Squares Discrete Least Squares Approximation We have chosen the sum of squares measurement for errors Lets find the constant that best fits the data minimize n EC Zh a c i0 if C is a minimizer then EC 0 derivative at a maxmin is zero Approximation Approximation Theory Di Di ele Least Squares Lea pares i r Dis 919 Least Squares Discrete Least Squares Approximation We have chosen the sum of squares measurement for errors Lets find the constant that best fits the data minimize n EC Zh a c i0 if C is a minimizer then EC 0 derivative at a maxmin is zero E C7 2 7C72 2 r2n1C7 E C2n1 Positive Set 0 and solve for c Squares Approx39 Approximation Theory Discrete Least Squares Dis as res ietn e Discrete Least Squares Discrete Least Squares Approximation We have chosen the sum of squares measurement for errors Lets find the constant that best fits the data minimize n EC Zh a c i0 if C is a minimizer then EC 0 derivative at a maxmin is zero E C7Z2 iCi2Z 2n1C E C2n1 i0 i0 Positive Set 0 and solve for 0 hence 1 quot 7 C 7n1 7 itisaminsmceE C72n1gt0r is the constant that best the fits the data Note C is the average Peter Blomgren Discrete Least Squares Approximation Approximation Theory Discrete Least Squares Disci Leash ares i Discrete Least Squares Discrete Least Squares Linear Approximation The form of Least Squares you are most likely to see Find the Linear Function p1x 30 i 31X that best fits the data Approximation Approximation Theory Di Di ele Least Squares Lea pares i r Dis 919 Least Squares Discrete Least Squares Linear Approximation The form of Least Squares you are most likely to see Find the Linear Function p1x 30 i 31X that best fits the data The error Eaoal we need to minimize is Eao7 a1 Z 30 alxi 7 fi2 i0 Squares Approx39 Approximation Theory Dis D Le err IS 2 7 Discrete Least Squares Discrete Least Squares Linear Approximation The form of Least Squares you are most likely to see Find the Linear Function p1x 30 l 31X that best fits the data The error Eaoal we need to minimize is Eao7 a1 Z 30 alxi 7 fi2 i0 The first partial derivatives with respect to 30 and 31 better be zero at the minimum a n 6730Eagal 2aoalxie o in 7 2ii 7 f a o 631 30731 7 i0Xr 3031Xr 7 7 r We massage these expressions to get the Normal Equations m Peter Blomgren Discrete Least Squares Approximation Approximation Theory Discrete Least Squares Disci Least ares i Discrete Least Squares Linear Approximation The Normal Equations in 30 a1Xi fi 0 in 30 31Xi fi 0 i0 Approximation Approximation Theory Discrete Least Squares Disci Least ares i Discrete Least Squares Linear Approximation The Normal Equations Approximation Approximation Theory Discrete Least Squares Disci Leash ares i Discrete Least Squares Linear Approximation The Normal Equations n n aon1 alZXi 2f i0 10 n n aoZXr Fauzjxi2 lefi ii i0 i0 Since everything except a0 and a1 is known this is a 2 by 2 system of equations Approximation Approximation Theory Di Di ele Least Squares Lea Myes i v Dis 919 Least Squares Linear Approximation The Normal Equations aon1 alzxi Zquot IO 39 10 n n n 2391fo 2m i70 i0 i0 Since everything except a0 and a1 is known this is a 2 by 2 system of equations n i 1 ZX Squares Approx39 i i Dis 919 Least Squares Approximation Theory Di ele Least Squares Di Lea aes Quadratic Model pgx For the quadratic polynomial p2x 30 l 31X l BQXZ the error is given by n 2 E307 317 32 Z 30 31Xi 32X2 fil i0 At the minimum best model we must have 8 n 8730Eao7 31 32 2 lg 30 l 31X l agxiz 7 O 8 n a aoiahaz 22M 3031Xi32X2fil 0 8 n E aoi 317 32 2 30 31Xi 32X fil 0 Squares Approx39 Approximation Theory Di ele Least Squares l Di Lea ares Dis 919 Least Squares Quadratic Model The Normal Equations Similarly for the quadratic polynomial p2x 30 l 31X l BQXZ the normal equations are n n n aon1 alZXi azzjxr2 15 i0 39 10 i0 n n n n 2 aozx m 2m i0 i0 i0 i0 n n n n aozxz D azzxi xz i0 i0 i0 i0 Note Even though the model is quadratic the resulting normal equations are linear The model is linear in its parame ters 30 al and 32 m Squares Approx39 Approximation Theory Discrete Least Squares Disci Leash ares i Discrete Least Squares The Normal Equations As Matrix Equations We have n a0n1 a2 n 0 aOZXi i0 r 0 i0 n n aozx D azzxi i0 i0 i0 Approximation Approximation Theory Di ele Least Squares 1 Di Lea ares Dis 919 Least Squares The Normal Equations As Matrix Equations We rewrite the Normal Equations as r1 r1 n 1 E X E X2 E f i0 10 10 r1 r1 r1 a0 r1 2 3 E X E X E X a1 E Xf 10 10 i0 32 i0 r1 r1 r1 r1 2 3 4 E X E X E X E Xf 10 10 10 10 It is not immediately obvious but this expression can be written in the form ATAS ATf Where the matrix A is very easy to write in terms of X Jump Forward Squares Approx39 Approximation Theory rele Least Squares Least 3995 hiscrete Least Squares The Polynomial Equations in Matrix Form We can express the mth degree polynomial pmx evaluated at the points X 3031Xi32Xi2quot393mXImfi7 i0n as a product of an n 1 by m l 1 matrix A and the m l 1 by 1 vector 5 and the result is the n l 1 by 1 vector usually n gt m 1 X0 X02 X6 1 1 X1 X12 le 30 f1 1 X2 X22 X5 31 f2 1 2 rn 3 X3 X3 X3 Approximation Theory Discrete Least Squares Dis elem Leash zqilai39es Discrete Least Squares Building a Solvable System from A5 We cannot immediately solve the linear system A5 when A is a rectangular matrix n 1 by m 1 m 7 n We can generate a solvable system by multiplying both the left and right hand side by AT ie ATAa AT Now the matrix ATA is a square m 1 by m l 1 matrix and ATf an m 1 by 1 vector Let39s take a closer look at ATA and ATE Peter Blomgren Discrete Least Squares Approximation Approximation Theory Discrete Least Squares Disci Least ares 1 1 X0 X1 2 2 X0 X1 m m X0 X1 1 X2 X2 X2 Computing ATA 0000 0000 I lllllll OOOO Xn i Discrete Least Squares Approximation Approximation Theory Discrete Least Squares Di Least ares 1 Discrete Least Squares Computing ATA 1 X X2 xm 1 1 1 1 1 0 g 9quot 1 X1 X1 x1 x x x x x 3 E i 3 Z 1 x2 x22 x39quot x0 xL x2 x3 X 2 3n 1 X3 X3 x3 Approximation Approximation Theory Discrete Least Squares Di Least ares 1 Discrete Least Squares Computing ATA 1 1 1 1 1 1 X0 X3 Xom 1 X X2 Xm X0 X1 X2 X3 Xn 1 1 12 1m X2 X2 X2 X2 X2 X2 X2 X2 0 1 2 3 n 1 X3 X32 X 39n m m m m m X0 X1 X2 X3 Xn 1 X X2 ij n n n1 o o 2 OXI O O O O 7 O O O O O O Approximation Approximation Theory Discrete Least Squares Di Least ares 1 Discrete Least Squares Computing ATA 1 X X2 xm 1 1 1 1 1 0 g 9quot 1 X1 X1 x1 x x x x x 3 E i Z Z 1 x2 x22 x quot X0 X1 X2 X3 Xn 2 3n 1 X3 X3 x3 rn rn rn rn X0 X1 X2 X3 Xn Xm J n 0000 0000 Approximation Approximation Theory Di ele Least Squares Di Lea prams 1 r Dis 919 Least Squares Computing ATA 1 1 1 1 1 1 X0 X3 X6 2 m 1 X1 X1 X1 X0 X1 X2 X3 Xn 1 X X2 Xm X X X X2 X2 2 2 2 0 1 2 3 39 39 39 1 2 m X3 X3 X3 Squares Approx39 Approximation Theory Di ele Least Squares Di Lea Myes 1 r Dis 919 Least Squares Computing ATf f0 n 1 1 1 1 1 f1 0 f X0 X1 X2 X3 Xn f2 Zi0 Xf 2 2 2 2 2 n 2 X0 X1 X2 X3 Xn f3 210 Xi fi m m m m m n I m on X1 X2 X3 Xn J Lfnj L i0Xi fIj We have recovered the Normal Equations Jump Back Squares Approx39 A Simple Powerful Approach Di La 5r 13905 Discrete Least Squares Discrete Least Squares A Simple Powerful Method Given the data set i where i X07X17 7Xr T and f 1 f1 7fnT we can quickly find the best polynomial fit for any specified polynomial degree Notation Let 52 be the vector X3747 XiiT Eg to compute the best fitting polynomial of degree 3 p3x 30 l 31X l 32X2 l agx3 define 2 i3 l 7 and compute 5 ATA 1ATf 4 4X1 Peter Blomgren Discrete Least Squares Approximation A Simple Powerful Approach Di e Squ Discrete Least Squares Discrete Least Squares Matlab Example used this code to generate the data for the plots on slide 2 x 0015 7 The xivector f 1xxA225 7 The underlying function n randnsizex 397 Random perturbations in fn 7 Add randomness A x onessizex 7 Build A for linear fit 7a AWAA f 397 Solve a 397 Better Equivalent Solve valua A f p1 polyvalax te A quadratic fit 397 E XA2 x onessizex 397 A for Za A AA f quot Solve a 39 397 Better Equivalent Solve 397 A f p2 polyvalax Evaluate Approximation A Simple Powerful Approach e 5qu Discrete Least Squares But I do not want to fit a polynomial Fitting an exponential model gx be to the given data d7 is quite straight forward Approximation A Simple Powerful Approach m e 5qu Discrete Least Squares But I do not want to fit a polynomial Fitting an exponential model gx be to the given data d7 is quite straight forward First re cast the problem as a set of linear equations We have be compute the natural logarithm on both sides lnb c Xlnd V V f a0 a1 f Approximation Appi39o umallmi Theory crate Laasl bquai39ns A Simple Powerful Approach Discrete Least Squares But I do not want to fit a polynomial Fitting an exponential model gx be to the given data a is quite straight forward First re cast the problem as a set of linear equations We have 07 397 be 7 1707n compute the natural logarithm on both sides lnb c X ln d V V f a0 a1 f Now we can apply a polynomial least squares fit to the problem and once we have at7 31 b e30 and c 31 Note This does not give the least squares fit to the original prob lem It gives us a pretty good estimate 53g Discrete Least Squares Approximation Peter Blomgren A Simple Powerful Approach Di La 5r 13905 Discrete Least Squares But That is not a True Least Squares Fit Note Fitting the modified problem does not give the least squares fit to the original problem In order to find the true least squares fit we need to know how to find roots andor minimamaXima of non linear systems of equations Feel free to sneak a peek at Burden Faires chapter 10 Unfortunately we do not have the time to talk about this here What we need Math693a Numerical Optimization Techniques Some of this stuff may show up in a different context in Math 562 Mathematical Methods of Operations Research m Peter Blomgren Discrete Least Squares Approximation Le Sq D we Lem Squares A Slmple Powerml Approach Homework 8 Due Friday 1242009 Part 1 BF 818 Is this a good prediction model WhyWhy not BFS 1 12 Discrete Least Squares Approxim Discrete Least Squares Application Cricket Tirermometer Cricket Thermometer Application Example source Joe Mahany There is a folk method of approximating the temperature in Fahrenheit This entered the scientific literature in 1896 by Dolbear with data collected by the Bessey brothers in 1898 The temperature is approximated from the rate of crickets chirping by taking the number of chirpsmin dividing by 4 and adding 40 Temperature a ChirpS per minute N 3 Peter Blomgren Discrete Least Squares Approximation Discrete Least Squares Cricket Data Analysis Application Cricket Tirermometer C A Bessey and E A Bessey collected data on eight different crickets that they observed in Lincoln Nebraska during August and September 1897 The number of chirpsmin was N and the tem peratu re was T Create matrices 1 N1 1 N1 A1 1 N2 A2 1 N2 1 N1 N12 N13 1 N1 A3 A4 1 N2 N12 N22 N12 N13 Ni N22 N23 N3 Peter Blomgrei Discrete Least Squares Approximation Discrete Least Squares Application Cricket Tirermometer Cricket Linear Model Ifyou compute the matrix which you never should T 7 52 7447 A1A1 lt7447 1133259 7 it has eigenvalues A1 30633 and A2 1133308 which gives the condition number contA1TA1 36996 X 10 Whereas condA1 6082462 In Matlab A1T gives the parameters for best linear model T1N 02155 N 397441 53g Peter Blomgren Discrete Least Squares Approximation Discrete Lem Squares Application Cricket Thermometer Polynomial Fits to the Data Linear Cricket Thermometer Temperam re Chirpslmin Linear Fit m I i Discrete Least Squares Application Cricket Tirermometer Cricket Quadratic Model Similarly the matrix 52 7447 1133259 AZTAZ 7447 1133259 18113 X 108 7 1133259 18113 X 108 30084 X 1010 has eigenvalues A1 001957 A2 42706 A3 3100853 X 1010 which gives the condition number contAZTAZ 15371 X 1011 Whereas condA2 319206 X1057 and A2T7 gives the parameters for best quadratic model T2N 70100064076 N2 039625 N 27184891 535 Peter Blomgren Discrete Least Squares Approximation D eke Lem Swans Aural3 m c Polynomial Fits to the Data Quadratic CricketThermometer Temperature 65 en 55 an en mu MEI an 22m chlrpsmln Quadratic Flt w w I l Discrete Least Squares Application Cricket Tirermometer Cricket Cubic and Quartic Models The condition numbers for the cubic and quartic rapidly get larger with condA3TA3 63648 X 1016 and consAIM 11218 X 1023 These last two condition numbers suggest that any coefficients obtained are highly suspect However if done right we are only subject to the conditon numbers condA3 2522 X 108 condA4 1738 X 1011 The best cubic and quartic models are given by EU T4N 00000018977 N3 7 0001445 N2 050540 N 23138 7000000001765 IV4 000001190 IV3 7 0003504 IV2 06876 N 17r314 Peter Blomgren Discrete Least Squares Approximation D eke Lem Swans Aural3 m c Polynomial Fits to the Data Cubic CricketThermometer Temperature an 22m MEI chlrpsmln Cubic Flt gt I l D eke Lem Swans Aural3 m c Polynomial Fits to the Data Quamc CricketThermometer Temperature 65 en 55 an en mu MEI an 22m chlrpsmln Quartlc Flt w w I l Discrete Least Squares Application Cricket Tirermometer Best Cricket Model So how does one select the best model Visually one can see that the linear model does a very good job and one only obtains a slight improvement with a quadratic Is it worth the added complication for the slight improvement It is clear that the sum of square errors SSE will improve as the number of parameters increase From statistics it is hotly debated how much penalty one should pay for adding parameters Peter Blomgren Discrete Least Squares Approximation w umunm h W313ng 0mme Cmm 1 Let n be the number of data points 55E be the sum of square errors and let k be the number of parameters in the model BIC nnEn knn mwm wmmm W AIC 2k nn27rEn1 Peter Blomgren Discrete Least Squares Application Cricket Tirermometer Best Cricket Model Analysis Continued The table below shows the by the Akaike information criterion that we should take a quadratic model while using a Bayesian Information Criterion we should use a cubic model Returning to the original statement we do fairly well by using the folk formula despite the rest of this analysis Peter Blomgren Discrete Least Squares Approximation Numerical Analysis and Computing Lecture Notes 15 Approximation Theory The Fast Fourier Transform with Applications Peter Blomgren ltblomgrenpeter gmail comgt Department of Mathematics and Statistics Dynamical Systems roup Computational Science Rsearch Center San Diego State University San Diego CA 921827720 httpterminussdsuedu Outline o Trigonometric Polynomials 0 Trigonometric Interpolation Introduction 0 Historical Perspective w the FFT 9 Applications of the FFT 0 Recap Notes 84 Historical Perspective 0 1080p Video Image Processing Fall 2009 m m The Fast Fourier Transform wAppiications 7 133 The Fast Fourier Transform wAppiications 7 233 Trigonometric Polynomials Least Squares 5 Interpolation Why use lnterpolatory Trigonometric Polynomials Last Time We used trigonometric polynomials ie linear InterPOIZi tion 0f large amounts 0f equally Spaced data by combinations of the functions trigonometric polynomials produces very good results close to 1 optimal cf Chebyshev interpolation 0X E S A ome Icatlons kX coskx7 k 1n pp DHAX sinkx k 1 n 7 1 0 Digital Filters Lowpass Bandpass Highpass 7 7 7 0 Signal processinganalysis to find least squares apprOXImations where n lt m to equally o Antenna design and analysis spaced data 2m pomts in the interval 77r at the node pomts Q t h o uan um mec anics Xj77rj7rmy012m71 I OPtICS This Time We Will find the lnterpolatory n m trigonometric 0 Spectral methods numerical solutions of equations polynomials and we Will figure out how to do it fast 0 Image processinganalysis 5331 5353 The Fast Fourier Transform wAppiications 7 333 The Fast Fourier Transform wAppiications 7 433 Interpolatory Trigonometric Polynomials Let X be 2m equally spaced node points in n7r and f fX the function values at these nodes We can find a trigonometric polynomial of degree m PX E Tm which interpolates the data Smx cosmx l 3lt coskx l bk sinkx7 k1 m1 10 2 1 2m ak i Z coskxj bk m 1 Z sinkxj 10 The only difference in this formula compared with the one corresponding to the least squares approximation Snx7 n lt m is the division by two of the am coefficient Where does the factor of 2 come from Finding the Factor of 2 Breakdown of the Lemma The factor of two comes from the failure of the second part of the lemma which we showed last time Lemma If the integer r is not a multiple of2m then 2m1 2m1 Z cosrxj Z sinnj O j0 j0 Moreover ifr is not a multiple of m then 2m1 Z cosrxj2 Sirllmjll2 m Now since n m we end up with one instance the cosij sum m where r m and the lemma fails m The Fast Fourier Transform wAppiications 533 The Fast Fourier Transform wAppiications 638 Finding the Factor of 2 Computing the Sum Historical Perspective New Since we are lnteVPOlatlngr the heels fundlon Much of the analysis was done by Jean Baptiste Joseph Fourier mX C0507 395 Part Of the set When we comPUte lt mv mgt in the early 18005 but the use of the Fourier series representation we need was not practical until 1965 2m 1 2 2m 1 1397 2 Why The straight forward implementation requires about 4m2 Z lcoslmxfll Z lcoslTlrm T In ll operations in order to compute the coefficients 5 and b j0 j0 m4 In 1965 Cooley and Tukey published a 4 page paper titled An Z cosw m7r2 algorithm for the machine calculation of complex Fourier seriesquot in 10 thejournal Mathematics of Computation The paper describes 2m1 an algorithm which computes the coefficients using only 120 quot Om log2 m operations 390 I It is hard to overstate the importance of this paper 2m1 1 m The algorithm is now known as the Fast Fourier Transform or 0 39 u H 1 m Just the FFT m The Fast Fourier Transform wAppiications 733 The Fast Fourier Transform wAppiications 333 Computing the FFT Computing the FFT Reduction of Operations 1 of 3 Instead of generating the coefficients 5 and b separately we define the complex coefficient ck m71kalt ibk and consider the Fourier transforms with complex coefficients 2m71 where ck Z ije kmm 2m71 smX E E Ckeler k0 j0 The reduction of the number of required operations come from the fact that for any n E Z e mr cosn7r I39Si 7r 1 Suppose m 2P for some p E Z then for k O7 17 m 7 1 2m71 Ckcmk Z 6eik7rjmeimk7rjm j0 2m71 Z fjeikwjm1 ei7rj39 j0 Using the fact that 2 ifj is even Mr 7 7 16 o ifjisodd only half the terms in the sum need to be computed ie m71 Ck Cmk 2 Z fzjelkWZJm j0 539 m The Fast Fourier Transform wAppiications 7 933 The Fast Fourier Transform wAppiications 7 1033 Computing the FFT Reduction of Operations 2 of 3 Computing the FFT Reduction of Operations 3 of 3 The new sums have the same structure as the initial sum We have m7 ck cm 2 Z fzjerkrrzim 10 In a similar way we can get m7 ck 7 cmk 2eik7rm E f2jlelllt7r21m39 j 0 We now need m m 1 complex multiplications for each k 01 7m 7 1 to compute these sums that is m2m 1 2m2 m operations We reduced the number of operations from 4m2 Big WhoopTM to 2m2 m m Observation so we can apply the same operationsreducing scheme again which reduces the 2m2 part of the operation count Our total operation count is down to m2 l 2m After repeating the same procedure r times we are down to operations m2 PMquot Since m 2quot we can keep going until r p 1 and we have 22p Fmp 1 2mpmm3mmlog2mOmlog2m operations 95 The Fast Fourier Transform wAppiications 7 1233 The Fast Fourier Transform wAppiioations 7 1133 Comparing Operation Counts 1 of 2 Comparing Operation Counts 2 of 2 m 4m2 3m m og2 m Speedup m 4m2 3mmlog m speedup 1048576 4398046511104 24117248 182361 2 16 1024 112 9 8388608 281474976710656 218103808 1290555 64 16384 576 28 256 262144 2816 93 1024 4194304 13312 315 m 87 3887 608 roughly corresponds to an 8 Megapixel camera 4096 67108864 61440 1092 16384 1073741824 278528 3855 If a 38GHz Pentium chip could perform one addition or 65536 17179369184 1245184 13797 multiplication per clock cycle which it can t we could compute 262144 274877906944 5505024 49932 1048576 4398046511104 24117248 182361 the Fourier coeffICIents for the 8 Megapixel image in 4194304 70368744177664 104857600 671088 8388608 281474976710656 218103808 1290555 Slow FT 16777216 1125899906842624 452984832 2485513 0057 seconds 20576 hours Each FFT secondquot translates roughly to 15 Slow FT daysquot 539 m The Fast Fourier Transform wAppiications 7 1338 The Fast Fourier Transform wAppiications 7 1438 Implementing the FFT The Fastest Fourier Transform in the West FFTW 1 of 2 Assigning implementation of the FFT falls in the category of cruel and unusual punishment Matlab implements fit the Fourier transform data 7 coefficients ifft the inverse Fourier transform coefficients 7 data and the helper functions fftshift ifftshift Shifting and unshifting the coefficient mostly for display purposes The 2 dimensional version i fft2 and n dimensional i fftn version of the FFT are also implemented m The Fast Fourier Transform wAppiioations 7 1533 httpwwwiftworg FFTW is a C Fortran subroutine library for computing the Discrete Fourier Transform DFT in one or more dimensions of both real and complex data and of arbitrary input size FFTW is free software distributed under the GNU General Public License Benchmarks performed on on a variety of platforms show that FFTW s performance is typically superior to that of other publicly available FFT software Moreover FFTW s performance is portable the program will perform well on most architectures without modification 5353 The Fast Fourier Transform wAppiioations 7 1638 The Fastest Fourier Transform in the West FFTW 2 of 2 Homework 9 Not Due Fall 2009 It is difficult to summarize in a few words all the complexities that arise when testing many programs and there is no best or quotfastestquot program However FFTW appears to be the fastest program most of the time for in order transforms especially in the multi dimensional and real complex cases Hence the name quotFFTWquot which stands for the somewhat whimsical title of quot Fastest Fourier Transform in the Westquot Please visit the benchFFT httpwwwfftworgbenchfft extensive survey of the results home page for a more The FFTW package was developed at MIT by Matteo Frigo and Steven G Johnson Matlab uses FFTW these days see help fftw m The Fast Fourier Transform wApplicalions 7 1733 Read the matlab help for fft ifft fftshift and ifftshift 1 Let xpipi8pi O1 Let f1cosx f2cos2x f3cos7x f4cos 8x f4cos9x g1sinx g2sin2x Compute the fft of these functions 2 Let f1cos2x Compute fftshiftltfftf Let g1sin3x Compute fftshift fft g 3 Use your observations from 1 and 2 to construct a lowpass filter Given a function f compute the fftf For a given keep only the coefficients corresponding to the N lowest frequencies set all the others to zero ifft the resu t Let f1cosXsin2x5cos7x and apply the above with N4 Plot both the initial and the filtered f 4 Use the FFT to determine the trigonometric interpolating polynomial of degree 8 for fX X2 cosX on 77r7r m The Fast Fourier Transform wApplicalions 7 1333 The Fast Fourier Transform Recap FFT Applications Figure Some additional reading Warn The Fast Fourier Transform wApplicalions 7 1933 The Fast Fourier Transform FFT is an Om log2m algorithm for computing the 2m complex coefficients ck in the discrete Fourier transform 1 2m71 27771 SmX E Z cke l x7 where ck Z ije kWJm k0 10 whereas the straight forward implementation of the sum would require 0m2 operations We noted last time that for a problem of size 223 87 388608 this reduced the computation time by a factor of a 2 weeks ie a computation can be done in 139 seconds using the FFTs would require 2t weeks to complete using the Slow FT 535 The Fast Fourier Transform wApplicalions 7 2033 Notes from mathworldwolframcom lof2 Notes from mathworldwolframcom 2 of 2 FFTs were first discussed by Cooley and Tukey 1965 although Gauss had actually described the critical factorization step as early as 1805 A discrete Fourier transform can be computed using an FFT if the number of points N is a power of two If the number of points N is not a power of two a transform can be performed on sets of points corresponding to the prime factors of N which is slightly degraded in speed An efficient rea Fourier transform algorithm or a fast Hartley transform gives a further increase in speed by approximately a factor of two Base 4 and base 8 fast Fourier transforms use optimized code and can be 20 30 faster than base 2 fast Fourier transforms Prime factorization is slow when the factors are large but discrete Fourier transforms can be made fast for N 2 3 4 5 7 8 11 13 16 using the Winograd transform algorithmquot this fact among others are used in the Fastest Fourier Transform in the West FFTW 539 m The Fast Fourier Transform wApplicalions 7 2133 The Fast Fourier Transform wApplicalions 7 2233 Example 1080p High Def Video Communications From www5pdeee5trathacuk 1 of 4 AfUll 108OP Hforanle has 1920X108O 23907339600 P39Xels39 At a In communications theory the signal is usually a voltage or a b39t39depth Of 24 bltsP39Xel39 3O framessecond39 3600 secondshour39 current and Fourier theory is essential to understanding how the and 2hoursmOV39e that Puts us at signal will behave when it passes through filters amplifiers and 1343692800000 bytesmovie 13 TBmovie communications channels Th BI f h h f H Even discrete digital communications which use O s or 139s to send e Hay ormat as t e O owmg Storage caPaClty information still have frequency contents This is perhaps easiest Blu ray to grasp in the case of trying to send a single square pulse down a Capacitylayer 25 GB channel La ers 2 y The field of communications over a vast range of applications from Total capaCIty 50 GB high level network management down to sending indIVIdual bits Minimum compre55ion 126 down a channel The Fourier transform is usually assoaated With these low level aspects of communications Some processmg is required m 5393 The Fast Fourier Transform wApplicalions 7 2333 The Fast Fourier Transform wApplicalions 7 2433 Communications From wwspdeee5trathacuk If we take simple digital pulse that is to be sent down a telephone line it will ideally look like this Voltage mV Tl2 quot m t If we take the Fourier transform of this to show what frequencies make up this signal we get something like fx sin 1 WW The Fast Fourier Transform wApplicalions Communications From wwspdeee5trathacuk Extending the example of the telephone line whenever you dial a number on a quot touch tonequot phone you hear a series of different 20f4 5393 7 2533 4of4 tones Each of these tones is composed of two different frequencies that add together to produce the sound you hear The Fourier transform is an ideal method to illustrate this as it shows these two frequencies eg Xfw w 2 The Fast Fourier Transform wApplicalions 5351 7 2733 Communications 3 of 4 From www5pdeee5trathacuk This means that the square pulse is a sum of infinite frequencies However if the telephone line only has a bandwidth of 10 MHZ then only the frequencies below 10 MHZ will get through the channel This will cause the digital pulse to be distorted eg Voltage m V Tiz 390 T12 t This fact has to be considered when trying to send large amounts of data down a channel because if too much is sent then the data will be corrupted by the channel and will be unusable m The Fast Fourier Transform wApplicalions 7 2638 Side Scan Sonar 1 of 3 From wwspdeee5trathacuk Another use of DSP techniques including DFTFFT is in sonar The example given here is side scan sonar which is a little different from the normal idea of sonar With this method a 65 kHz sound pulse is transmitted into the ocean toward the sea floor at an oblique angle Because the signal is transmitted at an oblique angle rather than straight down the reflected signal provides information about the inclination of the sea floor surface roughness and acoustic impedance of the sea floor It also highlights any small structures on the sea floor folds and any fault lines that may be present 5353 The Fast Fourier Transform wApplicalions 7 2333 Side Scan Sonar From www5pdeee5trathacuk 20f3 Side Scan Sonar From wwspdeeescrachacuk 3 Of 3 The image on the previous slide came from the United States Geological Survey USGS which is using geophysical Long Range ASDIC GLORIA sidescan sonar system to obtain a plan view of the sea floor of the Exclusive Economic Zone EEZ The picture element pixel resolution is approximately 50 meters The data are digitally mosaiced into image maps which are at a scale of 1500000 These mosaics provide the equivalent of quot aerial photographsquot that reveal many physiographic and geologic features of the sea floor To date the project has covered approximately 2 million square nautical miles of sea floor seaward of the shelf edge around the 30 coastal States Mapping is continuing around the American Flag Islands of the central and western Pacific Ocean m m The Fast Fourier Transform wApplicalions 7 2933 The Fast Fourier Transform wApplicalions 7 3033 Astronomy From www5pdeee5trathacuk 1 of 3 Astronomy From www5pdeee5trathacuk 2 of 3 Venus is Earth s closest companion and is comparable in it s size and diameter and yet scientists knew virtually nothing about it as it is perpetually covered with a cloud layer which normal optical telescopes can39t penetrate so Magellan a satellite had radar and advanced Digital Signal Processing that was designed to quotseequot through this cloud layer lt39s mission was map 70 of the planet with radar and to reveal surface features as small as 250 meters across The black and white pictures that it sent back were strips of the planets surface about 20km wide from the north pole to the south Pole Figure One example image of a surface feature called quotPandora CoronaH If you look at the image you will see a two black lines through the picture This is just a mismatch between the strips sent back by Magellan It also gives you an idea of the quot scale of the image as each strip is 20km wide The Fast Fourier Transform wApplicalions 7 3133 The Fast Fourier Transform wApplicalions 7 3233 Astronomy From www5pdeee5trathacuk 3 of 3 Example Image Processing 1 of 5 CEDRI 5 4 Figure John Tukey and his Fourier transform The coefficients corresponding to Figure A global map showing emissivity of the Venusis surface the low frequencies are in the center of the plot and the ones corresponding to the high frequencies are toward the edges The Fast Fourier Transform wApplications 7 3333 The Fast Fourier Transform wApplications 7 3433 Example Image Processing 25 Compression 2 of 5 Example Image Processing 25 Compression 3 of 5 e Figure Here we filter out N 75 of the Fourier coefficients and reconstruct the image emu 753 3 in mm The Fast Fourier Transform wApplicalions 7 3533 LL 71 an 753 3 in mm Figure Here we filter the N 25 of the Fourier coefficients with highest energy largest value in the absolute sense and reconstruct the image The Fast Fourier Transform wApplicalions 7 3638 Example Image Processing Highpass Filtering 4 of 5 Example Image Processing Lowpass Filtering 5 of 5 A I 4mm fan n in mm fan Figure Example of lowpass filtering the ima e We have filtered out the Fourier coefficients corresponding to the highest frequencies 993 See moms m The Fast Fourier Transform wApplications 7 3733 The Fast Fourier Transform wApplications 7 3333 Figure Example of highpass filtering the image We have filtered out the Fourier coefficients corresponding to the lowest frequencies

### BOOM! Enjoy Your Free Notes!

We've added these Notes to your profile, click here to view them now.

### You're already Subscribed!

Looks like you've already subscribed to StudySoup, you won't need to purchase another subscription to get this material. To access this material simply click 'View Full Document'

## Why people love StudySoup

#### "I was shooting for a perfect 4.0 GPA this semester. Having StudySoup as a study aid was critical to helping me achieve my goal...and I nailed it!"

#### "I bought an awesome study guide, which helped me get an A in my Math 34B class this quarter!"

#### "There's no way I would have passed my Organic Chemistry class this semester without the notes and study guides I got from StudySoup."

#### "It's a great way for students to improve their educational experience and it seemed like a product that everybody wants, so all the people participating are winning."

### Refund Policy

#### STUDYSOUP CANCELLATION POLICY

All subscriptions to StudySoup are paid in full at the time of subscribing. To change your credit card information or to cancel your subscription, go to "Edit Settings". All credit card information will be available there. If you should decide to cancel your subscription, it will continue to be valid until the next payment period, as all payments for the current period were made in advance. For special circumstances, please email support@studysoup.com

#### STUDYSOUP REFUND POLICY

StudySoup has more than 1 million course-specific study resources to help students study smarter. If you’re having trouble finding what you’re looking for, our customer support team can help you find what you need! Feel free to contact them here: support@studysoup.com

Recurring Subscriptions: If you have canceled your recurring subscription on the day of renewal and have not downloaded any documents, you may request a refund by submitting an email to support@studysoup.com

Satisfaction Guarantee: If you’re not satisfied with your subscription, you can contact us for further help. Contact must be made within 3 business days of your subscription purchase and your refund request will be subject for review.

Please Note: Refunds can never be provided more than 30 days after the initial purchase date regardless of your activity on the site.