Numerical Solution of Engineering and Scientific Problems (EAD 115)
These class notes (258 pages) were uploaded by Kiera Considine on Tuesday, September 8, 2015. They cover EAD 115 at the University of California, Davis, taught by David Rocke in Fall. Since upload, they have received 76 views.
EAD 115: Numerical Solution of Engineering and Scientific Problems
David M. Rocke, Department of Applied Science

Computer Representation of Numbers

Counting numbers (unsigned integers) are the numbers 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, ... In almost all computers these numbers are represented in binary (base 2) rather than decimal. We count 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, ...

In decimal, each digit multiplies a power of 10: for example, 86409 = 9×1 + 0×10 + 4×100 + 6×1000 + 8×10000. In binary, each bit multiplies a power of 2: for example, 10101101 (base 2) = 1×1 + 0×2 + 1×4 + 1×8 + 0×16 + 1×32 + 0×64 + 1×128 = 173.

Fixed-Length Integers

Data storage is generally in bytes, where 1 byte = 8 bits. With one-byte integers, the smallest integer that can be stored is 0 and the largest is 11111111 (base 2) = 2^8 − 1 = 255. Internet IP addresses consist of four bytes, so that no part of an IP address exceeds 255. UC Davis is 168.150.243.2; this address looks like this in binary: 10101000 10010110 11110011 00000010.

More Unsigned Integers

Two-byte, or 16-bit, short integers can represent any whole number from 0 to 65535. Long integers of four bytes, or 32 bits, can represent any whole number from 0 to 4294967295. If each disk block has an address of a long integer and each disk block has 4096 bytes, then the disk can hold 16 TB.

Application: Digital Audio

Uncompressed digital audio can be represented as a sequence of loudness levels. A pure tone has a sequence that evolves as a sine wave. The loudness levels can be represented as unsigned integers, giving all possible values. [Figure: a pure tone sampled as 6-bit audio.]

Sampling Rate

The sampling rate is the number of times per second that a loudness measure is taken. CDs are sampled 44100 times per second (44.1 kHz). Digital recordings are typically 44.1, 48, 96, or 192 kHz.

Word Length

8-bit audio has loudness levels that exist in 2^8 = 256 discrete levels; this is crude. 16-bit audio has 2^16 = 65536 loudness levels; this is what is used for CDs. Audio is often now recorded in 24-bit audio, which has 16777216 levels and is difficult to distinguish from the smooth original.
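As a quick check of the base conversions above, here is a small sketch (mine, not part of the notes) using Python's built-in base-2 formatting:

```python
# Sketch: checking the binary examples above with Python's base-2 formatting.

# 173 in binary is 10101101 = 128 + 32 + 8 + 4 + 1.
assert format(173, "08b") == "10101101"

# Each part of an IPv4 address fits in one unsigned byte (0..255).
octets = [168, 150, 243, 2]            # the UC Davis address from the notes
binary = " ".join(format(o, "08b") for o in octets)
print(binary)   # 10101000 10010110 11110011 00000010
```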
Loudest Sound

In 16-bit audio, the loudest sound that can be recorded has a numerical value of 65535. If the input in a recording goes over this level, it is still recorded at 65535, which leads to distorted sound. [Figure: a pure tone recorded with no headroom, showing clipping.]

Signed Integers

One bit is reserved for the sign of the number, so 16-bit signed integers can represent any whole number from −32767 to 32767.

Integer Overflow

Suppose we are using one-byte signed integers, which can represent any whole number from −127 to 127. What happens when we add 100 and 100? The answer should be 200, but 1100100 + 1100100 = 11001000, which with its sign bit needs nine bits, so it is probably truncated to 1001000, or 72.

Decimal Numbers

Decimal numbers (floating point, as opposed to fixed point) are represented in scientific notation: 1437526 = 0.1437526×10^7, with exponent 7 and mantissa 0.1437526. We represent this in binary on a computer as a sign bit, a signed exponent, and a mantissa. Typical single/double precision uses 1 sign bit, 8/11 exponent bits, and a 23/52-bit mantissa.

Hypothetical 7-bit Reals

1 sign bit, 3 exponent bits (one for the sign of the exponent, two for its magnitude), 3 mantissa bits. The mantissa is normalized to be between 0.5 and 1 to avoid wasting bits: we don't want to use a mantissa of 001 when we could use a mantissa of 100 instead, since 100 and 101, for example, look the same when truncated.

Smallest Positive Number

Sign 0 (positive); exponent sign 1 (negative); exponent magnitude 11 (base 2) = 3 in decimal. The smallest normalized mantissa is 100 (the next smallest, 011, has a leading 0); 100 represents 2^−1 = 0.5 in decimal. The smallest positive number is therefore 0.5×2^−3 = 2^−4 = 1/16. If we divide this by 2, we get 0: underflow.

Largest Positive Number

Sign 0 (positive); exponent sign 0 (positive); exponent magnitude 11 (base 2) = 3 in decimal. The largest mantissa is 111 (base 2) = 2^−1 + 2^−2 + 2^−3 = 0.875 in decimal. The largest positive number is therefore 0.875×2^3 = 7. If we multiply this by 2, we get overflow.
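The integer-overflow example above can be simulated. The notes describe a sign-magnitude truncation that yields 72; two's-complement hardware, by contrast, wraps to −56. A sketch (assumptions mine):

```python
# Sketch: emulating one-byte integer overflow. Python integers never
# overflow, so we keep only the low 8 bits, as an 8-bit register would.
def to_int8(n):
    n &= 0xFF                             # keep the low 8 bits
    return n - 256 if n >= 128 else n     # two's-complement interpretation

print(to_int8(100 + 100))                 # -56: wraparound, not 200

# The notes' sign-magnitude story instead drops the ninth (high) bit:
print(0b11001000 & 0b01111111)            # 72
```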
Many Numbers Cannot Be Represented Exactly

In our 7-bit reals, 1/3 is stored as 0.375 instead of 0.3333..., because that is as close as the representation can get. When multiplied by 3, the result is 1.125 instead of 1.

Limitations of Floating Point

There is a limited range of quantities that can be represented; there is only a finite number of quantities that can be represented in a given range; and numbers that cannot be represented exactly are chopped (truncated) or rounded.

Machine Epsilon

Machine epsilon is the largest computer number ε such that (1 + ε) − 1 = 0. Excel uses double precision, which has a 52-bit mantissa; 2^−52 ≈ 10^−16, so machine epsilon is about that size. Some Excel arithmetic:

  a        (1 + a) − 1
  1E−13    1E−13
  1E−14    0.999E−14
  5E−15    5.11E−15
  1E−15    0

Precision and Accuracy

Precision means the variability between estimates; accuracy means the amount of deviation between the estimate and the true value. [Figure: targets illustrating increasing precision and increasing accuracy.]

Errors of Approximation

True value = approximation + error, so the true error is E_T = (true value) − (approximation). The relative error is ε_T = E_T / (true value), and the absolute relative error is |ε_T|.

Approximation and Series Truncation

  e^x = 1 + x + x^2/2! + x^3/3! + ...

Truncating the series, e.g. e^x ≈ 1 + x + x^2/2! + x^3/3!, introduces truncation error.

Roundoff Error

Roundoff error results from the approximate representation of numbers in a computer. It accumulates over many computations and is aggravated by the addition or subtraction of small and large numbers. For example, the sample variance can be computed two ways:

  s^2 = Σ(x_i − x̄)^2 / (n − 1)
      = [Σx_i^2 − 2x̄Σx_i + n x̄^2] / (n − 1)
      = [Σx_i^2 − n x̄^2] / (n − 1)

The last form (the one-pass "shortcut") subtracts two large, nearly equal quantities.

Shortcut or Mistake?

The variance of the data set 1, 2, 3, 4, 5 is 2.5. The variance of the data set 100000001, ..., 100000005 is the same, because the spacing has not changed; but the shortcut formula in Excel gives 2 for this variance. If the sequence starts at 1000000001, the variance by the shortcut is 0.

Taylor's Theorem

We can often approximate a function by a polynomial; the error in the approximation is related to the first omitted term. There are several forms for the error:

  f(x) = f(a) + f'(a)(x − a) + f''(a)(x − a)^2/2! + ... + f^(n)(a)(x − a)^n/n! + R_n
  R_n = f^(n+1)(ξ)(x − a)^(n+1) / (n + 1)!   for some ξ between a and x

or, with step size h,

  f(x + h) = f(x) + f'(x)h + f''(x)h^2/2! + ... + f^(n)(x)h^n/n! + R_n
  R_n = f^(n+1)(ξ)h^(n+1) / (n + 1)!
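The "shortcut or mistake" example above can be demonstrated directly; this sketch (mine) compares the two-pass formula with the one-pass shortcut in double precision:

```python
# Sketch: catastrophic cancellation in the one-pass "shortcut" variance.
def var_two_pass(xs):
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def var_shortcut(xs):
    n = len(xs)
    sum_sq = sum(float(x) * x for x in xs)   # force double precision
    sum_x = sum(float(x) for x in xs)
    return (sum_sq - sum_x ** 2 / n) / (n - 1)

small = [1, 2, 3, 4, 5]
big = [100000000 + x for x in small]         # same spacing, shifted by 1e8

print(var_two_pass(small), var_shortcut(small))   # 2.5 2.5
print(var_two_pass(big), var_shortcut(big))       # 2.5 2.0
```

The two-pass formula is exact for the shifted data; the shortcut loses the answer in the subtraction of two numbers near 5×10^16.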
Series Truncation Error

In general, the more terms in a Taylor series, the less error; the smaller the step size h, the less error; and the smoother the function, the less error. The error is O(h^(n+1)), so halving the step size should result in a reduction of error on the order of 2^(n+1). [Figure: zero-order and first-order Taylor approximations of f(x_{i+1}) from x_i.]

Taylor Series Approximation of a Polynomial

  f(x) = −0.1x^4 − 0.15x^3 − 0.5x^2 − 0.25x + 1.2

Expanding about a = 0: f(0) = 1.2, f'(0) = −0.25, f''(0) = −1, f'''(0) = −0.9, f''''(0) = −2.4, and f^(n)(0) = 0 for n > 4, so

  f(x) = 1.2 − 0.25x − (1/2!)x^2 − (0.9/3!)x^3 − (2.4/4!)x^4
       = 1.2 − 0.25x − 0.5x^2 − 0.15x^3 − 0.1x^4,

the polynomial itself. Expanding about a = 1: f(1) = 0.2, f'(1) = −2.1, f''(1) = −3.1, f'''(1) = −3.3, f''''(1) = −2.4, and f^(n)(1) = 0 for n > 4, so

  f(x) = 0.2 − 2.1(x − 1) − (3.1/2!)(x − 1)^2 − (3.3/3!)(x − 1)^3 − (2.4/4!)(x − 1)^4.

The second-order approximation about a = 1 is f_2(x) = 0.2 − 2.1(x − 1) − 1.55(x − 1)^2.

Approximating Polynomials

Any fourth-degree polynomial has a fifth derivative that is identically zero. The remainder term for the order-four Taylor series contains the fifth derivative at a point; thus the order-four Taylor series approximation is exact: it is the polynomial itself. The Taylor approximation of order n to a function f(x) at a point a is the best polynomial approximation to f at a in the following sense: it is a polynomial; it is of order n or less (no terms higher than x^n); and it matches the value and first n derivatives of f at a.

Taylor Series and the Euler Method

For the parachutist model dv/dt = g − (c/m)v, the first-order Taylor expansion

  v(t_{i+1}) = v(t_i) + v'(t_i)(t_{i+1} − t_i) + R_1

gives Euler's method,

  v(t_{i+1}) ≈ v(t_i) + [g − (c/m)v(t_i)](t_{i+1} − t_i),

with per-step truncation error R_1 = (1/2)v''(ξ)(t_{i+1} − t_i)^2 = O(h^2).

Nonlinearity and Step Size

For the first-order Taylor approximation, the more nearly linear the function is, the better the approximation; and the smaller the step size, the better the approximation. For f(x) = x^m, f'(x) = m x^(m−1) and f(x + h) = f(x) + f'(x)h + R_1 with R_1 = (1/2)f''(ξ)h^2. [Figure: first-order approximations of x^m at two step sizes.]
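A sketch (mine) of the Taylor example above: the order-4 expansion about a = 1 reproduces the degree-4 polynomial exactly, while lower orders only approximate it.

```python
# Sketch: Taylor expansion of f(x) = -0.1x^4 - 0.15x^3 - 0.5x^2 - 0.25x + 1.2
# about a = 1, using the hand-computed derivative values listed above.
from math import factorial

def f(x):
    return -0.1*x**4 - 0.15*x**3 - 0.5*x**2 - 0.25*x + 1.2

a = 1.0
derivs = [0.2, -2.1, -3.1, -3.3, -2.4]   # f, f', f'', f''', f'''' at a = 1

def taylor(x, order):
    return sum(derivs[k] * (x - a)**k / factorial(k) for k in range(order + 1))

print(round(taylor(0.0, 4), 10), round(f(0.0), 10))   # 1.2 1.2 (order 4 is exact)
print(round(taylor(2.0, 2), 4), round(f(2.0), 4))     # -3.45 -4.1 (order 2 is not)
```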
Numerical Differentiation

From the Taylor expansion f(x_{i+1}) = f(x_i) + f'(x_i)h + (1/2)f''(x_i)h^2 + O(h^3), with h = x_{i+1} − x_i, solving for f'(x_i) gives the difference approximations below.

First forward difference:

  f'(x_i) ≈ [f(x_{i+1}) − f(x_i)] / h,   error O(h).

First backward difference:

  f'(x_i) ≈ [f(x_i) − f(x_{i−1})] / h,   error O(h).

First centered difference: subtracting the expansions of f(x_{i+1}) and f(x_{i−1}) about x_i cancels the even-order terms, so

  f'(x_i) ≈ [f(x_{i+1}) − f(x_{i−1})] / (2h),   error O(h^2).

Example: f(x) = −0.1x^4 − 0.15x^3 − 0.5x^2 − 0.25x + 1.2, estimating f'(0.5); the true value is −0.9125.

With h = 0.5: f(0) = 1.2, f(0.5) = 0.925, f(1) = 0.2.
  Forward: (0.2 − 0.925)/0.5 = −1.45, |ε_T| = 58.9%.
  Backward: (0.925 − 1.2)/0.5 = −0.55, |ε_T| = 39.7%.
  Centered: (0.2 − 1.2)/1 = −1.0, |ε_T| = 9.6%.

With h = 0.25: f(0.25) = 1.10351563, f(0.75) = 0.63632813.
  Forward: (0.63632813 − 0.925)/0.25 = −1.155, |ε_T| = 26.5%.
  Backward: (0.925 − 1.10351563)/0.25 = −0.714, |ε_T| = 21.7%.
  Centered: (0.63632813 − 1.10351563)/0.5 = −0.934, |ε_T| = 2.4%.

Summary of Example (absolute relative error)

             h = 0.5   h = 0.25
  Forward     0.589     0.265
  Backward    0.397     0.217
  Centered    0.096     0.024

Second Forward Difference

  f''(x_i) ≈ Δ^2 f(x_i) / h^2 = [f(x_{i+2}) − 2f(x_{i+1}) + f(x_i)] / h^2

In operator notation, the forward difference is Δf_i = f_{i+1} − f_i and the backward difference is ∇f_i = f_i − f_{i−1}; then Δ^2 f_i = f_{i+2} − 2f_{i+1} + f_i and ∇^2 f_i = f_i − 2f_{i−1} + f_{i−2}.

Propagation of Error

Suppose we have an approximation x̃ = x + ε of the quantity x, and we then transform it by a function f. How is the error in f(x̃) related to the error in x̃? To first order,

  f(x̃) ≈ f(x) + f'(x)ε.

If the error is bounded, |ε| < B, then |f(x̃) − f(x)| is bounded approximately by |f'(x)|B. If the error is random with standard deviation σ, then SD(f(x̃)) ≈ |f'(x)|σ. For a function of several inputs, f(x̃_1, x̃_2) ≈ f(x_1, x_2) + f_1(x_1, x_2)ε_1 + f_2(x_1, x_2)ε_2, where f_1 and f_2 are the partial derivatives; if the errors are bounded by B, the output error is bounded approximately by [|f_1(x_1, x_2)| + |f_2(x_1, x_2)|]B.

Stability and Condition

If small changes in the input produce large changes in the answer, the problem is said to be ill-conditioned or unstable. Numerical methods should be able to cope with ill-conditioned problems; naive methods may not meet this requirement.
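The error table above can be reproduced with a few lines (a sketch, mine):

```python
# Sketch: forward, backward, and centered difference errors for f'(0.5).
def f(x):
    return -0.1*x**4 - 0.15*x**3 - 0.5*x**2 - 0.25*x + 1.2

true = -0.9125          # f'(0.5), computed analytically
x = 0.5
for h in (0.5, 0.25):
    forward = (f(x + h) - f(x)) / h               # O(h)
    backward = (f(x) - f(x - h)) / h              # O(h)
    centered = (f(x + h) - f(x - h)) / (2 * h)    # O(h^2)
    errs = [abs((true - e) / true) for e in (forward, backward, centered)]
    print(h, [round(e, 3) for e in errs])
# 0.5 [0.589, 0.397, 0.096]
# 0.25 [0.265, 0.217, 0.024]
```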
With input x̃ = x + ε, the relative error of the input is ε/x, and the error of the output is f(x̃) − f(x) ≈ εf'(x), so the relative error of the output is εf'(x)/f(x). The ratio of the output relative error to the input relative error, the condition number, is therefore x f'(x)/f(x).

EAD 115: Numerical Solution of Engineering and Scientific Problems
David M. Rocke, Department of Applied Science

Curve Fitting

Given a set of n points (x_i, y_i), find a fitted curve that provides a fitted value y = f(x) for each value of x in a range. The curve may interpolate the points (go through each one, either linearly or nonlinearly), or may approximate the points without going through each one, as in least-squares regression.

Simple Linear Regression

We have a set of n data points, each of which has a measured predictor x and a measured response y. We wish to develop a prediction function f(x) for y. In the simplest case we take f(x) to be a linear function of x, as in f(x) = a_0 + a_1 x.

Criteria and Estimation

If we have one point, say (1, 1), then many lines fit perfectly: f(x) = x, f(x) = 2x − 1, f(x) = x^2. If there are two points, say (1, 1) and (2, 3), then in general there is exactly one line going through the points: f(x) = 2x − 1. If there are more than two points, then in general there is no straight line through all of them. These problems are, respectively, underdetermined, determined, and overdetermined.

Reasonable criteria for choosing the coefficients a_0 and a_1 in f(x) = a_0 + a_1 x depend on the residuals r_i = y_i − f(x_i) = y_i − a_0 − a_1 x_i; but how should different residuals be rated? The least-squares criterion minimizes

  SS = Σ r_i^2 = Σ (y_i − f(x_i))^2.

There are many other possible criteria: minimize the sum of the residuals, minimize the sum of the absolute values of the residuals, or minimize the maximum residual, each with drawbacks (the minimax criterion, for example, is dominated by an outlier). Use of the least-squares criterion does not imply any beliefs about the data. Use of the linear form for f(x) assumes that the straight-line relationship is reasonable. Assumptions are needed for inference about the predictions or about the relationship itself.

Computing the Least-Squares Solution

We wish to minimize the sum of squares of deviations from the regression line by choosing the
coefficients a_0 and a_1 accordingly. Since SS(a_0, a_1) is a continuous quadratic function of the coefficients, one can simply set the partial derivatives equal to zero:

  SS(a_0, a_1) = Σ (y_i − a_0 − a_1 x_i)^2
  ∂SS/∂a_0 = −2 Σ (y_i − a_0 − a_1 x_i) = 0
  ∂SS/∂a_1 = −2 Σ (y_i − a_0 − a_1 x_i) x_i = 0

which yields the normal equations

  n a_0 + a_1 Σ x_i = Σ y_i
  a_0 Σ x_i + a_1 Σ x_i^2 = Σ x_i y_i

These normal equations have a unique solution as two equations in two unknowns. The straight line that is calculated in this way is used in practice to see if there is a relationship between x and y. It is also used to predict y from x, and it can be used to predict x from y by inverting the equation. We now look at some practical uses of least squares.

Quantitative Prediction

Regression analysis is the statistical name for the prediction of one quantitative variable (fasting blood glucose level) from another (body mass index). Items of interest include whether there is in fact a relationship, and what the expected change in one variable is when the other changes.

Assumptions

Inference about whether there is a real relationship or not depends on a number of assumptions, many of which can be checked. When these assumptions are substantially incorrect, alterations in method can rescue the analysis. No assumption is ever exactly correct.

Linearity

This is the most important assumption. If x is the predictor and y is the response, then we assume that the average response for a given value of x is a linear function of x:

  y = a + bx + ε,   E(y) = a + bx,

where ε is the error or variability. In general it is important to get the model right, and the most important of these issues is that the mean function looks like what is specified. If a linear function does not fit, various types of curves can be used, but what is used should fit the data; otherwise predictions are biased. [Figures: regression when the assumptions are satisfied; regression with nonlinearity.]

Independence

It is assumed that different observations are statistically independent.
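The normal equations above can be solved directly. A sketch (mine), using data consistent with the fluorescein calibration fit shown later in these notes (the raw data values here are assumed):

```python
# Sketch: solving the two normal equations for the least-squares line.
# Data assumed to match the fluorescein fit intensity ~ 1.5179 + 1.9304*conc.
conc = [0, 2, 4, 6, 8, 10, 12]
intensity = [2.1, 5.0, 9.0, 12.6, 17.3, 21.0, 24.7]

n = len(conc)
sx, sy = sum(conc), sum(intensity)
sxx = sum(x * x for x in conc)
sxy = sum(x * y for x, y in zip(conc, intensity))

a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
a0 = (sy - a1 * sx) / n                          # intercept
print(round(a1, 6), round(a0, 6))                # 1.930357 1.517857
```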
If this is not the case, inference and prediction can be completely wrong: there may appear to be a relationship even though there is not. Randomization and control prevent this in general. [Figure: lack of independence, with no real relationship between x and y; the data were generated as x_1 = y_1 = 0, x_{i+1} = 0.95x_i + ε_i, y_{i+1} = 0.95y_i + η_i.]

Constant Variance

Constant variance, or homoscedasticity, means that the variability is the same in all parts of the prediction function. If this is not the case, the predictions may be on the average correct, but the uncertainties associated with the predictions will be wrong. Heteroscedasticity is non-constant variance.

Confidence and Prediction Limits

[Figure: regression line with the confidence interval for the line and the wider prediction interval for new observations.]

Consequences of Heteroscedasticity

Predictions may be unbiased (correct on the average), but prediction uncertainties are not correct: too small sometimes, too large others. Inferences are incorrect (is there any relationship, or is it random?).

Normality of Errors

Mostly this is not particularly important. Very large outliers can be problematic. Graphing data often helps, but if in a gene expression array experiment we do 40,000 regressions, graphical analysis is not possible; significant relationships should be examined in detail. [Figure: consequences of outliers.]

Example Analysis

Standard aqueous solutions of fluorescein (in µg/ml) are examined in a fluorescence spectrometer and the intensity (arbitrary units) is recorded. What is the relationship of intensity to concentration? The fitted line can be used later to infer the concentration of a labeled analyte. The Stata output from "regress intensity concentration" is:
  Source       |      SS      df       MS          Number of obs =       7
  -------------+------------------------------     F(1, 5)       = 2227.53
  Model        | 41734.3228    1   41734.3228      Prob > F      =  0.0000
  Residual     | 93.6784731    5   18.7356946      R-squared     =  0.9978
  -------------+------------------------------     Adj R-squared =  0.9973
  Total        | 41828.0013    6   6971.33355      Root MSE      =  4.3285

  intensity     |    Coef.   Std. Err.     t     P>|t|    [95% Conf. Interval]
  --------------+------------------------------------------------------------
  concentration | 1.930357   .0409002    47.20   0.000     1.82522    2.035495
  _cons         | 1.517857   .2949358     5.15   0.004    .7597003    2.276014

The coefficient on concentration is the slope; _cons is the intercept, the intensity at zero concentration; the F statistic is the test of the overall model; and the Root MSE measures the variability around the regression line. [Figure: scatterplot of intensity vs. concentration with the fitted regression line.]
[Figure: residuals vs. fitted values.]

Use of the Calibration Curve

The fitted line is ŷ = 1.52 + 1.93x, where ŷ is the predicted average intensity and x is the true concentration. Given an observed intensity y, the estimated concentration is x̂ = (y − 1.52)/1.93.

Measurement and Calibration

Essentially all things we measure are measured indirectly. The thing we wish to measure produces an observed (transduced) value that is related to the quantity of interest but is not itself directly the quantity of interest. Calibration takes known quantities, observes the transduced values, and uses the inferred relationship to quantitate unknowns.

Measurement Examples

Weight is observed via deflection of a spring (calibrated). Concentration of an analyte in mass spectrometry is observed through the electrical current integrated over a peak (possibly calibrated). Gene expression is observed via fluorescence of a spot to which the analyte has bound (usually not calibrated).

Measuring Variation

If we do not use any predictor, the variability of y is its variance, the mean square difference between y and the mean of all the y's. If we use a predictor, the variability is the mean square difference between y and its prediction:

  MST = Σ(y_i − ȳ)^2 / (n − 1)
  MSE = Σ(y_i − ŷ_i)^2 / (n − 2) = Σ(y_i − a_0 − a_1 x_i)^2 / (n − 2)
  SST = SSR + SSE

For the fluorescein example: Model SS = 41734.3228 (df 1), Residual SS = 93.6784731 (df 5), Total SS = 41828.0013 (df 6).

Multiple Regression

If we have more than one predictor, we can still fit the least-squares equations, so long as we don't have more coefficients than data points. This involves solving the normal equations as a matrix equation

  y = Xβ + ε

with n data points and p predictors (including the constant): y is n×1, X is n×p, β is p×1, and ε is n×1.

Linearization of Nonlinear Relationships

We can fit a curved relationship with a polynomial: the relationship f(x) = a_0 + a_1 x + a_2 x^2 can be treated as a problem with two predictors, x and x^2. This can then be dealt with as any multiple regression problem.
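A small sketch (mine) of the calibration-curve inversion above:

```python
# Sketch: predict intensity from concentration, then invert the calibration
# line to estimate concentration from an observed intensity.
a0, a1 = 1.52, 1.93          # intercept and slope of the fitted line

def predict_intensity(conc):
    return a0 + a1 * conc

def estimate_conc(intensity):
    return (intensity - a0) / a1

y = predict_intensity(5.0)
print(round(estimate_conc(y), 6))   # 5.0: the round trip recovers x
```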
Sometimes a nonlinear relationship can be linearized by a transformation of the response and the predictor. Often this involves logarithms, but there are many possibilities:

  y = a e^(bx)       →   ln y = ln a + bx
  y = a x^b          →   ln y = ln a + b ln x
  y = a x / (b + x)  →   1/y = 1/a + (b/a)(1/x)

Intrinsic Nonlinearity

We can still solve the least-squares problem even if f(x) is not linear in the parameters. We do this by approximate linearization at each step (the Gauss-Newton method); there are other, more effective methods, but these are beyond our scope. Write the model as y_i = f(x_i; a_0, a_1) + ε_i and expand f around the current estimates (a_0j, a_1j):

  f(x_i; a_0, a_1) ≈ f(x_i; a_0j, a_1j) + (∂f/∂a_0)Δa_0 + (∂f/∂a_1)Δa_1

so the residuals y_i − f(x_i; a_0j, a_1j) can be regressed on the partial derivatives to solve for Δa_0 and Δa_1 by linear least squares. Repeat until convergence.
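A sketch (mine) of the transformation approach above: fitting y = a e^(bx) by regressing ln y on x. The data are generated with a = 2, b = 0.5, so the fit recovers them exactly.

```python
# Sketch: linearizing y = a*exp(b*x) via ln y = ln a + b*x.
import math

xs = [0, 1, 2, 3, 4]
ys = [2.0 * math.exp(0.5 * x) for x in xs]   # generated with a = 2, b = 0.5

lny = [math.log(y) for y in ys]
n = len(xs)
sx, sy = sum(xs), sum(lny)
sxx = sum(x * x for x in xs)
sxy = sum(x * v for x, v in zip(xs, lny))

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope estimates b
a = math.exp((sy - b * sx) / n)                 # exp(intercept) estimates a
print(round(a, 6), round(b, 6))                 # 2.0 0.5
```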
EAD 115: Numerical Solution of Engineering and Scientific Problems
David M. Rocke, Department of Applied Science

One-Dimensional Unconstrained Optimization

Given a function f(x), find its maximum value or its minimum value. We may not know how many maxima f has. We need methods of finding local maxima, and methods of attempting to find the global maximum, though this can be difficult. [Figure: a function with global and local maxima and minima.]

Global and Local Optimization

We will mainly discuss ways of finding local optima. One method of attempting to find the global optimum is to locate local optima repeatedly from diverse, often random, starting points. This can be surprisingly effective considering how simple a method it is. Other methods for global optimization include simulated annealing, Markov chain Monte Carlo, genetic and other evolutionary algorithms, and tabu search. These methods are more complicated than repeated restarted optimum finding, but not necessarily more effective. Biological or physical analogy does not guarantee good performance; hard work and careful tuning are required, and there are no magic black boxes.

Bracket Methods

Suppose we have an interval that is thought to contain a single maximum of a function f(x), so that f(x) is increasing from the lower end to the maximum and decreasing from the maximum to the upper end. We want a method similar to bisection for solving this problem.

Adding New Points

In bisection we add one additional new point in the middle and pick either the left or the right interval based on which one has the function changing sign. This does not provide enough information for finding a maximum; we will need at least two additional points. [Figure: with two interior points, the end interval beyond the lower of the two function values cannot contain the maximum.]

Golden Section Search

If the two new points divide the interval evenly, a function evaluation is wasted at each step. Golden section search uses function evaluations efficiently: the interior points are placed a fraction

  r = (√5 − 1)/2 ≈ 0.61803

of the interval width from each end, so that one interior point of the current interval can be reused as an interior point of the next interval.

Importance of the Number of Function Evaluations

In small problems this does not matter. Reduced function evaluations mean faster performance, which matters if a large number of optimizations needs to be performed, or if one function evaluation is expensive in computation time.

Error Analysis for Golden Section Search

At the end of each iteration we have an interval that is known to contain the optimum. We analyze the case where the left-hand interval is discarded; the other case is symmetric. The old points are x_l, x_2, x_1, and x_u; the new points are x_2, x_1, and x_u, and x_1 is the guess. The guess is within (2r − 1)(x_u − x_l) ≈ 0.236(x_u − x_l) of x_2 and within (1 − r)(x_u − x_l) ≈ 0.382(x_u − x_l) of x_u, so the error of the guess is bounded by 0.382(x_u − x_l).

Example

f(x) = 2 sin x − x^2/10, initial interval [0, 4]. Golden section search iterations (points in increasing order x_l < x_2 < x_1 < x_u, with function values):

  x_l     f       x_2     f       x_1     f       x_u     f
  0       0       1.5279  1.7647  2.4721  0.6300  4.0000  −3.114
  0       0       0.9443  1.5310  1.5279  1.7647  2.4721  0.6300
  0.9443  1.5310  1.5279  1.7647  1.8885  1.5432  2.4721  0.6300
  0.9443  1.5310  1.3050  1.7595  1.5279  1.7647  1.8885  1.5432
  1.3050  1.7595  1.5279  1.7647  1.6656  1.7136  1.8885  1.5432
  1.3050  1.7595  1.4427  1.7755  1.5279  1.7647  1.6656  1.7136
  1.3050  1.7595  1.3901  1.7742  1.4427  1.7755  1.5279  1.7647
  1.3901  1.7742  1.4427  1.7755  1.4752  1.7732  1.5279  1.7647

Quadratic Interpolation

If golden section search is analogous to bisection, then the equivalent of linear interpolation (false position) is quadratic interpolation. We approximate the function over the interval by a quadratic (a parabola) and find the maximum of the quadratic. This requires three points instead of two. [Figure: the maximum of the quadratic approximation vs. the true maximum of the function.] Given three points (x_0, f(x_0)), (x_1, f(x_1)), (x_2, f(x_2)), the quadratic joining them has its extremum at

  x_3 = [f(x_0)(x_1^2 − x_2^2) + f(x_1)(x_2^2 − x_0^2) + f(x_2)(x_0^2 − x_1^2)]
        / [2f(x_0)(x_1 − x_2) + 2f(x_1)(x_2 − x_0) + 2f(x_2)(x_0 − x_1)]

The initial endpoints are x_0 and x_2; the initial middle point is x_1. The new middle point, and the guess for the optimum, is x_3. Discard one of x_0 or x_2 using the same rule as golden section search; the error is usually estimated by the change in the estimate. For the same example:

  x_0     x_1     x_2     x_3     f(x_3)
  0.0000  1.0000  4.0000  1.5055  1.7691
  1.0000  1.5055  4.0000  1.4903  1.7714
  1.0000  1.4903  1.5055  1.4256  1.7757
  1.0000  1.4256  1.4903  1.4266  1.7757
  1.4256  1.4266  1.4903  1.4275  1.7757

Newton's Method

This is an open rather than bracketing method, analogous to Newton-Raphson, and it also uses a quadratic model of the function; but the model is at a point, not over an interval. The optimum of the model is where the derivative is 0, so we use Newton-Raphson on the derivative:

  f(x) ≈ f(x_i) + f'(x_i)(x − x_i) + (1/2)f''(x_i)(x − x_i)^2
  f'(x) ≈ f'(x_i) + f''(x_i)(x − x_i) = 0
  x_{i+1} = x_i − f'(x_i)/f''(x_i)

For the same example, starting at x = 2.5:

  x         f(x)      f'(x)      f''(x)
  2.50000   0.57194   −2.10229   −1.39694
  0.99508   1.57859    0.88985   −1.87751
  1.46901   1.77385   −0.09058   −2.18965
  1.42764   1.77573   −0.00020   −2.17954
  1.42755   1.77573    0.00000   −2.17952

Error Behavior of Optimization Methods in Dimension 1

Golden section search has linear convergence with ratio r and an error bound. Quadratic interpolation has linear convergence and an error estimate from the change in the estimate. Newton's method has quadratic convergence and an error estimate from the change in the estimate.
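Sketches (mine) of two of the methods above on the notes' example f(x) = 2 sin x − x^2/10:

```python
# Sketch: golden-section search and Newton's method for maximizing
# f(x) = 2 sin x - x^2/10; both should approach x = 1.42755.
import math

f = lambda x: 2 * math.sin(x) - x**2 / 10
fp = lambda x: 2 * math.cos(x) - x / 5      # f'
fpp = lambda x: -2 * math.sin(x) - 0.2      # f''

def golden_max(f, xl, xu, tol=1e-8):
    r = (math.sqrt(5) - 1) / 2              # 0.61803...
    x2 = xu - r * (xu - xl)                 # lower interior point
    x1 = xl + r * (xu - xl)                 # upper interior point
    f2, f1 = f(x2), f(x1)
    while xu - xl > tol:
        if f1 > f2:                         # maximum cannot lie in [xl, x2]
            xl, x2, f2 = x2, x1, f1
            x1 = xl + r * (xu - xl)
            f1 = f(x1)
        else:                               # maximum cannot lie in [x1, xu]
            xu, x1, f1 = x1, x2, f2
            x2 = xu - r * (xu - xl)
            f2 = f(x2)
    return (xl + xu) / 2

def newton_max(x, steps=6):
    for _ in range(steps):
        x -= fp(x) / fpp(x)                 # Newton-Raphson on f'
    return x

print(round(golden_max(f, 0, 4), 5))        # 1.42755
print(round(newton_max(2.5), 5))            # 1.42755
```

Note how golden-section needs only one new function evaluation per iteration because one interior point is reused, as the notes argue.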
Golden Section Search (true error E_t and error bound E_a; the true optimum is 1.42755)

  i   x       E_t     E_a
  1   1.5279  0.1003  0.9442
  2   1.5279  0.1003  0.5836
  3   1.5279  0.1003  0.3607
  4   1.5279  0.1003  0.2229
  5   1.5279  0.1003  0.1378
  6   1.4427  0.0151  0.0852
  7   1.4427  0.0151  0.0526
  8   1.4427  0.0151  0.0325

Quadratic Interpolation (E_a estimated from the change in the estimate)

  x       E_t     E_a
  1.0000  0.4276
  1.5055  0.0779  0.5055
  1.4903  0.0627  0.0152
  1.4256  0.0020  0.0647
  1.4266  0.0010  0.0010

Newton's Method

  x        E_t       E_a
  2.50000  1.07245
  0.99508  0.43247   1.5049
  1.46901  0.04146   0.4739
  1.42764  9×10^−5   0.0414
  1.42755  5×10^−10  9×10^−5

Pitfalls of Newton's Method

Different starting points may lead to different solutions, and the iterations may diverge. The former requires repeated search for the best local optimum (the global optimum); the latter may require limiting the step size or requiring an increase in function value at each iteration.

EAD 115: Numerical Solution of Engineering and Scientific Problems
David M. Rocke, Department of Applied Science

Course Information

Course site: http://dmrocke.ucdavis.edu/courses.html. If you forget the course site location, google my name. If you forget my name, look it up in the course schedule under the course number EAD 115. If you forget the course number, I probably can't help you.

Class meetings: TuTh 12:10pm–1:30pm, 55 Roessler. Discussion: Tu 3:10–4:00, 55 Roessler. Office hours: by appointment. Office and lab: 150 Med Sci 1C; phone 752-6999; FAX 754-6793; email dmrocke@ucdavis.edu.

Text: Numerical Methods for Engineers, Fifth Edition, Steven C. Chapra and Raymond P. Canale, McGraw-Hill.

TAs: Matthew Carpenter (mhcarpenter@ucdavis.edu), Jaebum Park (jbpark@ucdavis.edu).

Course Grading

Midterm examination 30%; final examination 40%; homework 30%.

This course is an introduction to the use of computer numerical methods to solve problems in engineering and science. The emphasis is on understanding the methods available and the advantages and disadvantages of each. Homework assignments will mostly require computer programming, largely in Visual Basic in Excel and in Matlab. Reading assignments should generally be completed before class. Some material in the reading will not be repeated in class, and some material
presented in class supplements the reading. If you miss a class, you are responsible for obtaining the notes from another student. Homework assignments will be given weekly; late assignments will not be given full credit. The midterm examination will be given in class (tentative date Thursday, May 6), and the final examination will be Saturday, June 7, 8:00am–10:00am.

Why Study Numerical Methods?

Solution of engineering and scientific problems can be done by theory or experiment. An important third way is by computation. We would rather have an approximate answer to the right problem than an exact answer to the wrong problem (consider a spherical cow).

Mathematical Models

If we can describe a problem via a mathematical model, we can often find out much about the real problem by studying the mathematical model. Simple mathematical models can be solved with pencil and paper; realistic models usually require computer solution.

The Parachutist

A falling parachutist feels a downward force due to gravity and an upward force due to air resistance:

  F = ma  →  dv/dt = F/m
  F = F_D + F_U,   F_D = mg (g ≈ 9.8 m/s^2),   F_U = −cv

so the model is dv/dt = g − (c/m)v. [Figure: velocity vs. time rising toward terminal velocity.]

Analytical vs. Numerical Solutions

The analytical solution is supplied by a single simple formula, but we can also solve this problem numerically, and numerical solutions generalize: what if the drag is not linear in the velocity? Approximating the derivative by

  dv/dt ≈ [v(t_{i+1}) − v(t_i)] / (t_{i+1} − t_i)

gives Euler's method:

  v(t_{i+1}) = v(t_i) + [g − (c/m)v(t_i)](t_{i+1} − t_i)
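The Euler update above can be sketched; the mass and drag coefficient used here (m = 68.1 kg, c = 12.5 kg/s) are assumed example values, not taken from these notes.

```python
# Sketch: Euler's method for dv/dt = g - (c/m)v, vs. the analytical solution
# v(t) = (g*m/c)(1 - exp(-(c/m)t)). Parameters m and c are assumed.
import math

g, m, c = 9.8, 68.1, 12.5

def euler_velocity(t_end, h):
    v, t = 0.0, 0.0
    while t < t_end - 1e-9:
        v += (g - (c / m) * v) * h   # v(t+h) = v(t) + (dv/dt)*h
        t += h
    return v

analytic = (g * m / c) * (1 - math.exp(-(c / m) * 10.0))
# Euler with the coarse step h = 2 overshoots the analytical value at t = 10:
print(round(euler_velocity(10.0, 2.0), 3), round(analytic, 3))
```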
EAD 115: Numerical Solution of Engineering and Scientific Problems
David M. Rocke, Department of Applied Science

Interpolation

Given a set of points (x_i, y_i), an interpolating function is one which is defined for all x in the range of the x_i and which satisfies f(x_i) = y_i. Polynomials are a convenient class of functions to use for this purpose, though others, such as splines, are also used. There are different ways to express the same polynomial. Given n points, we can in general determine a polynomial of degree n − 1 that interpolates them: two points determine a linear function (degree one), three points a quadratic (degree two), four points a cubic (degree three).

Linear Interpolation

Given (x_0, f(x_0)) and (x_1, f(x_1)),

  f_1(x) = f(x_0) + [f(x_1) − f(x_0)] / (x_1 − x_0) · (x − x_0)

Example: estimate ln 2 (true value 0.6931472) by interpolating ln x between x_0 = 1 (ln 1 = 0) and x_1 = 4 (ln 4 = 1.386294):

  f_1(2) = 0 + (1.386294/3)(2 − 1) = 0.4620981

Interpolating over a shorter interval gives a better estimate. [Figure: ln x with its linear estimates.]

Quadratic Interpolation

Three points determine a quadratic; this should fit many functions better than linear interpolation. Write the quadratic as

  f_2(x) = b_0 + b_1(x − x_0) + b_2(x − x_0)(x − x_1)

Multiplying out, f_2(x) = a_0 + a_1 x + a_2 x^2 with a_0 = b_0 − b_1 x_0 + b_2 x_0 x_1, a_1 = b_1 − b_2 x_0 − b_2 x_1, and a_2 = b_2, which shows either form is general. Substituting x = x_0, x_1, x_2 and solving:

  b_0 = f(x_0)
  b_1 = [f(x_1) − f(x_0)] / (x_1 − x_0)
  b_2 = { [f(x_2) − f(x_1)] / (x_2 − x_1) − [f(x_1) − f(x_0)] / (x_1 − x_0) } / (x_2 − x_0)

Note that b_1 looks like a finite first divided difference and b_2 looks like a finite second divided difference.

Example: approximate ln 2 = 0.6931472 by interpolating the points x_0 = 1 (f = 0), x_1 = 4 (f = 1.386294), x_2 = 6 (f = 1.791759):

  b_0 = 0
  b_1 = 1.386294/3 = 0.4620981
  b_2 = [(1.791759 − 1.386294)/2 − 0.4620981] / 5 = −0.0518731
  f_2(2) = 0 + 0.4620981(2 − 1) − 0.0518731(2 − 1)(2 − 4) = 0.5658444
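The quadratic-interpolation example above, as a sketch (mine):

```python
# Sketch: quadratic interpolation of ln x through x = 1, 4, 6, evaluated at 2.
import math

x0, x1, x2 = 1.0, 4.0, 6.0
f0, f1, f2 = math.log(x0), math.log(x1), math.log(x2)

b0 = f0
b1 = (f1 - f0) / (x1 - x0)
b2 = ((f2 - f1) / (x2 - x1) - b1) / (x2 - x0)

def quad(x):
    return b0 + b1 * (x - x0) + b2 * (x - x0) * (x - x1)

print(round(quad(2.0), 5))   # 0.56584, vs. the true ln 2 = 0.69315
```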
Specifically,
  b0 = f[x0]
  b1 = f[x1, x0]
  b2 = f[x2, x1, x0]
  ...
  bn = f[xn, x_{n-1}, ..., x1, x0]
where the bracketed quantities are finite divided differences, defined recursively by
  f[xi, x_{i-1}, ..., x1, x0] = ( f[xi, ..., x1] - f[x_{i-1}, ..., x0] ) / (xi - x0)
starting from f[xi] = f(xi) and the first divided differences f[xi, xj] = [f(xi) - f(xj)] / (xi - xj).
[Table: the divided differences are arranged in a triangular table, with x and f(x) in the first columns, first differences next, then second and third differences, each entry computed from the two entries to its left.]
Example: interpolate f(x) = ln x at x = 1, 1.5, 2.5, 3, 4:
  f(xi):                      0, 0.405465, 0.916291, 1.098612, 1.386294
  1st divided differences:    0.81093, 0.510826, 0.364643, 0.287682
  2nd divided differences:    -0.20007, -0.09746, -0.05131
  3rd divided differences:    0.051307, 0.018459
  4th divided difference:     -0.010949
Estimating ln 2 term by term:
  f4(2) = 0 + 0.81093(1) - 0.20007(1)(0.5) + 0.051307(1)(0.5)(-0.5) - 0.010949(1)(0.5)(-0.5)(-1)
        = 0.81093 - 0.10003 - 0.01283 - 0.00274
        = 0.695331
versus the true value 0.693147, an error of about 0.00218.

Lagrange Interpolating Polynomial
Given n+1 points and function values, there is only one degree-n polynomial going through the points. The Lagrange formulation is thus equivalent, leading to the same interpolating polynomial, but it is easier to calculate:
  fn(x) = sum over i of Li(x) f(xi),  where  Li(x) = product over j != i of (x - xj)/(xi - xj)
This passes through each of the points because when x = xk, all of the Li(x) are 0 except for Lk(x), which is equal to 1.

Numerical Integration
Some functions of known form can be integrated analytically. Others require numerical estimates because the form of the integrand yields no closed-form solution. Sometimes the function may not even be defined by an equation but rather by a computer program. For example,
  integral from 0 to pi of sin x dx = [-cos x] from 0 to pi = -cos(pi) + cos(0) = 2
can be done analytically, but the integral of e^(-x^2) has no closed form.

The Definite Integral
  integral from a to b of f(x) dx = lim as n -> infinity of sum for i = 0 to n-1 of f(a + i(b-a)/n) (b-a)/n        (left Riemann sum)
  integral from a to b of f(x) dx = lim as n -> infinity of sum for i = 1 to n of f(a + i(b-a)/n) (b-a)/n          (right Riemann sum)
  integral from a to b of f(x) dx = lim as n -> infinity of sum for i = 0 to n-1 of f(a + (i + 1/2)(b-a)/n) (b-a)/n  (midpoint rule)
Left and right Riemann sums and the midpoint rule give a definition, not a good computational method. They are exact only for constant functions (LR and RR) or linear functions (MR).
Example: f(x) = exp(-x^2), integrated from 0 to 2 with a left Riemann sum. The exact value is 0.882.
  N     Sum
  4     1.126
  10    0.980
  20    0.931
  50    0.902
  100   0.891

Trapezoidal Rule
A simple Riemann sum approximates the function over each interval by a constant function. We can use linear, quadratic, etc. approximations instead for more accuracy. Using a linear approximation over each interval results in the trapezoidal rule.
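The left Riemann sums in the table above are trivial to reproduce. This Python sketch is an added illustration (names are mine, not the notes'):

```python
import math

def left_riemann(f, a, b, n):
    """Left Riemann sum: h * sum of f(a + i h) for i = 0 .. n-1, with h = (b-a)/n."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

f = lambda x: math.exp(-x * x)  # the integrand from the example
```

Because exp(-x^2) is decreasing on [0, 2], every left sum overestimates the integral, and the error shrinks only in proportion to 1/n, which is why the table converges so slowly.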
Linear Approximations over Short Intervals
[Figure: over a short interval, the chord from (a, f(a)) to (b, f(b)) lies close to the curve. Closed rules use the endpoint values; open rules use only interior points.]

Trapezoidal Rule for an Interval
Replace f by the linear interpolant through (a, f(a)) and (b, f(b)):
  f1(x) = f(a) + [f(b) - f(a)]/(b - a) * (x - a)
Integrating,
  integral from a to b of f1(x) dx = f(a)(b - a) + [f(b) - f(a)]/(b - a) * (b - a)^2 / 2
                                   = (b - a) [f(a) + f(b)] / 2
that is, width times average height.

Trapezoidal Rule for a Subdivided Interval
Divide the interval [a, b] into n equal segments, each of width (b - a)/n. Apply the trapezoidal rule to each segment and add up all the results. This is much more accurate than the simple Riemann sum. With h = (b - a)/n and xi = a + ih for i = 0, 1, 2, ..., n:
  I ~ 0.5h(f0 + f1) + 0.5h(f1 + f2) + ... + 0.5h(f_{n-2} + f_{n-1}) + 0.5h(f_{n-1} + fn)
    = h [ 0.5 f0 + f1 + f2 + ... + f_{n-1} + 0.5 fn ]
    = (b - a) [ f0 + 2 (f1 + ... + f_{n-1}) + fn ] / (2n)
which is again width times average height.
Example: f(x) = exp(-x^2), integrated from 0 to 2 with the trapezoidal rule. The exact value is 0.8820814.
  N     Sum
  4     0.8806186
  10    0.8818388
  20    0.8820204
  50    0.8820716
  100   0.8820789
[Figure: (a) single-segment and (b) multiple-segment trapezoidal rule.]
  FUNCTION Trap(h, f0, f1)
    Trap = h * (f0 + f1) / 2
  END Trap

  FUNCTION Trapm(h, n, f)
    sum = f(0)
    DO i = 1, n - 1
      sum = sum + 2 * f(i)
    END DO
    sum = sum + f(n)
    Trapm = h * sum / 2
  END Trapm

Simpson's Rules
[Figure: a parabola through three points approximates the curve over a pair of segments.]
Simpson's rules generalize the trapezoidal rule to use more than two points per interval, so we can use quadratic or cubic models instead of linear. We will mainly cover the quadratic model, or Simpson's 1/3 rule.

Quadratic Interpolation
For a single interval we derive Simpson's 1/3 rule. We need the quadratic that goes through the three points (x0, f(x0)), (x1, f(x1)), (x2, f(x2)); integrating that quadratic gives the estimate of the integral. (The rule also happens to integrate cubics exactly.) With f0 = f(x0), f1 = f(x1), f2 = f(x2) and equal spacing h = x1 - x0 = x2 - x1, the Lagrange form is
  f2(x) = [(x - x1)(x - x2)] / [(x0 - x1)(x0 - x2)] f0
        + [(x - x0)(x - x2)] / [(x1 - x0)(x1 - x2)] f1
        + [(x - x0)(x - x1)] / [(x2 - x0)(x2 - x1)] f2
        = (1/(2h^2))(x - x1)(x - x2) f0 - (1/h^2)(x - x0)(x - x2) f1 + (1/(2h^2))(x - x0)(x - x1) f2
Integrating from x0 to x2 (a straightforward but tedious calculation) gives
  I = integral from x0 to x2 of f2(x) dx = (h/3)(f0 + 4 f1 + f2) = 2h * (f0 + 4 f1 + f2)/6
which is width (2h) times average height (f0 + 4 f1 + f2)/6.

Simpson's 1/3 Rule for a Subdivided Interval
Divide the interval [a, b] into n equal segments, each of width (b - a)/n, with n even. Apply Simpson's 1/3 rule to each pair of segments and add up all the results. This is more accurate than the trapezoidal rule.
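The composite trapezoidal rule above (the pseudocode Trapm, but calling the function directly instead of using a stored grid) can be sketched in Python as follows; this is an added illustration, not the course's code.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal segments of width h = (b-a)/n."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))  # f1 + f2 + ... + f(n-1)
    return h * (0.5 * f(a) + interior + 0.5 * f(b))
```

For f(x) = exp(-x^2) on [0, 2] this reproduces the table above: 0.8818388 at n = 10 and 0.8820789 at n = 100.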
The composite formula is
  I ~ (h/3) [ (f0 + 4f1 + f2) + (f2 + 4f3 + f4) + ... ]
    = (h/3) [ f0 + 4f1 + 2f2 + 4f3 + 2f4 + ... + 2f_{n-2} + 4f_{n-1} + fn ]
where n = 2m is even.
Example: f(x) = exp(-x^2), integrated from 0 to 2 with Simpson's rule. The exact value is 0.8820814.
  N     Sum
  4     0.8818124
  10    0.8820749
  20    0.8820810
  50    0.8820814
  100   0.8820814

Simpson's 3/8 Rule
Uses four points to fit a cubic polynomial. It is not theoretically more accurate than the 1/3 rule, but it can use an odd number of segments, so we can combine it with Simpson's 1/3 rule when the number of segments is odd. With 15 intervals (16 points), this is 6 applications of Simpson's 1/3 rule plus 1 of Simpson's 3/8 rule.
  I = (b - a) [ f0 + 3f1 + 3f2 + f3 ] / 8 = (3h/8)(f0 + 3f1 + 3f2 + f3)
again width times average height.
  FUNCTION Simp13(h, f0, f1, f2)
    Simp13 = 2 * h * (f0 + 4*f1 + f2) / 6
  END Simp13

  FUNCTION Simp38(h, f0, f1, f2, f3)
    Simp38 = 3 * h * (f0 + 3*f1 + 3*f2 + f3) / 8
  END Simp38

  FUNCTION Simp13m(h, n, f)
    sum = f(0)
    DO i = 1, n - 2, 2
      sum = sum + 4 * f(i) + 2 * f(i + 1)
    END DO
    sum = sum + 4 * f(n - 1) + f(n)
    Simp13m = h * sum / 3
  END Simp13m

  FUNCTION SimpInt(a, b, n, f)
    h = (b - a) / n
    IF n = 1 THEN
      sum = Trap(h, f(n - 1), f(n))
    ELSE
      m = n
      odd = n / 2 - INT(n / 2)
      IF odd > 0 AND n > 1 THEN
        sum = sum + Simp38(h, f(n - 3), f(n - 2), f(n - 1), f(n))
        m = n - 3
      END IF
      IF m > 1 THEN
        sum = sum + Simp13m(h, m, f)
      END IF
    END IF
    SimpInt = sum
  END SimpInt

Theoretical Errors of Newton-Cotes Methods
Left and right Riemann integral formulas have errors of O(h). In the case of a linear function y = c + dx, for example, integrated over the interval [a, b], each approximating rectangle is missing a triangular portion whose base is h and whose height is dh, and there are n such triangles. Since h is the length of the interval divided by n, the total error is n d h^2 / 2 = d(b - a)h/2, which is proportional to h.

Improving Left and Right Riemann Sums
We can eliminate these triangles in two ways. We can use a central Riemann sum that uses points in the middle of the intervals (an open rule); this fits straight lines exactly. Or we can use the trapezoidal rule, which also fits straight lines exactly. Both of these have O(h^2) errors.

Error in Simpson's Rule
The error in Simpson's 1/3 rule is O(h^4). Compare this to left and right Riemann sums, with errors of O(h), and to the central Riemann sum and trapezoidal rule, with errors of O(h^2).
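The composite Simpson's 1/3 rule can be sketched in Python (an added illustration, assuming as the rule requires an even number of segments):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of segments")
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))   # interior points with weight 4
    even = sum(f(a + i * h) for i in range(2, n, 2))  # interior points with weight 2
    return h * (f(a) + 4.0 * odd + 2.0 * even + f(b)) / 3.0
```

For f(x) = exp(-x^2) on [0, 2] this matches the table above: already at n = 10 the estimate 0.8820749 is closer to the true 0.8820814 than the trapezoidal rule achieves at n = 100.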
This means that in general Simpson's rule is more accurate at a given value of n. It also tells us how the errors change with n.

Absolute Errors of Three Integration Methods
f(x) = exp(-x^2), integrated from 0 to 2; the exact value is 0.8820814.
  N     RL         Trap       Simp
  4     2x10^-1    1x10^-3    2x10^-4
  10    1x10^-1    2x10^-4    6x10^-6
  20    5x10^-2    6x10^-5    4x10^-7
  50    2x10^-2    1x10^-5    1x10^-8
  100   1x10^-2    2x10^-6    6x10^-10

Is the Function Available?
The Newton-Cotes rules we have been looking at need a vector of function values: the programs seen previously do not explicitly call a function but rather use a provided grid of values. These methods can also be used in a form where a function is called. When any value can be requested, other methods become available as well.
  FUNCTION TrapEq(n, a, b)
    h = (b - a) / n
    x = a
    sum = f(x)
    DO i = 1, n - 1
      x = x + h
      sum = sum + 2 * f(x)
    END DO
    sum = sum + f(b)
    TrapEq = (b - a) * sum / (2 * n)
  END TrapEq

  FUNCTION SimpEq(n, a, b)
    h = (b - a) / n
    x = a
    sum = f(x)
    DO i = 1, n - 2, 2
      x = x + h
      sum = sum + 4 * f(x)
      x = x + h
      sum = sum + 2 * f(x)
    END DO
    x = x + h
    sum = sum + 4 * f(x)
    sum = sum + f(b)
    SimpEq = (b - a) * sum / (3 * n)
  END SimpEq

[Figure: true percent relative error versus number of segments (up to 16384) for the trapezoidal rule and Simpson's 1/3 rule; each eventually reaches the limit of machine precision.]

Fixed-Interval vs. Functional Integration
The Newton-Cotes methods we have been describing all begin with a set of equally spaced function values. Sometimes this is all that is available, but we may be able to do better with some variation in the x's.

Richardson Extrapolation
Given two estimates of an integral with known error properties, it is possible to derive a third estimate that is more accurate. We illustrate this with the trapezoidal rule, though the idea applies to any integration method with an error estimate. Write
  integral from a to b of f(x) dx = I(h) + E(h)
For the subdivided-interval trapezoidal rule,
  E(h) = -(b - a) h^2 f''(xi) / 12,  for some xi in (a, b)
so E(h) is proportional to h^2. If we compute the rule at two widths h1 and h2, then
  E(h1) / E(h2) ~ h1^2 / h2^2
and since I(h1) + E(h1) = I(h2) + E(h2),
  I(h1) + E(h2)(h1/h2)^2 ~ I(h2) + E(h2)
  E(h2) ~ [I(h1) - I(h2)] / [1 - (h1/h2)^2]
so that
  I ~ I(h2) + E(h2) = I(h2) + [I(h2) - I(h1)] / [(h1/h2)^2 - 1]
which has error O(h^4).
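Richardson extrapolation from two trapezoidal estimates at h and h/2 is one line of code once the trapezoid rule is in hand. A self-contained Python sketch (an added illustration):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal segments."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def richardson(f, a, b, n):
    """Combine trapezoid estimates at n and 2n segments: (4 I(h/2) - I(h)) / 3."""
    return (4.0 * trapezoid(f, a, b, 2 * n) - trapezoid(f, a, b, n)) / 3.0
```

For exp(-x^2) on [0, 2], combining the n = 10 and n = 20 trapezoid values 0.8818388 and 0.8820204 gives 0.8820810, matching the result quoted in the notes.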
For the special case where h2 = h1/2,
  I ~ I(h2) + [I(h2) - I(h1)] / (2^2 - 1) = (4/3) I(h2) - (1/3) I(h1)
Example: I = integral from 0 to 2 of exp(-x^2) dx = 0.8820814.
  I(h1) = 0.8818388  (n = 10)
  I(h2) = 0.8820204  (n = 20)
  (4/3)(0.8820204) - (1/3)(0.8818388) = 0.8820810
comparable to Simpson's rule with n = 20.

Repeated Richardson Extrapolation
With two separate O(h^2) estimates we can combine them to make an O(h^4) estimate. With two separate O(h^4) estimates we can combine them to make an O(h^6) estimate, and so on. The weights are different for these repeated extrapolations. With trapezoidal estimates I_10, I_20, I_40 (at n = 10, 20, 40):
  I_{20,10} = (4/3) I_20 - (1/3) I_10
  I_{40,20} = (4/3) I_40 - (1/3) I_20
  I ~ (16/15) I_{40,20} - (1/15) I_{20,10}
and at the next level the weights are 64/63 and 1/63.

Errors for Richardson Extrapolation from Trapezoidal Rule Estimates
  n     T          R1         R2
  10    2x10^-4    4x10^-7    5x10^-11
  20    6x10^-5    3x10^-8
  40    2x10^-5
(each R1 entry combines the trapezoidal estimates at that n and the next; R2 combines the two R1 estimates)

Romberg Integration
Let I_{j,k} represent an array of estimates of integrals: k = 1 represents trapezoid rules, O(h^2); k = 2 represents Richardson extrapolation from pairs of trapezoid rules, O(h^4); k = 3 represents Richardson extrapolation from pairs of the previous step, O(h^6); etc. If we double the number of points (halve the interval) at each step, then we only need to evaluate the function at the new points. For example, if the first step uses four intervals, it would involve evaluation at five points; the second would use eight intervals evaluated at nine points, only four of which are new.
  FUNCTION Rhomberg(a, b, maxit, es)
    LOCAL I(10, 10)
    n = 1
    I(1, 1) = TrapEq(n, a, b)
    iter = 0
    DO
      iter = iter + 1
      n = 2^iter
      I(iter + 1, 1) = TrapEq(n, a, b)
      DO k = 2, iter + 1
        j = 2 + iter - k
        I(j, k) = (4^(k-1) * I(j + 1, k - 1) - I(j, k - 1)) / (4^(k-1) - 1)
      END DO
      ea = ABS((I(1, iter + 1) - I(2, iter)) / I(1, iter + 1)) * 100
      IF iter >= maxit OR ea <= es EXIT
    END DO
    Rhomberg = I(1, iter + 1)
  END Rhomberg

Romberg table starting with 2 intervals (3 points), for the integral of exp(-x^2) from 0 to 2:
  0.8770373   0.8818124   0.8820824   0.8820814   0.8820814
  0.8806186   0.8820655   0.8820814   0.8820814
  0.8817038   0.8820804   0.8820814
  0.8819862   0.8820813
  0.8820576
The true value is 0.8820814. Romberg requires 17 function evaluations to achieve 7-digit accuracy; Simpson's rule requires 36 function evaluations, and the trapezoidal rule requires 775.
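The Romberg tableau can be built in a few lines. This Python sketch (an added illustration) follows the I(j, k) layout described above: column 0 holds trapezoid estimates with 2, 4, 8, ... intervals, and each later column Richardson-extrapolates the previous one. Unlike the pseudocode, this simple version re-evaluates f at old points rather than reusing them.

```python
import math

def romberg(f, a, b, levels=5):
    """Return the Romberg table R as a list of rows.

    R[j][0] is the trapezoid estimate with 2**(j+1) intervals;
    R[j][k] = (4**k * R[j+1][k-1] - R[j][k-1]) / (4**k - 1).
    The best estimate is R[0][levels - 1].
    """
    def trap(n):
        h = (b - a) / n
        return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

    R = [[trap(2 ** (j + 1))] for j in range(levels)]
    for k in range(1, levels):
        for j in range(levels - k):
            R[j].append((4 ** k * R[j + 1][k - 1] - R[j][k - 1]) / (4 ** k - 1))
    return R
```

With 5 levels for exp(-x^2) on [0, 2], the first column starts at 0.8770373 and 0.8806186, matching the table above, and the top-right entry agrees with the true value 0.8820814 to seven digits.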
Exact Integration
The trapezoidal rule integrates a linear function exactly using two points. Simpson's 1/3 rule integrates a quadratic (and cubics also) exactly using three points. It is possible to take n+1 evenly spaced points and choose the weights so that the rule integrates polynomials of degree n exactly (e.g., Simpson's 3/8 rule).

Gaussian Integration
Consider a function f on a closed interval [a, b]; we assume f is continuous. We wish to choose n points in [a, b] and n weights so that the weighted sum of the function values at the n points is optimal. The points and weights can be chosen to integrate polynomials of degree 2n-1 exactly.
[Figure: two interior points can integrate more accurately than two end points; two example integrands that should be integrated exactly by the trapezoid rule.]

Method of Undetermined Coefficients
  integral from a to b of f(x) dx ~ c0 f(a) + c1 f(b)
Trapezoid rule: f0(x) = 1 and f1(x) = x should be integrated exactly:
  integral from a to b of 1 dx = b - a = c0 + c1
  integral from a to b of x dx = (b^2 - a^2)/2 = c0 a + c1 b
Substituting c0 = (b - a) - c1 into the second equation and solving gives
  c0 = c1 = (b - a)/2
which is the trapezoid rule. With three points,
  integral from a to b of f(x) dx ~ c0 f(a) + c1 f((a + b)/2) + c2 f(b)
and requiring exactness for 1, x, and x^2:
  integral of 1 dx   = b - a           = c0 + c1 + c2
  integral of x dx   = (b^2 - a^2)/2   = c0 a + c1 (a + b)/2 + c2 b
  integral of x^2 dx = (b^3 - a^3)/3   = c0 a^2 + c1 ((a + b)/2)^2 + c2 b^2
gives c0 = c2 = (b - a)/6 and c1 = 4(b - a)/6: Simpson's rule.

Gauss-Legendre
Find n points in [-1, 1] and n weights so that the sum of the weighted function values at the chosen points integrates as high a degree polynomial as possible. n points and n weights means 2n coefficients, which is the number of coefficients in a polynomial of degree 2n-1. We find the two-point Gauss-Legendre points and weights for [-1, 1]; other intervals follow by substitution.
  integral from -1 to 1 of f(x) dx ~ c0 f(x0) + c1 f(x1)
Requiring exactness for 1, x, x^2, and x^3:
  integral of 1 dx   = 2   = c0 + c1
  integral of x dx   = 0   = c0 x0 + c1 x1
  integral of x^2 dx = 2/3 = c0 x0^2 + c1 x1^2
  integral of x^3 dx = 0   = c0 x0^3 + c1 x1^3
The solution is c0 = c1 = 1, x0 = -1/sqrt(3), x1 = 1/sqrt(3).

Gaussian Quadrature
Gauss-Legendre is highly accurate with a small number of points and is suitable for continuous functions on closed intervals. Gaussian quadrature also comes in other forms (Laguerre, Hermite, Chebyshev, etc.) for functions with infinite limits of integration or which are not finite in the interval. With n points, Gauss-Laguerre integrates exactly functions that are multiples of w(x) = e^(-x) by polynomials of degree 2n-1.
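The two-point Gauss-Legendre rule derived above is tiny in code. This Python sketch (an added illustration) uses the substitution x = (a+b)/2 + (b-a)t/2 to move from [-1, 1] to a general [a, b]:

```python
import math

def gauss_legendre_2(f, a=-1.0, b=1.0):
    """Two-point Gauss-Legendre rule: nodes +/- 1/sqrt(3), weights 1 on [-1, 1].

    The affine substitution x = xm + xr*t maps the nodes to [a, b];
    the rule is exact for polynomials up to degree 2n - 1 = 3.
    """
    xm = 0.5 * (a + b)   # interval midpoint
    xr = 0.5 * (b - a)   # half-width (the Jacobian of the substitution)
    t = 1.0 / math.sqrt(3.0)
    return xr * (f(xm - xr * t) + f(xm + xr * t))
```

Two well-chosen interior points thus match the degree-3 exactness that Simpson's rule needs three points to achieve: any cubic is integrated exactly.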
Here w(x) is called the weight function. The weight function for Gauss-Legendre is w(x) = 1.
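As a concrete illustration (not from the slides), the two-point Gauss-Laguerre rule uses the standard nodes 2 +/- sqrt(2) with weights (2 -/+ sqrt(2))/4; by the 2n-1 property it integrates e^(-x) times any cubic exactly over [0, infinity):

```python
import math

def gauss_laguerre_2(p):
    """Two-point Gauss-Laguerre estimate of the integral of e^{-x} p(x) over [0, inf).

    Standard nodes 2 +/- sqrt(2) with weights (2 -/+ sqrt(2))/4 (stated here
    without derivation); exact when p is a polynomial of degree <= 2n - 1 = 3.
    """
    r2 = math.sqrt(2.0)
    nodes = (2.0 - r2, 2.0 + r2)
    weights = ((2.0 + r2) / 4.0, (2.0 - r2) / 4.0)  # larger weight on the smaller node
    return sum(w * p(x) for x, w in zip(nodes, weights))
```

For example, the integral of x^3 e^(-x) over [0, infinity) is 3! = 6 and is reproduced exactly. Note that p here is the polynomial factor only; the weight function e^(-x) is built into the rule.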