# Numerical Analysis I (MA 580)

These 62 pages of class notes were uploaded by Braeden Lind on Thursday, October 15, 2015. The notes are for MA 580 at North Carolina State University, taught by Staff in Fall. Since upload they have received 6 views. For similar materials see /class/223723/ma-580-north-carolina-state-university in Mathematics (M) at North Carolina State University.


### 7.6 Newton's Method in $\mathbb{R}^n$ and Quadratic Convergence

First we give the proof of quadratic convergence.

**Proof of (2).** We want to show $\|x^{m+1}-x^*\| \le C\|x^m-x^*\|^2$, where $F(x^*)=0$. Use Proposition 7: if $F\in C^1(D)$ and $\|F'(y)-F'(x)\|\le\gamma\|y-x\|$, then

$$\|F(y)-F(x)-F'(x)(y-x)\|\le \frac{\gamma}{2}\|y-x\|^2.$$

The iteration is $x^{m+1}=G(x^m)$, where $G(x)=x-F'(x)^{-1}F(x)$. Then, since $F(x^*)=0$,

$$x^{m+1}-x^* = x^m-x^* - F'(x^m)^{-1}F(x^m)+F'(x^m)^{-1}F(x^*)
= F'(x^m)^{-1}\big[F(x^*)-F(x^m)-F'(x^m)(x^*-x^m)\big],$$

so

$$\|x^{m+1}-x^*\|\le \|F'(x^m)^{-1}\|\,\|F(x^*)-F(x^m)-F'(x^m)(x^*-x^m)\| \le C\|x^*-x^m\|^2. \qquad\blacksquare$$

**Example.** $-u''=g(u)$. Use the finite difference method (FDM)

$$-u_{i+1}+2u_i-u_{i-1}-h^2 g(u_i)=0,\quad i=1,\dots,n,$$

with $u_0$, $u_{n+1}$ given. Then $F:\mathbb{R}^n\to\mathbb{R}^n$, and each component $F_i$ depends only on $u_{i-1}$, $u_i$, $u_{i+1}$, so the Jacobian $F'(u)$ is tridiagonal, with diagonal entries $-h^2 g'(u_i)+2$ and off-diagonal entries $-1$. If $g'$ is continuous, then $g'(u_i)-g'(v_i)=g''(c)(u_i-v_i)$, and so

$$\|F'(u)-F'(v)\|\le C\|u-v\|.$$

Consider $g(u)=c\,(u_s^4-u^4)$, Stefan's law for radiative cooling. The following Matlab code uses Newton's method to solve the 1D diffusion problem with heat loss due to radiation; the Matlab backslash operator `\` solves each linear subproblem. One could instead use an iterative method, which might be the best way for larger problems where heat diffuses in more than one space direction. We experimented with several emissivities, and the graphs indicate that the higher the emissivity, the greater the cooling.

```matlab
% Newton's method for nonlinear cooling with diffusion.
% (Some constants are reconstructed from a garbled source and may
% differ slightly from the original.)
uo = 900; n = 19; h = 1/(n+1);
FP = zeros(n); F = zeros(n,1); u = ones(n,1)*uo;
for m = 1:20
    for i = 1:n
        if i == 1
            F(i) = fnonl(u(i))*h*h + u(i+1) - 2*u(i) + uo;
            FP(i,i) = fnonlp(u(i))*h*h - 2;  FP(i,i+1) = 1;
        elseif i < n
            F(i) = fnonl(u(i))*h*h + u(i+1) - 2*u(i) + u(i-1);
            FP(i,i) = fnonlp(u(i))*h*h - 2;  FP(i,i-1) = 1;  FP(i,i+1) = 1;
        else
            F(i) = fnonl(u(i))*h*h - 2*u(i) + u(i-1) + uo;
            FP(i,i) = fnonlp(u(i))*h*h - 2;  FP(i,i-1) = 1;
        end
    end
    du = FP\F;        % Newton step: solve F'(u) du = F(u)
    u = u - du;
    err = norm(F);
    if err < 0.0001, break, end
end
x = 0:h:1;
plot(x, [uo; u; uo])

% In separate files fnonl.m and fnonlp.m:
function y = fnonl(u)      % g(u): Stefan radiative cooling
y = 0.00005*(300^4 - u^4);

function y = fnonlp(u)     % g'(u)
y = -0.00005*4*u^3;
```

[Figure: Temperatures for different emissivities.]

The next calculations illustrate the very rapid convergence of Newton's method.

| Newton iteration m | Norm of F (error) |
|---|---|
| 1 | 706.1416 |
| 2 | 197.4837 |
| 3 | 49.2847 |
| 4 | 8.2123 |
| 5 | 0.3967 |
| 6 | 0.0011 |
| 7 | 7.3703e-09 |

Table: Rapid convergence of Newton's method.

**Example.** $K=K(u)$, $g=g(u)$: $-\nabla\cdot(K\nabla u)=g(u)$ in 2D. Now use the five-point FDM, so that $F_{ij}(u_{i,j-1},u_{i-1,j},u_{ij},u_{i+1,j},u_{i,j+1})=0$, where $(i,j)$ refers to the space grid. The components of $F'(u)$ are the partial derivatives of $F_{ij}$ with respect to the unknowns $u_{ij}$, so $F'(u)$ is pentadiagonal, with the same nonzero pattern as in the constant-coefficient case; computing its components is now more complicated. The linear solve steps can be done by direct or iterative methods; for 3D space problems one may need GMRES or some variation.

## MA 580 Numerical Analysis I, Zhilin Li

## Chapter 1: Introduction

- Why is this course important (motivations)? What is the role of this class in the problem-solving process using mathematics and computers?
- Model problems and their relations to the course materials.
- Errors: definitions, and how to avoid them.

In Fig. 1.1 we show a flow chart of a problem-solving process. In this class we focus on numerical solutions using computers, especially problems in linear algebra; thus this course could also be called "Numerical Linear Algebra".

### 1.1 Model problems: data fitting and interpolation

Given data $(t_i,y_i)$, $i=0,1,\dots,m$ (1.1.1), which can be taken from a function $y(t)$ or from a set of observed data, we want to find a simple function $x(t)$ that approximates $y(t)$, so that we can predict $y(t)$ everywhere.

**Approach: polynomial approximation.** Let

$$y(t)\approx a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n. \qquad(1.1.2)$$

We need to find the coefficients $a_0, a_1, a_2, \dots, a_n$ so that we have an analytic expression: $n=0$ is constant approximation, $n=1$ linear regression, $n=2$ quadratic regression, and so on.

[Figure 1.1: A flow chart of a problem-solving process. Real Problem; Mathematical/physical models (physical laws, other approaches); Solution techniques (analytic/exact, use computers); Interpret solution (visualization, applications, products, experiments, prediction); Better models.]

We should choose the coefficients in such a way that they match the data at the sample points. Thus we have

$$a_0 + a_1 t_i + a_2 t_i^2 + \cdots + a_n t_i^n = y_i,\quad i=0,1,\dots,m. \qquad(1.1.3)$$
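As a concrete check of system (1.1.3), the following Python sketch (the chapter's own codes are in Matlab; `vandermonde_fit` and the data points are hypothetical names and values chosen here for illustration) builds the Vandermonde system for the interpolation case $m=n=2$ and solves it by elimination:

```python
# Hypothetical worked instance of (1.1.3): fit a quadratic (n = 2) through
# the m+1 = 3 data points (0,1), (1,2), (2,5): the interpolation case m = n.
def vandermonde_fit(ts, ys):
    m = len(ts)
    # Augmented Vandermonde system, row i: [1, t_i, t_i^2, ..., | y_i]
    A = [[t**j for j in range(m)] + [y] for t, y in zip(ts, ys)]
    for k in range(m - 1):                 # forward elimination
        for i in range(k + 1, m):
            l = A[i][k] / A[k][k]
            for j in range(k, m + 1):
                A[i][j] -= l * A[k][j]
    a = [0.0] * m                          # backward substitution
    for i in range(m - 1, -1, -1):
        s = A[i][m] - sum(A[i][j] * a[j] for j in range(i + 1, m))
        a[i] = s / A[i][i]
    return a                               # coefficients a0, a1, ..., an

coeffs = vandermonde_fit([0.0, 1.0, 2.0], [1.0, 2.0, 5.0])
# The fitted polynomial is p(t) = 1 + t^2, i.e. coefficients [1, 0, 1].
```

Since $m=n$, the quadratic passes through all three data points exactly.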
This is a linear system of equations for the unknown coefficients $a_i$, $i=0,\dots,n$. In matrix-vector form it is

$$\begin{bmatrix}1 & t_0 & t_0^2 & \cdots & t_0^n\\ 1 & t_1 & t_1^2 & \cdots & t_1^n\\ \vdots & & & & \vdots\\ 1 & t_m & t_m^2 & \cdots & t_m^n\end{bmatrix}\begin{bmatrix}a_0\\ a_1\\ \vdots\\ a_n\end{bmatrix}=\begin{bmatrix}y_0\\ y_1\\ \vdots\\ y_m\end{bmatrix}. \qquad(1.1.4)$$

We can simply write this as $A\mathbf{a}=\mathbf{y}$, where $A$ is an $(m+1)\times(n+1)$ matrix, $\mathbf{a}=(a_0,a_1,\dots,a_n)^T$ is an $(n+1)$-vector, and $\mathbf{y}=(y_0,y_1,\dots,y_m)^T$ is an $(m+1)$-vector. Assuming the $t_i$ are distinct ($t_i\ne t_j$), we distinguish the following cases:

- $m>n$: we have more equations than unknowns. The system is over-determined and we can only find a "best" solution, for example the least-squares solution. Such a problem is a curve-fitting problem; when $n=1$ it is also called linear regression.
- $m=n$: we have the same number of equations and unknowns, and there is a unique solution to the linear system. Such a problem is called interpolation, because $x(t)$ passes through all the selected data.
- $m<n$: we have fewer equations than unknowns. The system is under-determined and has infinitely many solutions. Often we prefer the SVD solution, which has the least length among all solutions.

Note that the coefficient matrix is dense (not many zero entries) in this application.

### 1.2 A nonlinear model

Not all useful functions are polynomials. If we want to approximate $y(t)$ by

$$y(t)\approx x(t)=\alpha e^{\beta t}, \qquad(1.2.1)$$

where $\alpha$ and $\beta$ are unknowns, we would have $\alpha e^{\beta t_i}=y_i$, $i=0,1,\dots,m$: a nonlinear system of equations for $\alpha$ and $\beta$.

### 1.3 Finite difference methods and linear systems of equations

Modern computers were designed in part to solve practical problems such as weather forecasting and computing the lift and drag of airplanes, missiles, the space shuttle, etc. In these applications, partial differential equations, particularly the Navier-Stokes equations, are often used to model the physical phenomena. How to solve the three-dimensional Navier-Stokes equations is still a challenge today. To solve the Navier-Stokes equations, finite difference or finite element methods are often used. In a finite difference method, the partial derivatives are replaced by finite differences, that is, combinations of function values. Such a process is called finite difference discretization. After the discretization, the system of partial differential equations becomes a system of algebraic equations, either linear or nonlinear.

We use a one-dimensional example to illustrate the idea. Consider the two-point boundary value problem

$$u''(x)=f(x),\quad 0<x<1,\qquad u(0)=u(1)=0. \qquad(1.3.1)\text{--}(1.3.2)$$

In a finite difference method we seek an approximate solution of $u(x)$ at a finite number of points, not everywhere. The procedure is as follows.

- **Generate a grid.** For example, we can select $n+1$ equally spaced points between 0 and 1. The spacing between two points is $h=1/n$, and the points are $x_i=ih$, $i=0,1,\dots,n$, with $x_0=0$ and $x_n=1$. We look for an approximate solution of $u(x)$ at $x_1,\dots,x_{n-1}$; we already know $u(x_0)=0$ and $u(x_n)=0$ from the boundary conditions.
- **Replace the derivative by a finite difference formula.** It can be proved that

$$u''(x_i)=\frac{u(x_i-h)-2u(x_i)+u(x_i+h)}{h^2}+O(h^2). \qquad(1.3.4)$$

At every interior grid point $x_i$ we use this formula, ignoring the high-order terms, to get approximately

$$\frac{u(x_{i-1})-2u(x_i)+u(x_{i+1})}{h^2}\approx f(x_i),\quad i=1,\dots,n-1.$$

If we replace the "$\approx$" with the "$=$" sign and replace $u(x_i)$, which we do not know, with $U_i$, we obtain the linear system of equations

$$\frac{U_{i-1}-2U_i+U_{i+1}}{h^2}=f(x_i),\quad i=1,\dots,n-1,\qquad U_0=U_n=0,$$

which can be written in matrix-vector form as

$$\frac{1}{h^2}\begin{bmatrix}-2 & 1 & &\\ 1 & -2 & 1 &\\ & \ddots & \ddots & \ddots\\ & & 1 & -2\end{bmatrix}\begin{bmatrix}U_1\\ U_2\\ \vdots\\ U_{n-1}\end{bmatrix}=\begin{bmatrix}f(x_1)\\ f(x_2)\\ \vdots\\ f(x_{n-1})\end{bmatrix}. \qquad(1.3.5)$$

- **Solve the system of equations** to get the approximate solution at each grid point.
- **Implement and debug the computer code.** Run the program to get the output. Analyze the results (tables, plots, etc.).
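The steps above can be sketched in a few lines of code. The following Python example (the notes' own codes are in Matlab; the test problem $f(x)=-\pi^2\sin\pi x$, whose exact solution is $u=\sin\pi x$, is chosen here for illustration) builds the tridiagonal system (1.3.5) and solves it with the standard tridiagonal (Thomas) elimination:

```python
# Solve u'' = f on (0,1), u(0)=u(1)=0, with f(x) = -pi^2 sin(pi x),
# so the exact solution is u(x) = sin(pi x). Returns the max nodal error.
import math

def solve_bvp(n):
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    # Tridiagonal system (1.3.5): U[i-1] - 2 U[i] + U[i+1] = h^2 f(x_i)
    a = [1.0] * (n - 1)               # sub-diagonal
    b = [-2.0] * (n - 1)              # diagonal
    c = [1.0] * (n - 1)               # super-diagonal
    d = [h * h * (-math.pi**2) * math.sin(math.pi * xi) for xi in x[1:n]]
    for i in range(1, n - 1):         # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    U = [0.0] * (n - 1)               # backward substitution
    U[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        U[i] = (d[i] - c[i] * U[i + 1]) / b[i]
    return max(abs(U[i] - math.sin(math.pi * x[i + 1])) for i in range(n - 1))

err = solve_bvp(40)   # error should be O(h^2): halving h quarters the error
```

Doubling $n$ should reduce the error by roughly a factor of 4, consistent with the $O(h^2)$ truncation error of (1.3.4).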
- **Error analysis:** method error (from using finite differences to approximate derivatives) and machine error (roundoff errors).

We will discuss how to use computers to solve such systems of equations. Note that the coefficient matrix has a special structure: it is tridiagonal, and most entries are zero.

In two space dimensions, a useful partial differential equation is the Poisson equation

$$u_{xx}+u_{yy}=f(x,y). \qquad(1.3.6)$$

For a rectangular grid with a uniform mesh, the finite difference equation at a grid point $(x_i,y_j)$ is

$$\frac{U_{i-1,j}+U_{i+1,j}+U_{i,j-1}+U_{i,j+1}-4U_{ij}}{h^2}=f(x_i,y_j). \qquad(1.3.7)$$

For a general $n\times n$ grid, the coefficient matrix has the block-tridiagonal form

$$A=\frac{1}{h^2}\begin{bmatrix}B & I & &\\ I & B & I &\\ & \ddots & \ddots & \ddots\\ & & I & B\end{bmatrix},\qquad B=\begin{bmatrix}-4 & 1 & &\\ 1 & -4 & 1 &\\ & \ddots & \ddots & \ddots\\ & & 1 & -4\end{bmatrix}.$$

Note that the size of $A$ is $(n-1)^2\times(n-1)^2$. If we take $n=1001$, we will have one million unknowns, a very large number that most laptops today may not be able to handle if we store all the entries. For such a system of equations, an iterative method or sparse matrix techniques are preferred. We will come back to this later.

### 1.4 Continuous and discrete eigenvalue problems

Consider the differential equation

$$u''+\lambda u=0,\quad 0<x<1,\qquad u(0)=u(1)=0.^2 \qquad(1.4.1)$$

The solution is not unique. One particular solution is

$$u(x)=\sin(k\pi x), \qquad(1.4.2)$$

corresponding to the eigenvalue $\lambda_k=(k\pi)^2$, for $k=1,2,\dots$. The significance of the eigenfunctions and eigenvalues is the basis of Fourier series theory. For example, for the nonhomogeneous differential equation

$$u''=f(x),\quad 0<x<1,\qquad u(0)=u(1)=0, \qquad(1.4.3)$$

the solution can be expressed as

$$u(x)=\sum_{k=1}^{\infty}\alpha_k\sin(k\pi x), \qquad(1.4.4)$$

with

$$\alpha_k=-\frac{\int_0^1 f(x)\sin(k\pi x)\,dx}{\lambda_k\int_0^1\sin^2(k\pi x)\,dx}.$$

See an ordinary differential equations textbook for the details. A more general eigenvalue problem of this type would be

$$-(p\,u')'+q\,u=\lambda u,\qquad p(x)\ge p_0>0,\quad q(x)\ge 0, \qquad(1.4.6)$$

with more general boundary conditions; the associated numerical method is the spectral method. For example, if we apply the finite difference method discussed earlier, we would have

$$\frac{U_{i-1}-2U_i+U_{i+1}}{h^2}+\lambda U_i=0,\quad i=1,2,\dots,n-1. \qquad(1.4.7)$$

In matrix-vector form this is $A\mathbf{U}=\lambda\mathbf{U}$, which is an algebraic eigenvalue problem; the matrix $A$ is as given in (1.3.5).

[$^2$ For nonhomogeneous boundary conditions we can plug in the values $u(0)$ and $u(1)$ in place of 0.]

### 1.5 Objectives of the class

- Learn how to solve these problems (mainly linear algebra) efficiently and reliably on computers.
- Method and software selection: what to use for which problems.
- Related theory and analysis.
- Preparation for other courses such as MA 780, MA 584, MA 587, etc.

### 1.6 A computer number system

We want to use computers to solve mathematical problems, so we should know the number system in a computer. A computer number system is only a subset of the real number system. A number in a computer is represented as

$$x=\pm\,0.d_1 d_2\cdots d_n\times\beta^{s},\qquad 0\le d_i\le\beta-1,\quad s_{\min}\le s\le s_{\max}, \qquad(1.6.1)$$

that is, a sign, a mantissa (fraction), a base, and an exponent. For example, $0.110100_2\times 2^{3}$ (binary) and $0.31423\times 10^{1}$ (decimal) are such numbers (1.6.2). The common choices of base are: $\beta=2$, binary (primitive); $\beta=8$, octal (used for transition, convenient); $\beta=10$, decimal (customary); $\beta=16$, hexadecimal (saves storage).

The floating-point representation of $x$ is denoted $fl(x)$. We can see that the expression of a floating number is not unique. To get a unique expression, it is often required that $d_1\ne 0$ whenever $x$ is nonzero; such a floating number is called a normalized floating number. The number zero is expressed as $0.00\cdots0\times\beta^0$. Note that one bit is used to represent the sign of the exponent.

Often there are two number systems in a programming language for a particular computer: single precision, corresponding to 32 bits, and double precision, for 64 bits. In a 32-bit computer number system, the bits are divided among the sign, the exponent, and the fraction.

#### 1.6.1 Properties of a computer number system

- It is a subset of the real number system with a finite number of floating numbers. For a 32-bit system, the total number is roughly $2\,\beta^n(s_{\max}-s_{\min}+1)+1$.
- Even if $x$ and $y$ are in the computer number system, the result of an operation on them, for example $fl(x+y)$, can be outside the computer number system.
- It has maximum and minimum numbers, and maximum and nonzero minimum magnitudes. For a 32-bit system, the largest and smallest numbers can be calculated as follows:
the largest exponent is $2^7-1=127$; the largest fraction is $0.1111\cdots1_2=1-2^{-23}$; the largest positive number is therefore

$$2^{127}\,(1-2^{-23})\approx 1.7014116\times 10^{38}.$$

The smallest number is then $-1.7014116\times 10^{38}$. The smallest positive number (smallest magnitude) is

$$0.00\cdots01_2\times 2^{-127}=2^{-127-23}\approx 7.0064923\times 10^{-46}, \qquad(1.6.3)$$

while the smallest normalized magnitude is

$$0.100\cdots0_2\times 2^{-127}=2^{-128}\approx 2.9387359\times 10^{-39}. \qquad(1.6.4)$$

**Overflow and underflow.** If a computer system encounters a number whose magnitude is larger than the largest floating number of the system, it is called OVERFLOW. This often happens when a number is divided by zero, or when a function is evaluated outside its domain of definition, for example $\log(-5)$. Computers often return a symbol such as `NaN` or `Inf`, or simply stop the running process. Overflow can also happen when a number is divided by a very small number. An overflow often indicates a bug in the code and should be avoided.

If a computer system encounters a nonzero number whose magnitude is smaller than the smallest positive floating number of the system, it is called UNDERFLOW. Often the computer system sets this number to zero, and there is no harm to the running process.

The numbers in a computer number system are not evenly spaced: they are clustered around the origin and become sparser farther away.

While a computer number system is only a subset of the real number system, it is often good enough if we know how to use it. If a single precision system is not adequate, we can use the double precision system.

### 1.7 Roundoff errors and floating-point arithmetic

Since a computer number system is only a subset of the real number system, errors, called roundoff errors, are inevitable when we solve problems using computers. The questions we need to ask are how the errors affect the final results, and how to minimize their negative impact.

**Input errors.** When we input a number into a computer, it is likely to incur some error. For example, the number $\pi$ cannot be represented exactly in a computer number system, so the floating-point representation of $\pi$, denoted $fl(\pi)$, is different from $\pi$. The first question is how a computer system approximates $\pi$. The default is the round-off (rounding) approach. Let us take the decimal system as an example. Let $x$ be a real number within the range of the computer number system (in magnitude), expressed as a normalized floating number

$$x=0.d_1 d_2\cdots d_n d_{n+1}\cdots\times 10^b,\qquad d_1\ne 0. \qquad(1.7.1)$$

The floating number in the computer system using the rounding approach is

$$fl(x)=\begin{cases}0.d_1 d_2\cdots d_n\times 10^b & \text{if } d_{n+1}\le 4,\\[2pt] 0.d_1 d_2\cdots (d_n+1)\times 10^b & \text{if } d_{n+1}\ge 5.\end{cases} \qquad(1.7.2)$$

#### 1.7.1 Definition of errors

The absolute error is defined as the difference between the true value and the approximated value:

$$\text{absolute error}=\text{true}-\text{approximated}. \qquad(1.7.3)$$

Thus the error for $fl(x)$ is

$$\text{absolute error of } fl(x)=x-fl(x). \qquad(1.7.4)$$

The absolute error is often simply called "the error". The absolute error may not reflect reality: one who answers 995 of 1000 problems correctly is certainly better than one who answers 95 of 100, although both "errors" are 5. A more realistic error measurement is the relative error,

$$\text{relative error}=\frac{\text{absolute error}}{\text{true value}}; \qquad(1.7.5)$$

for example, if $x\ne 0$, the relative error of $fl(x)$ is $(x-fl(x))/x$.

Obviously, for different $x$ the error $x-fl(x)$ and the relative error are different. How do we then characterize the roundoff errors? We seek upper bounds, the worst case, which apply to all $x$. For the rounding approach we have

$$|x-fl(x)|\le 0.5\times 10^{b-n},$$

which depends only on the magnitude of $x$ (through $b$). The relative error satisfies

$$\frac{|x-fl(x)|}{|x|}\le\frac{0.5\times 10^{b-n}}{0.1\times 10^{b}}=\frac{1}{2}\times 10^{1-n}\;\overset{\text{define}}{=}\;\epsilon,\ \text{the machine precision}. \qquad(1.7.7)$$

Note that the upper bound of the relative error for the rounding approach is independent of $x$; it depends only on the computer number system. This upper bound is called the machine precision, or machine epsilon, and it indicates the best accuracy that we can expect using the computer number system. In general we have

$$\frac{|x-fl(x)|}{|x|}\le\frac{1}{2}\,\beta^{1-n} \qquad(1.7.8)$$

for any base $\beta$.
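The machine precision can also be probed empirically. A minimal Python sketch (Python floats are IEEE double precision, so the value found matches the double precision $\epsilon$ discussed next):

```python
# Find the smallest power of two eps such that 1 + eps > 1 in floating point.
# Halving continues while 1 + eps/2 is still distinguishable from 1.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
# For IEEE double precision this loop ends with eps = 2^-52.
```

Any positive number smaller than this `eps` (more precisely, smaller than `eps/2`) is lost when added to 1.0, which is exactly the "best accuracy" interpretation of the machine precision.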
For a single precision (32-bit) computer number system we have

$$\epsilon=\frac{1}{2}\,2^{1-23}=2^{-23}\approx 1.192093\times 10^{-7}. \qquad(1.7.9)$$

For a 64-bit (double precision) number system we have

$$\epsilon=\frac{1}{2}\,2^{1-52}=2^{-52}\approx 2.220446\times 10^{-16}. \qquad(1.7.10)$$

The relative error is closely associated with the concept of significant digits. In general, if a relative error is of order $10^{-5}$, for example, it is likely that the result has 5 significant digits. An approximate number can be regarded as a perturbation of the true value, according to the following theorem.

**Theorem 1.7.1.** If $x\in\mathbb{R}$, then $fl(x)=x(1+\delta)$ with $|\delta|\le\epsilon$; $\delta$ is the relative error.

There are other ways to input a number into a computer system. Two other approaches are rounding up and chopping, for which

$$fl(x)=0.d_1 d_2\cdots d_n\times 10^b \quad\text{(chopping)}. \qquad(1.7.11)\text{--}(1.7.12)$$

The error bounds are twice as large as those of the rounding approach.

[$^3$ According to the definition, $\epsilon\approx 1.2\times 10^{-7}$, which is the same as in Kahaner, Moler, and Nash's book, but twice as large as the result given in Demmel's book, which we think is wrong.]

#### 1.7.2 Error analysis of computer arithmetic

The primitive computer arithmetic includes only addition, subtraction, multiplication, division, and logical operations. Logical operations do not generate errors, but the basic arithmetic operations introduce errors. The error bounds are given in the following theorem.

**Theorem 1.7.2.** If $a$ and $b$ are two floating numbers in a computer number system and $fl(a\circ b)$ is in the range of the system, then

$$fl(a\circ b)=(a\circ b)(1+\delta),\qquad \circ\in\{+,-,\times,\div\}, \qquad(1.7.13)$$

where $|\delta|=|\delta(a,b)|\le\epsilon$ (1.7.14); we also have

$$fl(\sqrt{a})=\sqrt{a}\,(1+\delta),\qquad |\delta|\le\epsilon. \qquad(1.7.15)$$

Note that $\delta$ is the relative error of the operation (if $a\circ b\ne 0$) and is bounded by the machine precision, because

$$fl(a\circ b)-(a\circ b)=\delta\,(a\circ b)
\quad\Longrightarrow\quad
\frac{|fl(a\circ b)-(a\circ b)|}{|a\circ b|}=|\delta|\le\epsilon.$$

We conclude that the arithmetic operations within a computer number system give the "best" results that we can possibly get. Does this mean that we do not need to worry about roundoff errors at all? Of course not.

#### 1.7.3 Roundoff error analysis, and how to avoid roundoff errors

Now assume that $x$ and $y$ are two real numbers. When we input them into a computer, we get errors first:

$$fl(x)=x(1+\delta_x),\qquad fl(y)=y(1+\delta_y),$$

and then

$$fl\big(fl(x)\circ fl(y)\big)=\big(x(1+\delta_x)\circ y(1+\delta_y)\big)(1+\delta_{xy}),\qquad |\delta_x|,|\delta_y|,|\delta_{xy}|\le\epsilon.$$

Note that $\delta_x$, $\delta_y$, $\delta_{xy}$ are different numbers, although they have the same upper bound. We distinguish several different cases.

- **Multiplication/division** ($\times$ or $\div$). Taking multiplication as an example, we have

$$fl\big(fl(x)\,fl(y)\big)=x(1+\delta_x)\,y(1+\delta_y)(1+\delta_{xy})
= xy\,(1+\delta_x+\delta_y+\delta_{xy}+\text{h.o.t.})= xy\,(1+\delta),\qquad |\delta|\lesssim 3\epsilon.$$

Often we ignore the high-order terms (h.o.t.) since they are much smaller (for single precision, $10^{-7}$ versus $10^{-14}$). Thus $\delta$ is the relative error as before; the bound $3\epsilon$ is understandable, with two input errors and one from the multiplication. The absolute error is $xy\,\delta$, which is bounded by $3\epsilon|xy|$. The same bounds hold for division if the divisor is not zero. Thus errors from multiplications and divisions are not a big concern here, but we should avoid dividing by small numbers if possible.

- **Addition/subtraction** (an addition can be treated as a subtraction, since $a+b=a-(-b)$, and vice versa). Now we have

$$fl\big(fl(x)-fl(y)\big)=\big(x(1+\delta_x)-y(1+\delta_y)\big)(1+\delta_{xy})
= x-y+x\delta_x-y\delta_y+(x-y)\delta_{xy}+\text{h.o.t.}$$

The absolute error is

$$\big|fl\big(fl(x)-fl(y)\big)-(x-y)\big|\le |x|\epsilon+|y|\epsilon+|x-y|\epsilon,$$

which does not seem too bad. But the relative error may be unbounded, because

$$\frac{|x\delta_x-y\delta_y+(x-y)\delta_{xy}|}{|x-y|}\le\frac{|x\delta_x-y\delta_y|}{|x-y|}+\epsilon.$$

In general $\delta_x\ne\delta_y$, even though both are very small and share the same upper bound. Thus the relative error can be arbitrarily large if $x$ and $y$ are very close. That means addition/subtraction can lead to loss of accuracy, or of significant digits. It is also called catastrophic cancellation, as illustrated in the following example:

$$0.31343639-0.31343637=0.00000002.$$

If the last two digits of the two numbers are wrong, as is likely in many circumstances, then there is no significant digit left in the result. In this example the absolute error is still small, but the relative error is very large.

**Roundoff error analysis summary.**

- Use the formulas $fl(x)=x(1+\delta_x)$, $fl(x\circ y)=(x\circ y)(1+\delta)$, etc. Expand and collect terms.
- Ignore high-order terms.
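The cancellation effect described above is easy to reproduce. A small Python sketch (IEEE double precision, so $\epsilon/2=2^{-53}$):

```python
# Adding a number smaller than eps/2 to 1.0 is lost entirely, so the
# subsequent subtraction returns 0 instead of the true value.
x = 1.0e-16                 # below eps/2 = 2^-53 for IEEE doubles
y = (1.0 + x) - 1.0         # mathematically x, but evaluates to 0.0
rel_err = abs(y - x) / x    # relative error is 1: no significant digits left
```

The absolute error here is tiny ($10^{-16}$), but the relative error is 100%, exactly the unbounded-relative-error phenomenon derived above.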
#### 1.7.4 An example of accuracy loss

Assume we want to solve a quadratic equation $ax^2+bx+c=0$ on a computer. How do we do it? First of all, we need to write down the algorithm mathematically before coding it. There are at least three methods:

- **Algorithm 1:** $x_1=\dfrac{-b+\sqrt{b^2-4ac}}{2a}$, $\quad x_2=\dfrac{-b-\sqrt{b^2-4ac}}{2a}$.
- **Algorithm 2:** $x_2=\dfrac{-b-\sqrt{b^2-4ac}}{2a}$, $\quad x_1=\dfrac{c}{a\,x_2}$.
- **Algorithm 3:** $x_1=\dfrac{-b+\sqrt{b^2-4ac}}{2a}$, $\quad x_2=\dfrac{c}{a\,x_1}$.

Mathematically these are all equivalent (thus they are all called consistent), but occasionally they may give very different results, especially if $c$ is very small. When we select the algorithm to run on a computer, we should choose Algorithm 2 if $b\ge 0$ and Algorithm 3 if $b<0$ (why?). This can be done with a conditional expression in any computer language.

Let us check the simple case $a=1$, $b=2$: $x^2+2x+e=0$. When $e$ is very small we have

$$x_1=\frac{-2+\sqrt{4-4e}}{2}=-1+\sqrt{1-e}=\frac{-e}{1+\sqrt{1-e}}\approx-\frac{e}{2},$$

where the last expression is obtained by rationalizing the numerator. Below is a Matlab code illustrating the different algorithms (reconstructed from a garbled source; the original compared more variants):

```matlab
function [y1,y2,y3,y4] = quaderr(e)
% Roots of x^2 + 2x + e = 0 computed in different ways.
y1 = (-2 + sqrt(4 - 4*e))/2;    % Algorithm 1: cancellation for small e
y2 = (-2 - sqrt(4 - 4*e))/2;    % no cancellation
y3 = e/y2;                      % Algorithm 2: stable, since y1*y2 = e
y4 = -2*e/(2 + sqrt(4 - 4*e));  % rationalized formula for y1
```

By trying various inputs $e$ we can see how the accuracy is lost. In general, when $e\approx 10^{-k}$ we lose about $k$ significant digits in the cancellation-prone formula.

#### 1.7.5 How to avoid loss of accuracy

- Use a different formula, for example

$$-b+\sqrt{b^2-4ac}=\frac{-4ac}{b+\sqrt{b^2-4ac}}\quad\text{if } b>0.$$

- Use Taylor expansions, for example

$$1-\cos x=\frac{x^2}{2}-\frac{x^4}{24}+\cdots\quad\text{for small } x.$$

- Another rule of thumb, for summations $fl\big(\sum_i x_i\big)$: add the numbers with small magnitude first, to avoid "large numbers eating small numbers".

### 1.8 Some basic algorithms and Matlab codes

**Sum** $\sum_i a_i$:

```matlab
s = 0;                % initialize
for i = 1:n
    s = s + a(i);     % a common mistake is to forget the "s +" here
end
```

**Product** $\prod_i a_i$:

```matlab
s = 1;                % initialize
for i = 1:n
    s = s * a(i);     % a common mistake is to forget the "s *" here
end
```

**Example: matrix-vector multiplication** $\mathbf{y}=A\mathbf{x}$. In Matlab we can simply use `y = A*x`, or we can use the component form so that we can easily convert the code to other computer languages. We can put the following into a Matlab M-file, say `testAx.m`:

```matlab
n = 100; A = rand(n); x = rand(n,1);   % generate a set of data
for i = 1:n
    y(i) = 0;                          % initialize
    for j = 1:n
        y(i) = y(i) + A(i,j)*x(j);     % use ";" to suppress the outputs
    end
end
```

We wish to develop efficient algorithms: fast, low storage, accurate, and easy to program. Note that Matlab is case sensitive, and the indices of arrays must be positive integers (they cannot be zero).

#### 1.8.1 Horner's algorithm for computing a polynomial

Consider a polynomial $p_n(x)=a_n x^n+a_{n-1}x^{n-1}+\cdots+a_1 x+a_0$. How do we evaluate its value at a point $x$? We can store the coefficients first in an array, say $a(1)=a_0,\ a(2)=a_1,\dots,a(n+1)=a_n$ (since we cannot use $a(0)$ in Matlab), then evaluate $p_n(x)$ using

```matlab
p = a(1);
for i = 1:n
    p = p + a(i+1)*x^i;
end
```

The total cost is about $n^2/2$ multiplications and $n$ additions. However, from the nested form

$$p_n(x)=a_0+x\big(a_1+x(a_2+\cdots+x(a_{n-1}+x\,a_n)\cdots)\big),$$

for example $p_3(x)=a_0+x\big(a_1+x(a_2+x\,a_3)\big)$, we can form Horner's algorithm (pseudocode):

```
p = a_n
for i = n-1, n-2, ..., 0
    p = x*p + a_i
end
```

which requires only $n$ multiplications and $n$ additions.

## Chapter 2: Vector and matrix norms

Vector and matrix norms are generalizations of the absolute value function of a single variable. There are at least two motivations for using them:

- Error estimates and analysis: how do we measure $\|fl(\mathbf{x})-\mathbf{x}\|$ when $\mathbf{x}$ is a vector, or $\|fl(A)-A\|$ when $A$ is a matrix?
- Convergence analysis for iterative methods: how do we know whether a vector sequence $\{\mathbf{x}^{(k)}\}$ converges?

### 2.0.2 Definition of a vector norm

Given the space $\mathbb{R}^n\ni\mathbf{x}=(x_1,\dots,x_n)^T$, a vector norm is defined as a multivariable function $f(\mathbf{x})$ of the components $x_1,x_2,\dots,x_n$ satisfying:

1. $f(\mathbf{x})\ge 0$ for all $\mathbf{x}\in\mathbb{R}^n$, and $f(\mathbf{x})=0$ if and only if $\mathbf{x}=\mathbf{0}$;
2. $f(\alpha\mathbf{x})=|\alpha|\,f(\mathbf{x})$;
3. $f(\mathbf{x}+\mathbf{y})\le f(\mathbf{x})+f(\mathbf{y})$ (the triangle inequality).

If a function $f(\mathbf{x})$ satisfies (1)-(3), we use the special notation $f(\mathbf{x})=\|\mathbf{x}\|$ and call this function a norm on $\mathbb{R}^n$.

**An example of a vector norm.** Let $f(\mathbf{x})=\max\{|x_1|,|x_2|,\dots,|x_n|\}$. Is $f$ a vector norm?

- It is obvious that $f(\mathbf{x})\ge 0$. If $\mathbf{x}=\mathbf{0}$ then all $x_i=0$, so $\max_i|x_i|=0$, that is, $f(\mathbf{x})=0$. On the other hand, if $\max_i|x_i|=0$, then the largest magnitude is zero, which means all the components have to be zero; we conclude $\mathbf{x}=\mathbf{0}$.
- $f(\alpha\mathbf{x})=\max_i|\alpha x_i|=|\alpha|\max_i|x_i|=|\alpha|\,f(\mathbf{x})$.
- $f(\mathbf{x}+\mathbf{y})=\max_i|x_i+y_i|\le\max_i\big(|x_i|+|y_i|\big)\le\max_i|x_i|+\max_i|y_i|=f(\mathbf{x})+f(\mathbf{y})$.

Therefore $f(\mathbf{x})$ is a vector norm. It is called the infinity norm or maximum norm and is denoted $f(\mathbf{x})=\|\mathbf{x}\|_\infty$.

Are the following functions vector norms?

- $f(\mathbf{x})=|x_1|+5$? No, since $f(\mathbf{0})\ne 0$.
- $f(\mathbf{x})=\max_i|x_i|/|x_1|$? No: $f(\alpha\mathbf{x})=f(\mathbf{x})$, so property 2 fails, and $f$ has no definition for those vectors whose first component is zero.
- $f(\mathbf{x})=\Big(\sum_{i=1}^n x_i^2\Big)^{1/2}$? Yes. It is called the 2-norm or Euclidean norm and is denoted $\|\mathbf{x}\|_2$. A sketch of the proof is given below.

**Sketch of the proof for the 2-norm.** Properties (1) and (2) are obvious. The triangle inequality is

$$\Big(\sum_{i=1}^n(x_i+y_i)^2\Big)^{1/2}\le\Big(\sum_{i=1}^n x_i^2\Big)^{1/2}+\Big(\sum_{i=1}^n y_i^2\Big)^{1/2},$$

which, after squaring both sides and cancelling common terms, is equivalent to the Cauchy-Schwarz inequality

$$\Big|\sum_{i=1}^n x_i y_i\Big|\le\Big(\sum_{i=1}^n x_i^2\Big)^{1/2}\Big(\sum_{i=1}^n y_i^2\Big)^{1/2}.$$

To prove this inequality, consider the special quadratic function

$$g(\lambda)=\sum_{i=1}^n(x_i+\lambda y_i)^2=\sum_{i=1}^n x_i^2+2\lambda\sum_{i=1}^n x_i y_i+\lambda^2\sum_{i=1}^n y_i^2.$$

The function $g$ is nonnegative, so the quadratic equation $g(\lambda)=0$ has at most one real root (or no real roots). Therefore the discriminant must satisfy

$$4\Big(\sum_{i=1}^n x_i y_i\Big)^2-4\sum_{i=1}^n x_i^2\sum_{i=1}^n y_i^2\le 0,$$

which is equivalent to the Cauchy-Schwarz inequality. This concludes the proof. There are Cauchy-Schwarz inequalities in other spaces, for example $L^2$ space, Sobolev spaces, and Hilbert spaces; the proof process is similar. A special case is $y_i\equiv 1$, for which we have

$$\Big|\sum_{i=1}^n x_i\Big|\le\sqrt{n}\,\Big(\sum_{i=1}^n x_i^2\Big)^{1/2}.$$

### 2.0.3 1-norm and p-norms

The function $\|\mathbf{x}\|_1=\sum_{i=1}^n|x_i|$ is also a vector norm; it is called the 1-norm.

**An example.** Let $\mathbf{x}=(5,-1)^T$; find $\|\mathbf{x}\|_p$ for $p=1,2,\infty$. The solution is

$$\|\mathbf{x}\|_1=6,\qquad \|\mathbf{x}\|_\infty=5,\qquad \|\mathbf{x}\|_2=\sqrt{26}.$$

Note that $\|\mathbf{x}\|_\infty\le\|\mathbf{x}\|_2\le\|\mathbf{x}\|_1$; this is true for any $\mathbf{x}$. In general we can define the p-norm for $p\ge 1$:

$$\|\mathbf{x}\|_p=\Big(\sum_{i=1}^n|x_i|^p\Big)^{1/p}.$$

### 2.0.4 Some properties of vector norms

- There are infinitely many vector norms (a norm is not unique).
- All vector norms are equivalent in a finite-dimensional space. That is, given any two vector norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$, there are two constants $c_{\alpha\beta}>0$ and $C_{\alpha\beta}>0$ such that

$$c_{\alpha\beta}\|\mathbf{x}\|_\beta\le\|\mathbf{x}\|_\alpha\le C_{\alpha\beta}\|\mathbf{x}\|_\beta.$$

From the above inequality we immediately have

$$\frac{1}{C_{\alpha\beta}}\|\mathbf{x}\|_\alpha\le\|\mathbf{x}\|_\beta\le\frac{1}{c_{\alpha\beta}}\|\mathbf{x}\|_\alpha.$$

Note that the constants are independent of $\mathbf{x}$ but may depend on the dimension $n$. Thus the one-, two-, and infinity-norms of a vector are all equivalent. What are the smallest $C$ and the largest $c$ constants?

**Theorem 2.0.1.**

$$\|\mathbf{x}\|_2\le\|\mathbf{x}\|_1\le\sqrt{n}\,\|\mathbf{x}\|_2,\qquad
\|\mathbf{x}\|_\infty\le\|\mathbf{x}\|_2\le\sqrt{n}\,\|\mathbf{x}\|_\infty,\qquad
\|\mathbf{x}\|_\infty\le\|\mathbf{x}\|_1\le n\,\|\mathbf{x}\|_\infty.$$

As an illustration, we prove $\|\mathbf{x}\|_2\le\|\mathbf{x}\|_1\le\sqrt{n}\,\|\mathbf{x}\|_2$.

*Proof.*

$$\|\mathbf{x}\|_1^2=\Big(\sum_{i=1}^n|x_i|\Big)^2=\sum_{i=1}^n x_i^2+2\sum_{i<j}|x_i||x_j|\ \ge\ \sum_{i=1}^n x_i^2=\|\mathbf{x}\|_2^2.$$

On the other hand, from the Cauchy-Schwarz inequality we have already known that

$$\|\mathbf{x}\|_1=\sum_{i=1}^n|x_i|\cdot 1\le\sqrt{n}\,\Big(\sum_{i=1}^n x_i^2\Big)^{1/2}=\sqrt{n}\,\|\mathbf{x}\|_2.\qquad\blacksquare$$
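The definitions and the equivalence bounds of Theorem 2.0.1 are straightforward to check numerically. A small Python sketch (the vector chosen here is hypothetical, for illustration only):

```python
# Compute the 1-, 2-, and infinity-norms of a vector directly from their
# definitions, then check the equivalence bounds of Theorem 2.0.1.
import math

def norms(x):
    n1 = sum(abs(v) for v in x)                 # 1-norm
    n2 = math.sqrt(sum(v * v for v in x))       # 2-norm
    ninf = max(abs(v) for v in x)               # infinity norm
    return n1, n2, ninf

x = [5.0, -1.0, 2.0]
n1, n2, ninf = norms(x)
rootn = math.sqrt(len(x))
# Theorem 2.0.1: ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2, and
#                ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf.
```

For this vector $\|\mathbf{x}\|_1=8$, $\|\mathbf{x}\|_2=\sqrt{30}$, $\|\mathbf{x}\|_\infty=5$, and all the inequalities of the theorem hold.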
### 2.0.5 Connection between the inner product and $\|\mathbf{x}\|_2$

The inner product of two vectors $\mathbf{x},\mathbf{y}\in\mathbb{R}^n$ is defined as

$$(\mathbf{x},\mathbf{y})=\sum_{i=1}^n x_i y_i.$$

In particular, if $\mathbf{y}=\mathbf{x}$ we have $(\mathbf{x},\mathbf{x})=\sum_{i=1}^n x_i^2=\|\mathbf{x}\|_2^2$. From the Cauchy-Schwarz inequality we also have

$$|(\mathbf{x},\mathbf{y})|\le\Big(\sum_{i=1}^n x_i^2\Big)^{1/2}\Big(\sum_{i=1}^n y_i^2\Big)^{1/2}=\|\mathbf{x}\|_2\,\|\mathbf{y}\|_2.$$

### 2.1 Matrix norms

There are two ways to define a matrix norm. The first uses the same definition as a vector norm.

**Definition.** A matrix norm is a multivariable function $f(A)$ of the entries of $A$ that satisfies:

1. $f(A)\ge 0$ for all $A\in\mathbb{R}^{m\times n}$, and $f(A)=0$ if and only if $A=0$;
2. $f(\alpha A)=|\alpha|\,f(A)$;
3. $f(A+B)\le f(A)+f(B)$ (the triangle inequality).

If a function $f(A)$ satisfies (1)-(3), we use the special notation $f(A)=\|A\|$ and call this function a norm on $\mathbb{R}^{m\times n}$.

Alternatively, we can treat a matrix as a long vector and use the definition of a vector norm. For example, for $A\in\mathbb{R}^{m\times n}$, if we treat the matrix as a long vector (either row-wise or column-wise), the 2-norm, now called the Frobenius norm of the matrix, is

$$\|A\|_F=\Big(\sum_{i=1}^m\sum_{j=1}^n a_{ij}^2\Big)^{1/2}. \qquad(2.1.1)$$

For an $n\times n$ identity matrix we have $\|I\|_F=\sqrt{n}$, instead of $\|I\|=1$ as we might have expected. Since matrices are often used along with vectors, e.g. $A\mathbf{x}=\mathbf{b}$ or $\mathbf{y}=A\mathbf{x}$, it is natural to define a matrix norm from a vector norm.

#### 2.1.1 Associated (induced, subordinate) matrix norms

**Theorem 2.1.1.** Given a vector norm $\|\cdot\|$ on $\mathbb{R}^n$, the function of a matrix $A\in\mathbb{R}^{m\times n}$ defined by

$$f(A)=\max_{\mathbf{x}\ne 0}\frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}$$

is a matrix norm on $\mathbb{R}^{m\times n}$. It is called the associated (or induced, or subordinate) matrix norm.

*Proof.* It is obvious that $f(A)\ge 0$. If $A=0$, then $A\mathbf{x}=\mathbf{0}$ for any $\mathbf{x}$, so $\|A\mathbf{x}\|=0$ and thus $f(A)=0$. On the other hand, if $f(A)=0$, then we must have $A=0$. Otherwise there would be a nonzero entry $a_{ij}\ne 0$, and the vector $A\mathbf{e}_j$ (the $j$-th column of $A$) would be nonzero, since one of its components is $a_{ij}$. Therefore $\|A\mathbf{e}_j\|\ne 0$, and $f(A)\ge\|A\mathbf{e}_j\|/\|\mathbf{e}_j\|>0$, which contradicts $f(A)=0$.

$$f(\alpha A)=\max_{\mathbf{x}\ne 0}\frac{\|\alpha A\mathbf{x}\|}{\|\mathbf{x}\|}=|\alpha|\max_{\mathbf{x}\ne 0}\frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}=|\alpha|\,f(A).$$

For any $\mathbf{x}\ne 0$, from the properties of a vector norm we have

$$\frac{\|(A+B)\mathbf{x}\|}{\|\mathbf{x}\|}\le\frac{\|A\mathbf{x}\|+\|B\mathbf{x}\|}{\|\mathbf{x}\|}
\le\max_{\mathbf{x}\ne 0}\frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}+\max_{\mathbf{x}\ne 0}\frac{\|B\mathbf{x}\|}{\|\mathbf{x}\|}=f(A)+f(B).\qquad\blacksquare$$

#### 2.1.2 Properties of associated matrix norms

From the definition of associated matrix norms we can conclude the following important properties.

- $\|I\|=1$. This is obvious since $\max_{\mathbf{x}\ne 0}\|I\mathbf{x}\|/\|\mathbf{x}\|=1$.
- $\|A\mathbf{x}\|\le\|A\|\,\|\mathbf{x}\|$ for any $\mathbf{x}$. It is obviously true if $\mathbf{x}=\mathbf{0}$. *Proof:* if $\mathbf{x}\ne\mathbf{0}$, then $\|A\mathbf{x}\|/\|\mathbf{x}\|\le\|A\|$; multiplying $\|\mathbf{x}\|$ to both sides, we get $\|A\mathbf{x}\|\le\|A\|\,\|\mathbf{x}\|$.
- $\|AB\|\le\|A\|\,\|B\|$ for any $A$ and $B$ such that $AB$ exists. *Proof:* according to the definition,

$$\|AB\|=\max_{\mathbf{x}\ne 0}\frac{\|AB\mathbf{x}\|}{\|\mathbf{x}\|}
\le\max_{\mathbf{x}\ne 0}\frac{\|A\|\,\|B\mathbf{x}\|}{\|\mathbf{x}\|}
=\|A\|\max_{\mathbf{x}\ne 0}\frac{\|B\mathbf{x}\|}{\|\mathbf{x}\|}=\|A\|\,\|B\|.$$

#### 2.1.3 Some commonly used matrix norms

For any vector norm there is an associated matrix norm. Since we know how to evaluate $\|\mathbf{x}\|_p$ for $p=1,2,\infty$, we should know how to evaluate $\|A\|_p$ as well; it is not practical to use the definition all the time.

**Theorem 2.1.2.**

$$\|A\|_\infty=\max_{1\le i\le m}\sum_{j=1}^n|a_{ij}|\ \text{(maximum absolute row sum)},\qquad
\|A\|_1=\max_{1\le j\le n}\sum_{i=1}^m|a_{ij}|\ \text{(maximum absolute column sum)}. \qquad(2.1.4)$$

In other words, for the infinity norm of the matrix $A$ we add the magnitudes of the elements in each row, then select the largest; for the 1-norm we do the same over columns.

**An example.** Let

$$A=\begin{bmatrix}5 & 0 & 7\\ 1 & 3 & 1\\ 0 & 0 & 1\end{bmatrix}.$$

The row sums of magnitudes are 12, 5, 1 and the column sums are 6, 3, 9, so $\|A\|_\infty=12$ and $\|A\|_1=9$.

*Proof for the infinity norm.* Let $M=\max_i\sum_j|a_{ij}|$. The proof has two parts: (a) show that $\|A\|_\infty\le M$; (b) find a specific $\mathbf{x}^*$ with $\|\mathbf{x}^*\|_\infty=1$ such that $\|A\mathbf{x}^*\|_\infty\ge M$.

(a) For any $\mathbf{x}$ with $\|\mathbf{x}\|_\infty\le 1$ we have $(A\mathbf{x})_i=\sum_j a_{ij}x_j$, so

$$|(A\mathbf{x})_i|\le\sum_j|a_{ij}||x_j|\le\sum_j|a_{ij}|\le M.$$

(b) Now we conduct the second step. The largest sum of magnitudes occurs at one (or more) particular rows, say the $i^*$-th row. We choose

$$x_j^*=\operatorname{sign}(a_{i^*j})=\begin{cases}1 & \text{if } a_{i^*j}>0,\\ 0 & \text{if } a_{i^*j}=0,\\ -1 & \text{if } a_{i^*j}<0.\end{cases}$$

Thus we get

$$\|A\mathbf{x}^*\|_\infty\ge|(A\mathbf{x}^*)_{i^*}|=\Big|\sum_j a_{i^*j}\operatorname{sign}(a_{i^*j})\Big|=\sum_j|a_{i^*j}|=M.$$

That completes the proof. $\blacksquare$

**Theorem 2.1.3.** $\|A\|_2=\sqrt{\max\{\lambda_1(A^HA),\lambda_2(A^HA),\dots,\lambda_n(A^HA)\}}=\sqrt{\lambda_{\max}(A^HA)}$, where $\lambda_i(A^HA)$, $i=1,2,\dots,n$, are the eigenvalues of $A^HA$ and $A^H$ is the conjugate transpose of $A$. The proof is left as an exercise.

Note that even if $A$ is a complex matrix, $A^HA$ and $AA^H$ are symmetric (Hermitian) semi-positive definite matrices. A symmetric semi-positive definite matrix $B$ ($B=B^H$) satisfies the following:

- For any $\mathbf{x}$, $\mathbf{x}^HB\mathbf{x}\ge 0$ (note that if $B$ is real, $\mathbf{x}^HB\mathbf{x}=\sum_{i,j}b_{ij}x_ix_j$);
- all eigenvalues of $B$ are nonnegative.

**An example.** Consider the matrix

$$A=\begin{bmatrix}1 & 1\\ 0 & 4\end{bmatrix}.$$

We have $\|A\|_\infty=4$ and $\|A\|_1=5$. To get $\|A\|_2$, we compute

$$A^TA=\begin{bmatrix}1 & 0\\ 1 & 4\end{bmatrix}\begin{bmatrix}1 & 1\\ 0 & 4\end{bmatrix}=\begin{bmatrix}1 & 1\\ 1 & 17\end{bmatrix}.$$

Thus $\det(\lambda I-A^TA)=(\lambda-1)(\lambda-17)-1=\lambda^2-18\lambda+16=0$, whose roots are $\lambda=\big(18\pm\sqrt{18^2-64}\big)/2=9\pm\sqrt{65}$. We conclude that

$$\|A\|_2=\sqrt{9+\sqrt{65}}\approx 4.1306.$$

We can see that $\|A\|_2$ is more difficult to compute. However, if $A$ is symmetric, that is $A=A^H$, then one can show that $\|A\|_2=\max_i|\lambda_i(A)|$. Furthermore, if $\det(A)\ne 0$ (that is, $A$ is invertible), then

$$\|A^{-1}\|_2=\frac{1}{\min_i|\lambda_i(A)|}\qquad\text{if $A$ is symmetric and }\det(A)\ne 0.$$

## Chapter 3: Solving a linear system of equations — direct methods

In this chapter we discuss some of the most used direct methods for solving a linear system of equations of the form

$$A\mathbf{x}=\mathbf{b},\qquad A\in\mathbb{R}^{n\times n},\ \mathbf{b}\in\mathbb{R}^n,\ \det(A)\ne 0. \qquad(3.0.1)$$

The condition $\det(A)\ne 0$ has the following equivalent statements:

- the linear system $A\mathbf{x}=\mathbf{b}$ has a unique solution;
- $A$ is invertible, that is, $A^{-1}$ exists.

Almost all direct methods are based on Gaussian elimination. A direct method is a method that returns the exact solution in a finite number of operations under exact computation (no roundoff errors present). Such a method is often suitable for small to modest dense matrices.

The main idea of the Gaussian elimination algorithm is based on the following observation. Consider the upper triangular system of equations

$$\begin{bmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\ & a_{22} & \cdots & a_{2n}\\ & & \ddots & \vdots\\ & & & a_{nn}\end{bmatrix}
\begin{bmatrix}x_1\\ x_2\\ \vdots\\ x_n\end{bmatrix}=\begin{bmatrix}b_1\\ b_2\\ \vdots\\ b_n\end{bmatrix}. \qquad(3.0.2)$$

From the structure of the system of equations we can solve backwards:

- From the last equation, $a_{nn}x_n=b_n$, we get $x_n=b_n/a_{nn}$.
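The worked example above is easy to verify by hand or in code. A small Python sketch (computing $\|A\|_\infty$ and $\|A\|_1$ from Theorem 2.1.2, and $\|A\|_2$ from the eigenvalues of the $2\times 2$ matrix $A^TA$ via the quadratic formula):

```python
# Verify: for A = [[1,1],[0,4]], ||A||_inf = 4 (max row sum),
# ||A||_1 = 5 (max column sum), ||A||_2 = sqrt(lambda_max(A^T A)).
import math

A = [[1.0, 1.0], [0.0, 4.0]]
norm_inf = max(sum(abs(v) for v in row) for row in A)
norm_1 = max(sum(abs(A[i][j]) for i in range(2)) for j in range(2))
# B = A^T A; for this A, B = [[1, 1], [1, 17]]
B = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
# Eigenvalues of the symmetric 2x2 matrix B from trace and determinant
tr = B[0][0] + B[1][1]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
lam_max = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # = 9 + sqrt(65)
norm_2 = math.sqrt(lam_max)                          # approx 4.1306
```

The trace/determinant shortcut works because the eigenvalues of a $2\times 2$ matrix are the roots of $\lambda^2-\mathrm{tr}\,\lambda+\det=0$, which is exactly the characteristic polynomial computed in the example.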
- From the second-to-last equation a_{n−1,n−1}x_{n−1} + a_{n−1,n}xₙ = b_{n−1} we get x_{n−1} = (b_{n−1} − a_{n−1,n}xₙ)/a_{n−1,n−1}. Note that we need the result xₙ from the previous step.
- In general (the idea of induction), assume we have computed xₙ, x_{n−1}, …, x_{i+1}. From the ith equation a_{ii}x_i + a_{i,i+1}x_{i+1} + ⋯ + a_{in}xₙ = b_i we get

      x_i = ( b_i − Σ_{j=i+1}^{n} a_{ij} x_j ) / a_{ii},   i = n, n−1, …, 1.   (3.0.3)

Equation (3.0.3) is called the backward substitution. A pseudocode is given below:

    for i = n, n−1, …, 1
        x_i = ( b_i − Σ_{j=i+1}^{n} a_{ij} x_j ) / a_{ii}
    endfor

Below is a Matlab function for the backward substitution (note that the final division must be by the diagonal entry a(i,i)):

    function x = backward(n, a, b)
    for i = n:-1:1
        x(i) = b(i);
        for j = i+1:n
            x(i) = x(i) - a(i,j)*x(j);
        end
        x(i) = x(i)/a(i,i);
    end

We can count the number of operations. In the ith step there are n − i multiplications, one division, and n − i subtractions. Thus the total number of multiplications and divisions is

    n + (n − 1) + ⋯ + 1 = n(n + 1)/2 ≈ n²/2 = O(n²),

and the total number of additions/subtractions is

    (n − 1) + (n − 2) + ⋯ + 1 = n(n − 1)/2 ≈ n²/2.

The total cost is only about that of one matrix-vector multiplication, which is considered to be very fast.

3.0.4 Derivation of the Gaussian elimination algorithm

The main idea of Gaussian elimination (GE) is to use row transforms to reduce the system to an upper triangular one while keeping the solution unchanged. For this purpose we can apply the Gaussian elimination to the coefficient matrix, or to the augmented matrix [A | b], that is, the matrix enlarged by one column. In a 4 × 4 example, the first step zeroes out the entries below the diagonal in the first column, the second step those in the second column, and so on; we need n − 1 steps to reach an upper triangular form. In order to do these steps we multiply a sequence of simple matrices with the augmented (or original) matrix:

    L_{n−1} L_{n−2} ⋯ L₂ L₁ [A | b].   (3.0.4)

We need to derive the recursive relations so that we can implement the algorithm. The general procedure is to derive the first step (maybe the second step if necessary) and the general step, to see if we have a complete
algorithm. For convenience we denote the right-hand side as the (n+1)st column of the augmented matrix,

    b = (a_{1,n+1}, a_{2,n+1}, …, a_{n,n+1})ᵀ.

In the first step we set

    L₁[A | b] = [  1                    [ a₁₁  a₁₂  ⋯  a₁ₙ | a_{1,n+1}
                  −l₂₁  1                 a₂₁  a₂₂  ⋯  a₂ₙ | a_{2,n+1}
                  −l₃₁  0  1               ⋮                    ⋮
                    ⋮         ⋱
                  −lₙ₁  0  ⋯  1 ]        aₙ₁  aₙ₂  ⋯  aₙₙ | a_{n,n+1} ]

              = [ a₁₁  a₁₂     ⋯  a₁ₙ     | a_{1,n+1}
                   0   a₂₂⁽²⁾  ⋯  a₂ₙ⁽²⁾  | a_{2,n+1}⁽²⁾
                   ⋮                           ⋮
                   0   aₙ₂⁽²⁾  ⋯  aₙₙ⁽²⁾  | a_{n,n+1}⁽²⁾ ].

We need to derive the formulas for the l_{i1} and the a_{ij}⁽²⁾. We multiply the 2nd row of L₁ with the 1st column of [A | b] to get

    −l₂₁a₁₁ + a₂₁ = 0  ⟹  l₂₁ = a₂₁/a₁₁.

We multiply the 3rd row of L₁ with the 1st column to get

    −l₃₁a₁₁ + a₃₁ = 0  ⟹  l₃₁ = a₃₁/a₁₁.

In general, multiplying the ith row of L₁ with the 1st column gives

    l_{i1} = a_{i1}/a₁₁,   i = 2, 3, …, n.

So we have the general formula for L₁. Now we consider the general formula for a_{ij}⁽²⁾, i = 2, 3, …, n, j = 2, 3, …, n, n+1. We multiply the ith row of L₁ with the jth column of [A | b] to get

    a_{ij}⁽²⁾ = a_{ij} − l_{i1} a_{1j},   i = 2, 3, …, n,  j = 2, 3, …, n, n+1.

Let us do one more step:

    L₂L₁[A | b] = L₂A⁽²⁾ = A⁽³⁾,

where L₂ is the identity except for the entries −l_{i2} (i = 3, …, n) below the diagonal in the second column, and A⁽³⁾ has zeros below the diagonal in its first two columns. Since the formulas only depend on the indices, we just need to substitute 1 → 2 and 2 → 3 to get

    l_{i2} = a_{i2}⁽²⁾ / a₂₂⁽²⁾,   i = 3, 4, …, n,
    a_{ij}⁽³⁾ = a_{ij}⁽²⁾ − l_{i2} a_{2j}⁽²⁾,   i = 3, 4, …, n,  j = 3, 4, …, n, n+1.

Since there are n − 1 steps, in the kth step we have

    l_{ik} = a_{ik}⁽ᵏ⁾ / a_{kk}⁽ᵏ⁾,   i = k+1, …, n,
    a_{ij}⁽ᵏ⁺¹⁾ = a_{ij}⁽ᵏ⁾ − l_{ik} a_{kj}⁽ᵏ⁾,   i = k+1, …, n,  j = k+1, …, n, n+1.

Important observations on the Gaussian elimination process:

- After we get a_{ij}⁽ᵏ⁺¹⁾ we do not need a_{ij}⁽ᵏ⁾ anymore. We can overwrite to save storage.
- We do not need to store the zeros below the diagonal. Often we store the multipliers l_{ik} there.

Below is the pseudocode using the overwrite:

    for k = 1 : n − 1
        for i = k + 1 : n
            a_{ik} = a_{ik} / a_{kk}          (the multiplier l_{ik}, stored in place)
            for j = k + 1 : n
                a_{ij} = a_{ij} − a_{ik} a_{kj}
            end
        end
    end

3.0.5 What can we get after the Gaussian elimination?

- An upper triangular system

      a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = a_{1,n+1}
             a₂₂⁽²⁾x₂ + ⋯ + a₂ₙ⁽²⁾xₙ = a_{2,n+1}⁽²⁾     (3.0.5)
                              ⋱       ⋮
                              aₙₙ⁽ⁿ⁾xₙ = a_{n,n+1}⁽ⁿ⁾,

  which can be solved using the backward substitution.

- A matrix factorization A = LU, where L is a unit lower triangular matrix and U is an upper triangular matrix:

      A = [ 1                     [ a₁₁  a₁₂    ⋯  a₁ₙ
            l₂₁  1                      a₂₂⁽²⁾  ⋯  a₂ₙ⁽²⁾
            l₃₁  l₃₂  1                           ⋱   ⋮
             ⋮            ⋱
            lₙ₁  lₙ₂  ⋯   1 ]                     aₙₙ⁽ⁿ⁾ ]  = LU.   (3.0.6)

- The determinant of A:

      det(A) = a₁₁ a₂₂⁽²⁾ ⋯ aₙₙ⁽ⁿ⁾ = det(U).   (3.0.7)

  This is obvious since det(L_i) = 1 and det(AB) = det(A) det(B).

The GE method breaks down if one of the pivots a_{kk}⁽ᵏ⁾ is zero.

Sketch of the proof of the LU decomposition from the GE process. From L_{n−1}L_{n−2} ⋯ L₂L₁A = U we have

    A = (L_{n−1} ⋯ L₂L₁)⁻¹ U = L₁⁻¹L₂⁻¹ ⋯ L_{n−1}⁻¹ U.

It is easy to check that Lₖ⁻¹ is obtained from Lₖ by changing the sign of the nonzero column below the diagonal (3.0.8). It is also easy to show that the product L₁⁻¹L₂⁻¹ ⋯ L_{n−1}⁻¹ is obtained by simply collecting the nonzero columns together:

    L₁⁻¹L₂⁻¹ ⋯ L_{n−1}⁻¹ = [ 1
                             l₂₁  1
                             l₃₁  l₃₂  1
                              ⋮            ⋱
                             lₙ₁  lₙ₂  ⋯   1 ].   (3.0.9)

Note that this is not true for the product in the reversed order L_{n−1}⁻¹ ⋯ L₁⁻¹. Repeating the process leads to the desired LU decomposition.

3.0.6 Operation counts for the Gaussian elimination algorithm

As part of the analysis we need to know the cost of an algorithm, that is, the total number of operations needed to complete it. For the GE algorithm we count the number of operations in the kth step and then add them up. In the kth step we need to compute

    l_{ik} = a_{ik}⁽ᵏ⁾ / a_{kk}⁽ᵏ⁾,   i = k+1, …, n,
    a_{ij}⁽ᵏ⁺¹⁾ = a_{ij}⁽ᵏ⁾ − l_{ik} a_{kj}⁽ᵏ⁾,   i = k+1, …, n,  j = k+1, …, n, n+1.

We need n − k divisions for computing the l_{ik}, and (n − k)(n − k + 1) multiplications for updating the modified elements. Thus the total number of multiplications/divisions is (n − k)(n − k + 2) in the kth step, and summing over k = 1, 2, …, n − 1:

    Σ_{k=1}^{n−1} (n − k)(n − k + 2) = Σ_{m=1}^{n−1} m² + 2 Σ_{m=1}^{n−1} m = (n−1)n(2n−1)/6 + n(n−1) ≈ n³/3.

Note that the first term is the cost of applying the GE process to the matrix A; the second part is the same transform applied to the right-hand side, which is equivalent to the forward substitution. We often emphasize the order of operations, which is n³/3; the constant coefficient 1/3 is also important here. The number of additions/subtractions is almost the same. When n = 1000, the total number of operations is roughly 2 × 10⁹/3, about a billion, which is quite a large number. It
is true that for large and dense matrices the GE algorithm is not very fast.

3.0.7 Partial pivoting strategy to avoid breakdown and large errors

When a_{kk}⁽ᵏ⁾ = 0, the GE algorithm breaks down even if A is nonsingular. For example, consider Ax = b with

    A = [ 0  1     or   A = [ ε  1
          1  0 ]              1  0 ],   ε small.

The Gaussian elimination fails for the first A; for the second one, in general we will have

    a_{ij}⁽ᵏ⁺¹⁾ = a_{ij}⁽ᵏ⁾ − ( a_{ik}⁽ᵏ⁾ / a_{kk}⁽ᵏ⁾ ) a_{kj}⁽ᵏ⁾.

We can see that if the pivot a_{kk}⁽ᵏ⁾ is very small in magnitude, the roundoff error will be amplified. The element a_{kk}⁽ᵏ⁾ is called the pivot element. (Usually we put the multiplications and divisions into one category and the additions/subtractions into another; an operation in the first category often takes slightly longer than one in the second.)

Partial column pivoting strategy. Before each Gaussian elimination step, we exchange rows of the augmented matrix (or the original matrix) so that the new pivot element has the largest magnitude among all of the elements in its column on or below the diagonal. Below are the steps of the first Gaussian elimination step with partial column pivoting.

- Choose l such that |a_{l1}| ≥ |a_{i1}|, i = 1, 2, …, n. This can be done using

      pivot = abs(a(1,1)); l = 1;
      for i = 2:n
          if abs(a(i,1)) > pivot
              pivot = abs(a(i,1)); l = i;
          end
      end

- Exchange the lth row with the 1st row, a(l,j) ⟷ a(1,j), j = 1, 2, …, n, n+1. This can be done using

      for j = 1:n+1
          tmp = a(1,j); a(1,j) = a(l,j); a(l,j) = tmp;
      end

  You should be able to see how to change the variables.

- Carry out the Gaussian elimination step as usual.

For example, if a₁₁ already has the largest magnitude in the first column, no pivoting is necessary; for

    A = [ 1 2 3
          2 4 5
          3 1 0 ],

we have l = 3, so we exchange the first and third rows before eliminating:

    [ 1 2 3          [ 3 1 0
      2 4 5    ⟶      2 4 5
      3 1 0 ]          1 2 3 ].

3.0.8 The matrix expression of partial column pivoting, and elementary permutation matrices

The partial column pivoting algorithm can be expressed as row transforms as well. An elementary permutation matrix P_{ij} is the matrix obtained by exchanging the ith and jth rows of the identity matrix, for example

    P₁₃ = [ 0 0 1
            0 1 0
            1 0 0 ].

If we multiply P_{ij} with a matrix A from the left, the new matrix is the one obtained by exchanging the ith and jth rows of the matrix A, for
example,

    [ 0 0 1   [ 1 2 3     [ 3 1 0
      0 1 0     2 4 5   =   2 4 5
      1 0 0 ]   3 1 0 ]     1 2 3 ].

Similarly, if we multiply P_{ij} with a matrix A from the right, the new matrix is the one obtained by exchanging the ith and jth columns of the matrix A.

3.0.9 Properties of an elementary permutation matrix

- P_{ij}⁻¹ = P_{ij} and P_{ij}ᵀ P_{ij} = I, that is, P_{ij} is an orthogonal matrix.
- det(P_{ij}) = −1.

3.0.10 PA = LU decomposition

The Gaussian elimination with partial column pivoting algorithm can be written in the following matrix form:

    L_{n−1} P_{n−1} L_{n−2} ⋯ L₂ P₂ L₁ P₁ A = U.   (3.0.10)

Can we put all the L_i's and P_i's together to get a PA = LU decomposition, by moving the P_i's to the right and the L_i's to the left? Yes we can, with some modification: we need to exchange rows of the L_i's. A demonstration with P₂₄ and L₁ shows that when we move a P_{ij} to the right past an Lₖ, we just need to exchange the corresponding two rows in the nonzero column below the diagonal of Lₖ. Finally we will have

    PA = LU.

The determinant of A can be computed using

    det(PA) = det(L) det(U) = u₁₁ u₂₂ ⋯ uₙₙ,   or   det(A) = (−1)ᵐ u₁₁ u₂₂ ⋯ uₙₙ,

where m is the total number of row exchanges. Below is a pseudocode of the process:

    for k = 1 : n − 1                   (the kth elimination step)
        piv = |a_{kk}|; l = k
        for i = k + 1 : n               (find the pivot row)
            if |a_{ik}| > piv then
                piv = |a_{ik}|; l = i
            end
        end
        for j = k : n + 1               (exchange rows k and l)
            tmp = a_{kj}; a_{kj} = a_{lj}; a_{lj} = tmp
        end
        (carry out the kth Gaussian elimination step)
    end

Finally we have the PA = LU decomposition. Below is an example of how this process can be used; the example can also be used in the debugging process.

3.0.11 Solving Ax = b using the PA = LU decomposition

Once we have the PA = LU decomposition, we can use it to solve the linear system of equations as follows.

1. Form b̃ = Pb, that is, exchange rows of b. This is because from Ax = b we get PAx = Pb.
2. Use forward substitution to solve Ly = Pb. This is because PAx = LUx = Pb.
3. Use backward substitution to solve Ux = y to get the solution.

3.0.12 An example of Gaussian elimination

See the scanned pdf file.

3.1 Other pivoting strategies

- Complete pivoting. At the first step we choose the pivot a_{lm} = max_{i,j} |a_{ij}|. In this approach we have to exchange both rows and columns, which makes the programming more difficult. It may also destroy some matrix structures for certain matrices. The improvement in accuracy is marginal compared with partial column pivoting.
- Scaled column pivoting. If the matrix A has very different magnitudes in its rows, this approach is strongly recommended. At the first step we choose l such that

      |a_{l1}| / s_l = max_{1≤i≤n} |a_{i1}| / s_i,   where s_i = Σ_{j=1}^{n} |a_{ij}|.   (3.1.1)

3.2 Error Analysis

When we input vectors or matrices into a computer number system, we are going to have roundoff errors, and the errors satisfy the following relations:

    fl(b_i) = b_i(1 + ε_i),  |ε_i| ≤ ε;      fl(b) = b + E_b,  ‖E_b‖_p ≤ ‖b‖_p ε,  p = 1, 2, ∞;
    fl(a_{ij}) = a_{ij}(1 + ε_{ij}),  |ε_{ij}| ≤ ε;   fl(A) = A + E_A,  ‖E_A‖_p ≤ ‖A‖_p ε,  p = 1, ∞.

In general, from the equivalence of norms, we have

    ‖E_b‖ ≤ C₁ ‖b‖ ε,   ‖E_A‖ ≤ C₂ ‖A‖ ε   (3.2.1)

for any vector and matrix norms, where C₁, C₂ are two constants depending on n. Even before we solve the linear system of equations, we are solving a different problem, (A + E_A)x = b + E_b, due to the input errors. The question is how the errors affect the results. This is summarized in the following theorem.

Theorem 3.2.1. If ‖A⁻¹E_A‖ < 1 (or ‖A⁻¹‖‖E_A‖ < 1, a stronger condition), define x* = A⁻¹b and δx = (A + E_A)⁻¹(b + E_b) − A⁻¹b. Then

    ‖δx‖/‖x*‖ ≤ ( ‖A‖‖A⁻¹‖ / (1 − ‖A⁻¹‖‖E_A‖) ) ( ‖E_b‖/‖b‖ + ‖E_A‖/‖A‖ ),   (3.2.2)

or the following, if we ignore the high order terms:

    ‖δx‖/‖x*‖ ⪅ ‖A‖‖A⁻¹‖ ( ‖E_b‖/‖b‖ + ‖E_A‖/‖A‖ ).   (3.2.3)

We can see that ‖A‖‖A⁻¹‖ is an important amplifying factor in the error estimate; it is called the condition number of the matrix A:

    cond(A) = ‖A‖‖A⁻¹‖.   (3.2.4)

For the 2-norm it is also denoted as κ(A) = cond₂(A) = ‖A‖₂‖A⁻¹‖₂.

To prove the main theorem we need Banach's lemma.

Lemma 3.2.1. If ‖E‖ < 1, then I + E is invertible and

    ‖(I + E)⁻¹‖ ≤ 1 / (1 − ‖E‖).

Proof. Consider the matrix series with partial sums

    Bₙ = I − E + E² − E³ + ⋯ + (−1)ⁿEⁿ = Σ_{k=0}^{n} (−1)ᵏEᵏ.

Each partial sum satisfies

    ‖Bₙ‖ ≤ ‖I‖ + ‖E‖ + ‖E²‖ + ⋯ + ‖Eⁿ‖ ≤ 1 + ‖E‖ + ‖E‖² + ⋯ + ‖E‖ⁿ ≤ 1 / (1 − ‖E‖).

Let B = lim_{n→∞} Bₙ;
then B is the inverse of I + E, from the following:

    (I + E)Bₙ = (I + E)(I − E + E² − ⋯ + (−1)ⁿEⁿ) = I + (−1)ⁿE^{n+1} → I   as n → ∞,

since ‖E‖ < 1. Furthermore,

    ‖(I + E)⁻¹‖ = ‖B‖ ≤ 1 + ‖E‖ + ‖E‖² + ⋯ ≤ 1 / (1 − ‖E‖).

Now we prove the main error theorem:

    δx = (A + E_A)⁻¹(b + E_b) − A⁻¹b
       = ( A(I + A⁻¹E_A) )⁻¹ (b + E_b) − A⁻¹b
       = (I + A⁻¹E_A)⁻¹ A⁻¹ ( b + E_b − (A + E_A)x* )
       = (I + A⁻¹E_A)⁻¹ A⁻¹ ( E_b − E_A x* ),

using b = Ax*. Therefore, by Banach's lemma,

    ‖δx‖ ≤ ‖(I + A⁻¹E_A)⁻¹‖ ‖A⁻¹‖ ( ‖E_b‖ + ‖E_A‖‖x*‖ )
         ≤ ( ‖A⁻¹‖ / (1 − ‖A⁻¹E_A‖) ) ( ‖E_b‖ + ‖E_A‖‖x*‖ ).

Noticing that ‖A⁻¹E_A‖ ≤ ‖A⁻¹‖‖E_A‖, we continue to get

    ‖δx‖/‖x*‖ ≤ ( ‖A⁻¹‖ / (1 − ‖A⁻¹‖‖E_A‖) ) ( ‖E_b‖/‖x*‖ + ‖E_A‖ ).

Since b = Ax*, we have ‖b‖ ≤ ‖A‖‖x*‖, and we arrive at

    ‖δx‖/‖x*‖ ≤ ( cond(A) / (1 − ‖A⁻¹‖‖E_A‖) ) ( ‖E_b‖/‖b‖ + ‖E_A‖/‖A‖ ).

If we ignore high order terms in the above inequality we can go one step further:

    ‖δx‖/‖x*‖ ⪅ cond(A) ( ‖E_b‖/‖b‖ + ‖E_A‖/‖A‖ ) + O( (‖E_A‖/‖A‖)² ).

This completes the proof.

Remark 3.2.1. The relative errors in the data (in A or b, or both) are amplified by the factor of the condition number cond(A). Usually the upper bound is an overestimate; however, the upper bound is attainable. The condition number cond(A) has nothing to do with any algorithm; it only depends on the matrix itself. However, if cond(A) is very large in reference to the machine precision, then no matter what algorithm we use, in general we cannot expect a good result. Such a problem is called ill-conditioned (or simply, the matrix is called ill-conditioned). For an ill-conditioned system of linear equations, a small perturbation in the data A and b will cause a large change in the solution. If cond(A) is small or modest in reference to the machine precision, the problem is called well-conditioned.

3.3 Wilkinson's backward roundoff error analysis

There are various errors during a problem-solving process: for example, modelling errors, input errors fl(A), algorithm errors (truncation errors), and roundoff errors. We have assumed that we start with a mathematical problem and wish to use a computer to solve it, so we will not discuss the modelling errors here. Using the Gaussian elimination method with partial pivoting there is no formula (truncation) error; that is why it is called a direct method. So we only need
to consider roundoff errors. Roundoff error analysis is often complicated; J. H. Wilkinson made it a little simpler. His technique is simple: the roundoff errors can be regarded as a perturbation to the original data, for example

    fl(x + y) = x(1 + ε_x) + y(1 + ε_y),   |ε_x|, |ε_y| ≤ ε.

In other words, the computed result is the exact sum of two perturbed numbers of the original data. For the Gaussian elimination method for solving Ax = b, we have the following theorem due to J. H. Wilkinson.

Theorem 3.3.1. The computed solution x_c of the Gaussian elimination algorithm on a computer for solving Ax = b is the exact solution of the following system of linear equations:

    (A + E) x_c = b,   (3.3.1)

where ‖E‖ ≤ C g(n) ‖A‖ ε. The function g(n), called the growth factor, is defined by

    g(n) = ( max_k max_{i,j} |a_{ij}⁽ᵏ⁾| ) / ( max_{i,j} |a_{ij}| ).   (3.3.2)

The growth factor satisfies g(n) ≥ 1. We can also prove the following:

- For the Gaussian elimination algorithm without pivoting, g(n) = ∞, indicating the method may break down.
- For the Gaussian elimination algorithm with partial column pivoting, g(n) ≤ 2^{n−1}. This bound is reachable.

Proof. For Gaussian elimination with partial column pivoting we have

    a_{ij}⁽ᵏ⁺¹⁾ = a_{ij}⁽ᵏ⁾ − l_{ik} a_{kj}⁽ᵏ⁾.

Since |l_{ik}| ≤ 1, we conclude that

    max_{i,j} |a_{ij}⁽ᵏ⁺¹⁾| ≤ max_{i,j} |a_{ij}⁽ᵏ⁾| + max_{i,j} |a_{ij}⁽ᵏ⁾| = 2 max_{i,j} |a_{ij}⁽ᵏ⁾| ≤ ⋯ ≤ 2ᵏ max_{i,j} |a_{ij}|.

This concludes that g(n) ≤ 2^{n−1}. The following example shows that such a bound is attainable:

    [  1   0   0  ⋯  1         [ 1  0  0  ⋯  1
      −1   1   0  ⋯  1           0  1  0  ⋯  2
      −1  −1   1  ⋯  1    ⟶     0  0  1  ⋯  2²
       ⋮        ⋱    ⋮                        ⋮
      −1  −1  ⋯  −1  1 ]         0  0  ⋯  0  2^{n−1} ].

However, the matrix above is a specific one; the general observation is that for most reasonable matrices the growth factor g(n) is modest.

3.3.1 Factors that affect the accuracy of computed solutions

For most computational problems, the relative error of the computed solution x_c approximating the true solution x* satisfies the following relation:

    ‖x_c − x*‖ / ‖x*‖ ⪅ ε · cond(problem) · g(algorithm).   (3.3.3)

That is, three factors affect the accuracy:

- The computer used for solving the problem, characterized by the machine precision ε.
- The condition number of the problem; for a linear system of equations Ax = b it is cond(A).
- The algorithm used to solve the problem, characterized by the
growth factor g.

3.4 Residual vector and error estimates

The error estimate involves ‖A⁻¹‖, but we know it is difficult and expensive to get A⁻¹. Do we have a better way to estimate how accurate an approximation is? The answer is the residual vector.

Definition 3.4.1. Given a system of linear equations Ax = b and an approximation x_a, the residual of x_a is defined as

    r(x_a) = b − A x_a.   (3.4.1)

If det(A) ≠ 0 and r(x_a) = 0, then x_a is the true solution. We can use ‖r(x_a)‖ to measure how close x_a is to the true solution x* = A⁻¹b. Note that r(x_a) is called computable since it just needs a matrix-vector multiplication and does not need A⁻¹.

Example. Let

    A = [ 1 1 0         b = [ 0
          2 0 1               1
          3 0 2 ],            0 ].

For an approximation x_a, the residual vector r(x_a) = b − A x_a can be computed directly.

How far is the residual from the error? The answer is given in the following theorem.

Theorem 3.4.1.

    ‖r(x_a)‖ / ‖A‖ ≤ ‖x_a − x*‖ ≤ ‖A⁻¹‖ ‖r(x_a)‖.   (3.4.2)

In other words, if we normalize the matrix A such that ‖A‖ = 1, then the gap between the two bounds is about the condition number of A.

Note that the residual vector r = b − Ax is the negative gradient of the function f(x) = ½xᵀAx − bᵀx when A is a symmetric positive definite matrix. It is the search direction of the steepest descent method in optimization, and an important basic concept in the popular conjugate gradient (CG) method.

3.5 The direct LU decomposition and the Gaussian elimination algorithm for special matrices

In some situations, and for various considerations, we may not need the pivoting process in the Gaussian elimination algorithm. This can be done using the direct LU decomposition.

3.5.1 The direct LU decomposition

Assuming that we do not do any pivoting, we can have the direct A = LU decomposition, that is, closed formulas for the entries of L and U, where L is a unit lower triangular matrix and U is an upper triangular matrix. (Alternatively, we can have U be a unit upper triangular matrix and L a lower triangular one; the formulas and the algorithm are slightly different.) This applies, for example, to:

- Tridiagonal or banded matrices.
- Column diagonally dominant matrices.
- Symmetric positive definite matrices.

3.5.2
Direct LU decomposition

The direct LU decomposition of a matrix A without pivoting is equivalent to the Gaussian elimination process. It has more value for theoretical purposes and for deriving other algorithms, for example the incomplete LU decomposition in optimization. In the direct LU decomposition we get the entries of L and U directly, in a certain order. We write

    A = LU = [ 1                    [ u₁₁  u₁₂  ⋯  u₁ₙ
               l₂₁  1                     u₂₂  ⋯  u₂ₙ
               l₃₁  l₃₂  1                      ⋱   ⋮
                ⋮            ⋱
               lₙ₁  lₙ₂  ⋯   1 ]                  uₙₙ ].

To derive the formulas for the entries of L and U, the order is very important. The order is as follows.

- We can get the first row of U by multiplying the first row of L with any column of U:

      a_{1j} = 1 · u_{1j}  ⟹  u_{1j} = a_{1j},   j = 1, 2, …, n.

  We have the first row of U.
- We multiply the ith row of L with the first column of U:

      a_{i1} = l_{i1} u₁₁  ⟹  l_{i1} = a_{i1}/u₁₁,   i = 2, …, n.

- Assume we have obtained the first k − 1 rows of U and the first k − 1 columns of L; we derive the formulas for the kth row of U and the kth column of L. If we multiply the kth row of L with the jth (j ≥ k) column of U, we get

      a_{kj} = Σ_{i=1}^{k−1} l_{ki} u_{ij} + u_{kj}  ⟹  u_{kj} = a_{kj} − Σ_{i=1}^{k−1} l_{ki} u_{ij},   j = k, …, n.

  After we get the kth row of U, we can get the kth column of L by multiplying the ith row of L with the kth column of U:

      a_{ik} = Σ_{j=1}^{k−1} l_{ij} u_{jk} + l_{ik} u_{kk}  ⟹  l_{ik} = ( a_{ik} − Σ_{j=1}^{k−1} l_{ij} u_{jk} ) / u_{kk},   i = k+1, …, n.

3.6 Tridiagonal systems of equations

The coefficient matrix A of a tridiagonal system has the following form:

    A = [ d₁  β₁
          α₂  d₂  β₂
              α₃  d₃  β₃
                   ⋱   ⋱    ⋱
                       αₙ  dₙ ].

One particular application is the system of linear equations derived from the finite difference method. Another application is the alternating direction implicit (ADI) method for solving two- or three-dimensional partial differential equations (PDEs) using a dimension-by-dimension approach. If we use the direct LU decomposition on the tridiagonal matrix, we get a very simple decomposition:

    A = [ 1                   [ d̃₁  β₁
          γ₂  1                     d̃₂  β₂
              γ₃  1                      ⋱   ⋱
                   ⋱                         β_{n−1}
                  γₙ  1 ]                  d̃ₙ ].

In other words, only two diagonals need to be changed. From the matrix-matrix multiplication and the order of the direct LU decomposition, it is easy to derive d̃₁ = d₁ and, for i = 2, …, n, from the (i, i−1) entry αᵢ = γᵢ d̃_{i−1},
γᵢ = αᵢ / d̃_{i−1}, and from the (i, i) entry dᵢ = γᵢ β_{i−1} + d̃ᵢ, so d̃ᵢ = dᵢ − γᵢ β_{i−1}. This decomposition is called the Crout factorization in some references. Using overwriting, we have the following pseudocode:

    for i = 2 : n
        γᵢ = αᵢ / d_{i−1}
        dᵢ = dᵢ − γᵢ β_{i−1}
    end

Once we have the LU factorization, we can easily derive the formulas for the forward and backward substitutions for solving Ax = b. The forward substitution is

    y₁ = b₁
    for i = 2 : n
        yᵢ = bᵢ − γᵢ y_{i−1}
    end

The backward substitution is

    xₙ = yₙ / d̃ₙ
    for i = n − 1 : −1 : 1
        xᵢ = ( yᵢ − βᵢ x_{i+1} ) / d̃ᵢ
    end

The entire process (Crout decomposition, forward and backward substitutions) requires only about 5n multiplications/divisions, that is, O(n) work. The solution process is sometimes called the chasing method for solving tridiagonal systems of equations.

3.6.1 Strictly column diagonally dominant matrices

A matrix A = (a_{ij}) is strictly column diagonally dominant if it satisfies the conditions

    Σ_{i=1, i≠j}^{n} |a_{ij}| < |a_{jj}|,   j = 1, 2, …, n.   (3.6.1)

For example, the matrix on the left below is strictly column diagonally dominant, while the second one is not:

    [ 5 0 1        [ 5 0 1
      1 2 3          2 4 1
      3 1 7 ],       4 0 5 ].

If a matrix A is strictly column diagonally dominant, then it is obvious that no pivoting is necessary in the first step of the Gaussian elimination process, because Σ_{i=2}^{n} |a_{i1}| < |a₁₁|. The question is what happens in the next step and thereafter. The following theorem answers this question.

Theorem 3.6.1. Let A be a strictly column diagonally dominant matrix. After one step of Gaussian elimination, the remaining matrix A⁽²⁾ is still strictly column diagonally dominant.

Proof. For j = 2, …, n,

    Σ_{i=2, i≠j}^{n} |a_{ij}⁽²⁾| = Σ_{i=2, i≠j}^{n} |a_{ij} − l_{i1} a_{1j}|
        ≤ Σ_{i=2, i≠j}^{n} |a_{ij}| + ( |a_{1j}| / |a₁₁| ) Σ_{i=2, i≠j}^{n} |a_{i1}|
        < ( |a_{jj}| − |a_{1j}| ) + ( |a_{1j}| / |a₁₁| ) ( |a₁₁| − |a_{j1}| )
        = |a_{jj}| − |a_{1j}| |a_{j1}| / |a₁₁|
        ≤ | a_{jj} − l_{j1} a_{1j} | = |a_{jj}⁽²⁾|.

This completes the proof.

3.6.2 Symmetric positive definite matrices and the Cholesky decomposition

A matrix A is called symmetric positive definite (SPD) if the following conditions are met:

1. A = Aᵀ, that is, a_{ij} = a_{ji};
2. For any x ≠ 0, xᵀAx > 0.

Note that the second condition has the following equivalent statements, which also give some way to judge
whether a matrix is an SPD matrix or not:

- λᵢ(A) > 0, i = 1, 2, …, n, that is, all the eigenvalues of A ∈ R^{n×n} are real and positive.
- All the determinants of the leading principal submatrices are positive.

A leading principal submatrix Aₖ is the matrix composed from the intersection of the first k rows and columns of the original matrix A, for example

    A₁ = [ a₁₁ ],   A₂ = [ a₁₁ a₁₂       A₃ = [ a₁₁ a₁₂ a₁₃
                           a₂₁ a₂₂ ],           a₂₁ a₂₂ a₂₃
                                                a₃₁ a₃₂ a₃₃ ].

If A is an SPD matrix, then so are the Aₖ. Note also that if A is an SPD matrix, then aᵢᵢ > 0, i = 1, 2, …, n, that is, all the diagonal entries are positive. This is because if we take x = eᵢ, then xᵀAx = aᵢᵢ > 0.

Examples. Are the following matrices symmetric positive definite? The first example is not SPD, since it has a non-positive diagonal entry (recall that aᵢᵢ > 0 is necessary). For the second one,

    A = [ 2 1 0
          1 2 1
          0 1 2 ],

we have A = Aᵀ and det(A₁) = 2 > 0, det(A₂) = 4 − 1 = 3 > 0, and det(A₃) = 8 − 2 − 2 = 4 > 0, so A is an SPD matrix.

3.6.3 Cholesky decomposition A = LLᵀ

We prefer to use the A = LLᵀ decomposition instead of the LU decomposition, to take advantage of the symmetry and positive definiteness. The decomposition is an analogue of the square root of a positive number. We can save half of the cost and half of the storage compared with the standard Gaussian elimination process applied to a general matrix. Note that for SPD matrices no pivoting is necessary: the growth of the entries is well under control (for instance, Σ_{j≤k} l_{kj}² = a_{kk}, so the entries of L cannot grow large). Therefore no pivoting is necessary.

Now we derive the formulas for the A = LLᵀ decomposition; the process is similar to the direct LU decomposition. We write

    A = LLᵀ = [ l₁₁                   [ l₁₁  l₂₁  ⋯  lₙ₁
                l₂₁  l₂₂                     l₂₂  ⋯  lₙ₂
                 ⋮          ⋱                      ⋱   ⋮
                lₙ₁  lₙ₂  ⋯  lₙₙ ]                  lₙₙ ].

To derive the formulas for the entries of L, the order is very important. The order is as follows.

- We get the (1,1) entry of L from a₁₁ = l₁₁², so l₁₁ = sqrt(a₁₁).
- We get the rest of the first column of L by multiplying the ith row of L with the first column of Lᵀ:

      a_{i1} = l_{i1} l₁₁  ⟹  l_{i1} = a_{i1} / l₁₁,   i = 2, …, n.

  We have the first column of L.
- Assume we have obtained the first k − 1 columns of L; we derive the formula for the kth column of L. If we multiply the kth row of
L with the kth column of Lᵀ, we get

    a_{kk} = Σ_{j=1}^{k−1} l_{kj}² + l_{kk}²  ⟹  l_{kk} = sqrt( a_{kk} − Σ_{j=1}^{k−1} l_{kj}² ).

After we get the first element of the kth column of L, we can get the rest of the kth column of L by multiplying the ith row of L with the kth column of Lᵀ:

    a_{ik} = Σ_{j=1}^{k−1} l_{ij} l_{kj} + l_{ik} l_{kk}  ⟹  l_{ik} = ( a_{ik} − Σ_{j=1}^{k−1} l_{ij} l_{kj} ) / l_{kk},   i = k+1, …, n.

Pseudocode of A = LLᵀ:

    for k = 1 : n
        l_{kk} = sqrt( a_{kk} − Σ_{j=1}^{k−1} l_{kj}² )
        for i = k + 1 : n
            l_{ik} = ( a_{ik} − Σ_{j=1}^{k−1} l_{ij} l_{kj} ) / l_{kk}
        end
    end

Using the Cholesky decomposition A = LLᵀ to solve Ax = b: from Ax = b we get LLᵀx = b. The forward and backward substitutions are the following.

1. Solve Ly = b.
2. Solve Lᵀx = y.

The number of multiplications/divisions needed for the Cholesky decomposition is about n³/6. The storage needed is about n²/2. So we need just half the storage and half the computation. The only disadvantage is that we need to evaluate square roots. A slightly different version is the A = LDLᵀ decomposition, which avoids square roots and may work for any symmetric matrix, assuming there is no breakdown (not guaranteed if A is not SPD).

3.6.4 Software issues

- In Matlab, we can use x = A\b for solving Ax = b, [L,U] = lu(A) for the LU decomposition, chol(A) for the Cholesky decomposition, det(A) for the determinant, and cond(A,2), for example, to find the condition number of A in the 2-norm.
- Free subroutines in Fortran, C, and C++ are available on netlib (www.netlib.org), including LINPACK, BLAS (basic linear algebra subroutines), SLATEC, sparse solvers, and many others, in both single and double precision.
- There are many books on the programming of different methods. A popular one is Numerical Recipes (in Fortran, C, Pascal).

Matlab Reference (Matlab commands are in this font)

- Matrices:

      A = [1 2 3 4 5; 6 7 8 9 10; 11 12 13 14 15; 16 17 18 19 20]

  or you can replace the spaces between elements in a row by commas:

      A = [1,2,3,4,5;6,7,8,9,10;11,12,13,14,15;16,17,18,19,20]

  or you can replace the semicolons by line breaks. The brackets at the beginning and end of the matrix are rectangular. An upper case A is not the same as a lower case a; Matlab treats them as different variables.

- Matrix dimensions. The matrix A
  is 4 × 5:

      size(A)        % [4 5]
      size(A,1)      % number of rows: 4
      size(A,2)      % number of columns: 5
      length(A)      % larger of the two dimensions: 5

  length is helpful for counting the number of elements in a row vector or a column vector.

- Matrix elements. Element (1,2): A(1,2), which is 2. Element (2,5): A(2,5) or A(2,end), which is 10. Element (4,3): A(4,3) or A(end,3). Note the round parentheses.

- Rows and columns. Row 1 of A: A(1,:), which is [1 2 3 4 5]. Row 4 of A: A(4,:) or A(end,:). Column 2 of A: A(:,2), which is [2 7 12 17]ᵀ. Column 5 of A: A(:,5) or A(:,end).

- Submatrices. Rows 1,2,3 and columns 2,3,4: A(1:3,2:4), which is [2 3 4; 7 8 9; 12 13 14]. Rows 1,2 and columns 1,3,5: A(1:2,[1 3 5]), which is [1 3 5; 6 8 10]. Rows 3 and 4: A(3:4,:). Columns 1,2,3,5: A(:,[1 2 3 5]).

- Matrix operations. Addition: A+B. Scalar multiplication: 3*A. Multiplication: A*B.

- Special matrices. eye(5): 5 × 5 identity matrix. zeros(3,2): 3 × 2 zero matrix. zeros(size(A)): zero matrix of the same dimensions as matrix A.

- Diagonal elements. diag(A) = [1 7 13 19]ᵀ: when the input is a matrix, diag extracts the diagonal elements, i.e. all elements in position (j,j), and puts them into a column vector. When the input is a vector, diag puts the elements of the vector onto the diagonal of a matrix. diag(diag(A)) extracts the diagonal of a matrix and removes the off-diagonal elements.

- Triangular parts. triu(A) extracts the upper triangular part (all elements in position (i,j) with j ≥ i) and sets the elements below the diagonal to zero:

      [ 1 2 3  4  5
        0 7 8  9 10
        0 0 13 14 15
        0 0 0  19 20 ]

  tril(A) extracts the lower triangular part (all elements in position (i,j) with i ≥ j) and sets the elements above the diagonal to zero.

- Permutations. A(:,[2 1 3:end]) permutes the two leading columns of A. A(end:-1:1,:) permutes the rows of A in reverse order.

- Conjugate transpose: A'.

- Piecing together matrices: for example, [eye(2) zeros(2,5); B] stacks blocks side by side and on top of each other (the block dimensions must match).

- Norms. norm(A): two-norm of A, where A is a matrix or a vector. norm(A,1), norm(A,inf): one- and infinity-norms. cond(A): two-norm condition number of matrix A. cond(A,1), cond(A,inf): one- and infinity-norm condition numbers of a square matrix A.

- Linear system solution. x = A\b: solution to Ax = b.
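For readers working outside Matlab, much of this quick reference carries over to Python/NumPy almost one to one. The following sketch is my illustration, not part of the original notes; it assumes NumPy and uses NumPy's 0-based indexing:

```python
import numpy as np

# the same 4 x 5 matrix as in the Matlab reference
A = np.arange(1, 21).reshape(4, 5)

print(A.shape)            # (4, 5), like size(A)
print(A[0, 1])            # element (1,2) in Matlab's 1-based terms: 2
print(A[0, :])            # first row: [1 2 3 4 5]
print(list(np.diag(A)))   # diagonal: [1, 7, 13, 19]
print(np.triu(A)[3])      # last row of triu(A): [ 0  0  0 19 20]

# np.linalg.solve(M, b) is the analogue of x = M\b for square M
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([3.0, 3.0])
print(np.linalg.solve(M, b))   # [1. 1.]
```

Note that np.linalg.solve, like Matlab's backslash, uses an LU factorization with partial pivoting internally rather than forming M⁻¹.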

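As a closing illustration, the chapter's central algorithm, Gaussian elimination with partial column pivoting followed by forward and backward substitution, can be prototyped in a few lines. This is a hedged Python/NumPy translation of the pseudocode in the notes, not the course's reference code:

```python
import numpy as np

def plu(A):
    """Gaussian elimination with partial column pivoting: P A = L U."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    P = np.eye(n)
    for k in range(n - 1):
        l = k + int(np.argmax(np.abs(A[k:, k])))  # pivot row: largest magnitude
        if l != k:                                # exchange rows k and l
            A[[k, l]] = A[[l, k]]
            P[[k, l]] = P[[l, k]]
        for i in range(k + 1, n):
            A[i, k] /= A[k, k]                    # multiplier, |l_ik| <= 1
            A[i, k + 1:] -= A[i, k] * A[k, k + 1:]
    return P, np.tril(A, -1) + np.eye(n), np.triu(A)

def solve_plu(P, L, U, b):
    """Solve Ax = b from PA = LU: form Pb, then forward and backward substitution."""
    n = len(b)
    pb = P @ b
    y = np.zeros(n)
    for i in range(n):                            # L y = P b
        y[i] = pb[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                # U x = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[0.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 4.0, 5.0]])
b = np.array([3.0, 4.0, 11.0])
P, L, U = plu(A)
print(np.allclose(P @ A, L @ U))   # True: the factorization satisfies PA = LU
x = solve_plu(P, L, U, b)
print(np.allclose(A @ x, b))       # True: the residual b - Ax vanishes numerically
```

Note that the zero in the (1,1) entry of this A would break elimination without pivoting, while the pivoted version handles it without difficulty.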