Numerical Analysis MATH 3176
These 13 pages of class notes were uploaded by Mrs. Dangelo Fahey on Sunday, October 25, 2015. The notes belong to MATH 3176 at the University of North Carolina - Charlotte, taught by Alexandros Sopasakis.
Introduction to Scientific Computing, Math 3176, Spring 2009. Instructor: Alexandros Sopasakis, Department of Mathematics and Statistics, University of North Carolina at Charlotte.

Direct methods and matrix factorization

The main problem encountered in many research applications involving matrix algebra is

    AX = B,                                                             (1)

where A and B are known matrices while X is the matrix of unknowns. A wide variety of problems can be transformed into this setup, and therefore being able to solve such a system efficiently is very important. There are several techniques that can be applied. When the matrices are sufficiently large, simply finding the inverse of A (if it exists) is no longer attractive, chiefly because the computer resources required for such a calculation are substantial. Thus more efficient techniques are used in practice.

In earlier algebra classes we already encountered elimination and pivoting methods such as the Gauss-Jordan method, and Gaussian elimination with backward substitution. These methods require on the order of n^3 operations for matrices of size n x n, so as the size of the matrix increases the operation cost skyrockets. If instead we produce a factorized version of A, we can reduce the cost of solving for X from order n^3 to order n^2. This translates to almost a 99% reduction in calculations once the matrices are larger than about 100 x 100 (not unusual for applications nowadays). However, producing the factorization of A still requires on the order of n^3 operations. The benefit is that once A has been factored, we can solve any system AX = B with the same A in only about n^2 operations.

LU factorization (Doolittle's version)

In the first method we examine, we factor A into two matrices: a lower triangular matrix L and an upper triangular matrix U, such that A = LU. The idea for solving system (1) is as follows. Starting from AX = B, we introduce a new variable Y defined by Y = UX, so that AX = LUX = B becomes

    LY = B.
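The point of the substitution Y = UX is that triangular systems can be solved in a single O(n^2) sweep, reading off one unknown at a time. A minimal Python sketch of forward substitution for LY = B (the 2 x 2 system below is an illustrative assumption, not taken from the notes):

```python
# Forward substitution: solve L y = b for a lower triangular matrix L.
def forward_substitution(L, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        # Everything to the left of the diagonal multiplies already-known y's.
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

# Illustrative (assumed) 2x2 example:
L = [[2.0, 0.0],
     [1.0, 3.0]]
b = [4.0, 8.0]
print(forward_substitution(L, b))  # y1 = 4/2 = 2, then y2 = (8 - 1*2)/3 = 2
```

Back substitution for UX = Y works identically, only sweeping from the bottom row up.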
Since L is a lower triangular matrix, this system is almost trivial to solve for the unknowns Y. Once we find all the values of Y, we can solve the system UX = Y. Note that this system is also very easy to solve, since U is an upper triangular matrix. Thus finding X with this method is no big deal. The only thing left to do is actually find matrices L and U with the required property A = LU. This is accomplished by the usual Gaussian elimination, applied only up to the point of obtaining an upper triangular matrix, without the back substitution. Let us look at a simple example.

Example. Solve the following matrix system using an LU factorization:

    [ 1  2  3 ] [x1]   [0]
    [ 3  4  1 ] [x2] = [6]
    [ 1  0  1 ] [x3]   [1]

Solution. The main part is to produce the LU factorization. Once this is done, solving the system will be easy. To produce the factorization we start with the usual Gaussian elimination. For ease of notation we denote by Ri the i-th row of A. To create zeros below the element a11 we simply do

    -3R1 + R2 -> R2,    -1R1 + R3 -> R3,

which gives

    [ 1   2   3 ]
    [ 0  -2  -8 ]
    [ 0  -2  -2 ]

Last, we create a zero below a22 via -1R2 + R3 -> R3:

    [ 1   2   3 ]
    [ 0  -2  -8 ]
    [ 0   0   6 ]

Remarkably, this procedure has already produced our required matrices L and U from A. In fact,

    A = LU = [ 1  0  0 ] [ 1   2   3 ]
             [ 3  1  0 ] [ 0  -2  -8 ]
             [ 1  1  1 ] [ 0   0   6 ]

(Do the multiplication to check the result.) Note that U is simply the resulting matrix from our Gaussian elimination, while L has 1's on the diagonal, and its other entries are the coefficients by which we multiplied in order to create U. So, given L and U, we can easily solve the original system as follows. First solve LY = B:

    [ 1  0  0 ] [y1]   [0]
    [ 3  1  0 ] [y2] = [6]
    [ 1  1  1 ] [y3]   [1]

Top-down you can almost read off the solution: y1 = 0, y2 = 6 and y3 = -5. Now you can solve the second part, UX = Y, for X:

    [ 1   2   3 ] [x1]   [ 0]
    [ 0  -2  -8 ] [x2] = [ 6]
    [ 0   0   6 ] [x3]   [-5]

This time the solution is read from the bottom up: x3 = -5/6, x2 = 1/3 and x1 = 11/6.
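As a quick check of the worked example, the sketch below (a verification aid, not part of the original notes) multiplies L and U back together and verifies the hand-computed Y and X using exact rational arithmetic:

```python
from fractions import Fraction as F

L = [[1, 0, 0], [3, 1, 0], [1, 1, 1]]
U = [[1, 2, 3], [0, -2, -8], [0, 0, 6]]
A = [[1, 2, 3], [3, 4, 1], [1, 0, 1]]
B = [0, 6, 1]

# Check that L*U really reproduces A.
product = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
assert product == A

# Check the hand-computed solutions of LY = B and UX = Y.
Y = [F(0), F(6), F(-5)]
X = [F(11, 6), F(1, 3), F(-5, 6)]
assert all(sum(L[i][j] * Y[j] for j in range(3)) == B[i] for i in range(3))
assert all(sum(U[i][j] * X[j] for j in range(3)) == Y[i] for i in range(3))
print("LU factorization and solution verified")
```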
The following pseudo-code outlines this procedure.

Pseudo-code for LU (Doolittle) factorization:

1. Input: the matrix A and the diagonal elements of L, i.e. ones (lii = 1).
2. Let u11 = a11/l11. If l11*u11 = 0, then an LU factorization is not possible: STOP.
3. For j = 2,..., n let u1j = a1j/l11 and lj1 = aj1/u11.
4. For i = 2,..., n-1 do:
   - Let uii = ( aii - sum_{k=1}^{i-1} lik*uki ) / lii.
   - If lii*uii = 0 then STOP: print "Factorization is not possible".
   - For j = i+1,..., n:
     - Let uij = ( aij - sum_{k=1}^{i-1} lik*ukj ) / lii.
     - Let lji = ( aji - sum_{k=1}^{i-1} ljk*uki ) / uii.
5. Let unn = ann - sum_{k=1}^{n-1} lnk*ukn. If lnn*unn = 0, then the factorization A = LU exists, but A is a singular matrix.
6. Print out all elements of L and U.

Once you have the factorization, you can solve the matrix system with the following very simple substitution scheme.

Pseudo-code for the solution of AX = B:

1. First solve LY = B:
2. For i = 1, 2,..., n do
   - yi = ( bi - sum_{j=1}^{i-1} lij*yj ) / lii.
3. Now solve UX = Y by back substitution, in exactly the same way:
4. For i = n, n-1,..., 1 do
   - xi = ( yi - sum_{j=i+1}^{n} uij*xj ) / uii.

There are a couple of results which are interesting, since they give us insight into when these methods work. The following definition is necessary first.

Definition 1. The n x n matrix A is said to be strictly diagonally dominant if

    |aii| > sum_{j != i} |aij|    for all i = 1, 2,..., n.

The result comes indirectly through Gaussian elimination.

Theorem 2. A strictly diagonally dominant matrix A is nonsingular. Furthermore, Gaussian elimination can be performed on any linear system of the form AX = B to obtain its unique solution without row or column interchanges, and the computations are stable with respect to the growth of round-off errors.

When can we perform an LU decomposition? The following theorem gives the answer.

Theorem 3. If Gaussian elimination can be performed on the linear system AX = B without row interchanges, then the matrix A can be factored into the product of a lower triangular matrix L and an upper triangular matrix U, with A = LU.

More relevant results follow in the next section. There is another type of factorization which is in fact very similar to this LU (Doolittle) decomposition. The alternate factorization method also produces an LU decomposition, but with U, rather than L, being the unit upper triangular factor; this is called Crout's factorization.
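A direct Python transcription of the two pseudo-code listings might look as follows. It is a sketch without pivoting, so, exactly like the pseudo-code, it assumes the pivots encountered are nonzero:

```python
def doolittle_lu(A):
    """Factor A = L U with ones on the diagonal of L (no pivoting)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # fill row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        if U[i][i] == 0:
            raise ValueError("factorization not possible without pivoting")
        for j in range(i + 1, n):      # fill column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve A x = b given A = L U: forward then backward substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                  # L y = b, top down
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # U x = y, bottom up
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[1.0, 2.0, 3.0], [3.0, 4.0, 1.0], [1.0, 0.0, 1.0]]
L, U = doolittle_lu(A)
x = lu_solve(L, U, [0.0, 6.0, 1.0])
print(x)  # approximately [11/6, 1/3, -5/6], matching the worked example
```

Running this on the example system from the previous page reproduces the hand-computed factors and the solution x = (11/6, 1/3, -5/6).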
Naturally, either factorization will do the job, and producing one or the other is more a matter of taste than anything else. You can change the provided pseudo-code very easily in order to produce such a factorization.

LDL^T and LL^T (Cholesky) factorization

We continue here by presenting more methods for factoring A. All the techniques presented, similarly to the LU decomposition of A, have the same overall operation cost of order n^3. As the name denotes, an LDL^T factorization takes the following form:

    A = L D L^T,

where L, as usual, is lower triangular (with unit diagonal) and D is a diagonal matrix with positive entries on the diagonal. Similarly, the Cholesky factorization A = LL^T consists of a lower and an upper triangular matrix, neither of which has 1's on the diagonal, in contrast to either Doolittle's or Crout's factorization. It is very easy to construct any of the above factorizations once you have an LU decomposition of A. Let us look at the equivalent factorizations for the following matrix:

    A = [ 60  30  20 ]
        [ 30  20  15 ]
        [ 20  15  12 ]

Using our pseudo-code we obtain the following LU decomposition of A:

    A = LU = [  1   0  0 ] [ 60  30   20 ]
             [ 1/2  1  0 ] [  0   5    5 ]
             [ 1/3  1  1 ] [  0   0  1/3 ]

Now the equivalent LDL^T decomposition consists of the following three matrices:

    A = LDL^T = [  1   0  0 ] [ 60  0   0  ] [ 1  1/2  1/3 ]
                [ 1/2  1  0 ] [  0  5   0  ] [ 0   1    1  ]
                [ 1/3  1  1 ] [  0  0  1/3 ] [ 0   0    1  ]

Note how the new upper triangular matrix has been obtained by simply dividing each row of the old upper triangular matrix by the respective diagonal element. Now that we have the LDL^T factorization we can also easily obtain the equivalent Crout factorization:

    A = [ 60  0   0  ] [ 1  1/2  1/3 ]
        [ 30  5   0  ] [ 0   1    1  ]
        [ 20  5  1/3 ] [ 0   0    1  ]

Note here that the new lower triangular matrix is constructed by simply multiplying out the matrices L and D. Last, the Cholesky decomposition is also easily constructed from the LDL^T form above, by splitting the diagonal matrix D into the two matrices sqrt(D)*sqrt(D) and multiplying out: L*sqrt(D) produces a lower triangular factor and sqrt(D)*L^T produces an upper triangular one:

    A = ( L sqrt(D) ) ( sqrt(D) L^T )

      = [  sqrt(60)      0         0      ] [ sqrt(60)  sqrt(60)/2  sqrt(60)/3 ]
        [ sqrt(60)/2   sqrt(5)     0      ] [    0        sqrt(5)     sqrt(5)  ]
        [ sqrt(60)/3   sqrt(5)  sqrt(1/3) ] [    0           0       sqrt(1/3) ]

This is the LL^T form of the matrix A.
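The constructions above can be turned into a short script: starting from the LU factors of this particular A, it builds the diagonal of D, the Crout lower factor L*D, and the Cholesky factor L*sqrt(D), and checks that the Cholesky factor reproduces A (a sketch using plain Python lists, not part of the original notes):

```python
import math

L = [[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [1.0 / 3.0, 1.0, 1.0]]
U = [[60.0, 30.0, 20.0], [0.0, 5.0, 5.0], [0.0, 0.0, 1.0 / 3.0]]
A = [[60.0, 30.0, 20.0], [30.0, 20.0, 15.0], [20.0, 15.0, 12.0]]
n = 3

d = [U[i][i] for i in range(n)]                                       # diagonal of D
Lt = [[U[i][j] / d[i] for j in range(n)] for i in range(n)]           # unit upper factor L^T
crout_lower = [[L[i][j] * d[j] for j in range(n)] for i in range(n)]  # L * D
chol = [[L[i][j] * math.sqrt(d[j]) for j in range(n)] for i in range(n)]  # L * sqrt(D)

# The Cholesky factor should satisfy (L sqrt(D)) (L sqrt(D))^T = A.
rebuilt = [[sum(chol[i][k] * chol[j][k] for k in range(n)) for j in range(n)]
           for i in range(n)]
ok = all(abs(rebuilt[i][j] - A[i][j]) < 1e-9 for i in range(n) for j in range(n))
print("D =", d, "| Cholesky reproduces A:", ok)
```

Since every entry of D turned out positive, the square roots are real; this is exactly the situation described by the positive-definiteness results that follow.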
Let us now look at results regarding when we can perform most of these factorizations. We will first need the following definition.

Definition 4. A matrix A is positive definite if it is symmetric and if x^T A x > 0 for every x != 0.

Based on this definition the following theorem holds.

Theorem 5. If A is an n x n positive definite matrix, then:
- A is nonsingular,
- aii > 0 for all i = 1, 2,..., n,
- max_{k,j} |akj| <= max_i |aii|,
- (aij)^2 < aii * ajj for every i != j.

Recall that one of the conditions for a matrix to be nonsingular is that det(A) != 0. Further:

Theorem 6. A matrix A is positive definite if and only if either of the following holds:
- A = LDL^T, with L unit lower triangular and D a diagonal matrix with positive diagonal entries,
- A = LL^T, with L a nonsingular lower triangular matrix.

Introduction to Numerical Analysis, Math 3176, Spring 2009. Instructor: Alexandros Sopasakis.

Iterative Methods in Matrix Algebra

In this chapter we address the general problem of solving a system of linear equations AX = B, where both A and B are known matrices while X corresponds to the unknowns. Numerically this system can be solved using an iterative scheme. However, before we discuss how exactly to solve such a system numerically, we must review, and at times introduce, methods and notation which will become useful while unraveling this new topic of iterative schemes. So we start with a short overview of matrix algebra.

1 Overview of Matrix Algebra

Usually one of the most important tools in understanding whether two objects are close is the norm. If the said objects are vectors, then we use a vector norm to compute their distance. Let us define this tool.

Definition 1. A vector norm, denoted by || . ||, is a function which has vectors as input and produces a number as output; in other words, || . || : R^n -> R. The vector norm has the following properties:
(a) ||x|| >= 0 for all x in R^n,
(b) ||x|| = 0 if and only if x = 0,
(c) ||kx|| = |k| ||x|| for all k in R and x in R^n,
(d) ||x + y|| <= ||x|| + ||y|| for all x, y in R^n.
There are several different kinds of vector norms. We most often use the following types in our calculations.

Definition 2. The Euclidean (or l2) norm and the maximum (or l-infinity) norm are

    ||x||_2 = sqrt( sum_{i=1}^{n} xi^2 ),        ||x||_inf = max_{1<=i<=n} |xi|,

and the l1 norm is

    ||x||_1 = sum_{i=1}^{n} |xi|.

Example. Calculate the l1, Euclidean and maximum norms for the vector x = (1, -3, 2).

Solution. The l1 norm is

    ||x||_1 = |1| + |-3| + |2| = 6,

the Euclidean norm is

    ||x||_2 = sqrt( 1^2 + (-3)^2 + 2^2 ) = sqrt(14),

while the maximum norm gives

    ||x||_inf = max_{1<=i<=3} |xi| = 3.

Therefore, using these norms, we can now describe distances between vectors as follows:

    ||x - y||_2 = sqrt( sum_{i=1}^{n} (xi - yi)^2 )    or    ||x - y||_inf = max_{1<=i<=n} |xi - yi|.

As a result we are now in position to understand the concept of convergence for vectors.

Definition 3. We say that a vector sequence {x^(n)}, n >= 0, converges to another vector x in R^n with respect to a given norm if, given any eps > 0, there exists an integer N(eps) such that ||x^(n) - x|| < eps for all n >= N(eps).

The following result can now be shown.

Theorem 4. The sequence of vectors {x^(n)} converges to x in R^n with respect to the infinity norm if and only if lim_{n->inf} xi^(n) = xi for all i = 1, 2,..., n.

Similarly we can prove the following ordering between different norms.

Theorem 5. For x in R^n we have

    ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf.

There are a couple of well known inequalities which apply to vectors: the triangle inequality

    ||x + y|| <= ||x|| + ||y||,

and the Cauchy-Schwarz inequality

    | sum_{i=1}^{n} xi*yi | <= ||x||_2 ||y||_2.

Example. Verify the Cauchy-Schwarz inequality for the vectors x = (1, 2, 3) and y = (0, 2, 1).

Solution. According to the Cauchy-Schwarz inequality we have, for the left-hand side,

    |0 + 4 + 3| = 7,

while for the right-hand side

    sqrt(1 + 4 + 9) * sqrt(0 + 4 + 1) = sqrt(14) * sqrt(5) = sqrt(70) ≈ 8.36.

Thus the right-hand side is bigger, as expected. We now generalize these results. First we define the following general p-norm:

    ||x||_p = ( |x1|^p + ... + |xn|^p )^(1/p).

Then the following two inequalities hold:

    Holder's inequality:     |x^T y| <= ||x||_p ||y||_q,    where 1/p + 1/q = 1,
    Minkowski's inequality:  ||x + y||_p <= ||x||_p + ||y||_p.

Here x^T denotes the transpose of the vector x.
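The norm computations and the Cauchy-Schwarz check above can be scripted in a few lines (a verification sketch, not part of the original notes):

```python
import math

def norm_1(x):   return sum(abs(v) for v in x)
def norm_2(x):   return math.sqrt(sum(v * v for v in x))
def norm_inf(x): return max(abs(v) for v in x)

x = [1, -3, 2]
print(norm_1(x), norm_2(x), norm_inf(x))  # 6, sqrt(14) ≈ 3.742, 3

# Cauchy-Schwarz check for the second example:
x, y = [1, 2, 3], [0, 2, 1]
lhs = abs(sum(a * b for a, b in zip(x, y)))   # |x^T y| = 7
rhs = norm_2(x) * norm_2(y)                   # sqrt(14)*sqrt(5) = sqrt(70) ≈ 8.366
print(lhs, rhs, lhs <= rhs)
```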
Now that we have put together the basic necessary tools regarding vectors, we can start discussing how to handle matrices. Matrices are just rows of vectors; as such, we can essentially take the same tools from vectors and apply them to matrices. The matrix norm is defined via:

Definition 6. Suppose that A and B denote n x n matrices and k a constant. Then a matrix norm is a real-valued function || . || with the following properties:
(a) ||A|| >= 0,
(b) ||A|| = 0 if and only if A = 0,
(c) ||kA|| = |k| ||A|| for all k in R,
(d) ||A + B|| <= ||A|| + ||B||,
(e) ||AB|| <= ||A|| ||B||.

Based on the already presented vector norms we can in fact define new matrix norms. The following result can be shown.

Theorem 7. If || . || is any vector norm, then

    ||A|| = max_{||x|| = 1} ||Ax||

is the corresponding induced matrix norm.

Therefore we can now refer to the following types of matrix norms:

    ||A||_2 = max_{||x||_2 = 1} ||Ax||_2    or    ||A||_inf = max_{||x||_inf = 1} ||Ax||_inf.

Note that these norms are in fact not very easy to calculate directly, since you must examine all possible vectors of length 1 and find the maximum possible result. We present instead a result which allows us to easily calculate one of these matrix norms.

Theorem 8. Suppose that A is an m x n matrix. Then the infinity norm is calculated as the maximum absolute row sum,

    ||A||_inf = max_{1<=i<=m} sum_{j=1}^{n} |aij|,

and similarly ||A||_1 is found from the maximum absolute column sum,

    ||A||_1 = max_{1<=j<=n} sum_{i=1}^{m} |aij|.

Let us look at an example using this norm.

Example. Calculate ||A||_inf for the matrix

    A = [ 1  2 ]
        [ 2  2 ]

Solution. The norm is found by simply adding the absolute values across each row and taking the maximum result:

    |1| + |2| = 3,    |2| + |2| = 4.

Thus ||A||_inf = max(3, 4) = 4.
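Theorem 8 translates directly into code. The sketch below computes both norms by row and column sums; the example matrix in the notes is partially garbled in this copy, so A = [[1, 2], [2, 2]] is an assumption chosen to match the stated row sums 3 and 4:

```python
def matrix_norm_inf(A):
    """Maximum absolute row sum (Theorem 8)."""
    return max(sum(abs(v) for v in row) for row in A)

def matrix_norm_1(A):
    """Maximum absolute column sum."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

A = [[1, 2],
     [2, 2]]
print(matrix_norm_inf(A), matrix_norm_1(A))  # 4 4
```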
1.1 Eigenvalues, eigenvectors and condition number

As you noticed, it was not easy to obtain the matrix norm in some cases, such as ||A||_2. For this kind of norm we can develop another method which allows us to obtain the value of the matrix norm in an easy way. This method relies on eigenvalues and eigenvectors. First, some definitions.

Definition 9. Suppose that A is an n x n square matrix. Then the polynomial

    p(lambda) = det( A - lambda*I )

is the so-called characteristic polynomial. Note that p(lambda) is an n-th degree polynomial.

Definition 10. Let p be the characteristic polynomial defined above. The zeros of p are called eigenvalues of the matrix A. If, for a given eigenvalue lambda, we have (A - lambda*I)x = 0 with x != 0, then x is called an eigenvector corresponding to the eigenvalue lambda.

Let us look at a simple example of how to obtain the eigenvalues and eigenvectors for a given matrix A.

Example. Suppose the following matrix is given:

    A = [  1  1 ]
        [ -2  4 ]

Find the eigenvalues and eigenvectors of A.

Solution. To find the eigenvalues we solve

    (1 - lambda)(4 - lambda) + 2 = 0.

The polynomial simplifies to lambda^2 - 5*lambda + 6 = 0, and the solutions are lambda1 = 2 and lambda2 = 3. To find the corresponding eigenvectors we must solve the following matrix system for each eigenvalue:

    [ 1 - lambda      1      ] [v1]   [0]
    [    -2       4 - lambda ] [v2] = [0]

where (v1, v2) represents the unknown eigenvector. For lambda1 = 2 we must solve

    [ -1  1 ] [v1]   [0]
    [ -2  2 ] [v2] = [0]

which gives v1 = v2. Thus an eigenvector for lambda1 = 2 is (1, 1). Similarly, an eigenvector for lambda2 = 3 is (1, 2).

Definition 11. The spectral radius rho(A) of a given matrix A is given by

    rho(A) = max |lambda|,

where the maximum is taken over the eigenvalues lambda of A.

Now we are finally in position to provide an alternate method for evaluating the matrix norm ||A||_2, based on the eigenvalues of A.

Theorem 12. Suppose that A is an n x n matrix. Then:
(a) ||A||_2 = sqrt( rho(A^T A) ),
(b) rho(A) <= ||A|| for any induced matrix norm || . ||,
(c) if A is nonsingular, ||A^{-1}||_2 = 1 / sqrt( min |lambda| ), where the minimum is over the eigenvalues lambda of A^T A.

Let us see this result in more detail through a numerical example.

Example. Given the following matrix A, compute the ||A||_2 norm:

    A = [  0  1 ]
        [ -2  1 ]

Solution. Let us outline the procedure: we first calculate A^T A and then find its eigenvalues; according to the theorem above, the square root of the maximum eigenvalue will equal ||A||_2. Thus we start by calculating

    A^T A = [ 0  -2 ] [  0  1 ]   [  4  -2 ]
            [ 1   1 ] [ -2  1 ] = [ -2   2 ]

The eigenvalues of this matrix are lambda1 = 3 + sqrt(5) and lambda2 = 3 - sqrt(5). Thus

    ||A||_2 = sqrt( 3 + sqrt(5) ).

Another very important quantity for matrices is the condition number. The condition number of a matrix describes how "good" that matrix is. For instance, matrices that are not invertible have condition number infinity, which is considered as bad as possible. On the other hand, the smaller the condition number, the better the matrix behaves in our calculations.
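The ||A||_2 computation in the example above can be scripted for the 2 x 2 case, where the eigenvalues of A^T A come straight from the quadratic formula applied to the characteristic polynomial (a sketch, not part of the original notes):

```python
import math

A = [[0.0, 1.0],
     [-2.0, 1.0]]

# Form B = A^T A for the 2x2 case.
B = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

# Eigenvalues of a 2x2 matrix B from its characteristic polynomial
# lambda^2 - tr(B)*lambda + det(B) = 0.
tr = B[0][0] + B[1][1]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
lam_max = (tr + disc) / 2.0          # 3 + sqrt(5)
lam_min = (tr - disc) / 2.0          # 3 - sqrt(5)

norm_2 = math.sqrt(lam_max)          # ||A||_2 = sqrt(rho(A^T A))
print(norm_2)  # sqrt(3 + sqrt(5)) ≈ 2.288
```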
The condition number is defined as follows.

Definition 13. Suppose A is an n x n matrix and that A is nonsingular. Then the condition number of A is given by

    cond(A) = ||A|| ||A^{-1}||.

Note that if we use the Euclidean norm, then the condition number is given from the eigenvalues of A^T A:

    cond(A) = ||A||_2 ||A^{-1}||_2 = sqrt( max |lambda| / min |lambda| ),

where the maximum and minimum are taken over the eigenvalues lambda of A^T A. In fact, if A is symmetric, the following is true:

    cond(A) = ||A||_2 ||A^{-1}||_2 = max |lambda| / min |lambda|,

where now lambda ranges over the eigenvalues of A itself.
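For a symmetric matrix the last formula makes the condition number a one-liner once the eigenvalues are known. As an assumed test case (reusing the symmetric matrix A^T A = [[4, -2], [-2, 2]] from the earlier norm example, whose eigenvalues are 3 ± sqrt(5)):

```python
import math

# Condition number of a symmetric 2x2 matrix via its eigenvalues.
B = [[4.0, -2.0],
     [-2.0, 2.0]]

tr = B[0][0] + B[1][1]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
lam_max = (tr + disc) / 2.0          # 3 + sqrt(5)
lam_min = (tr - disc) / 2.0          # 3 - sqrt(5)

cond = abs(lam_max) / abs(lam_min)   # cond(B) = max|lambda| / min|lambda|
print(cond)  # (3 + sqrt(5)) / (3 - sqrt(5)) ≈ 6.854
```

A condition number near 1 indicates a well-behaved matrix; very large values warn that round-off errors may be amplified badly when solving BX = C.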