
# Numerical Analysis MATH 3176

UNCC



These 10 pages of class notes were uploaded by Mrs. Dangelo Fahey on Sunday, October 25, 2015. The notes belong to MATH 3176 at the University of North Carolina - Charlotte, taught by Alexandros Sopasakis in the Fall. Since upload, they have received 30 views. For similar materials see /class/228904/math-3176-university-of-north-carolina-charlotte in Mathematics (M) at the University of North Carolina - Charlotte.



## Introduction to Scientific Computing (Math 3176, Spring 2009)

Instructor: Alexandros Sopasakis, Department of Mathematics and Statistics, University of North Carolina at Charlotte

### Iterative methods for $Ax = B$

As the name denotes, we will now attempt to solve the system $Ax = B$ with an iterative scheme instead of a direct method. The difference is that the solution produced by any of the direct methods presented in the previous section is exact. In contrast, as is often the case with any iterative scheme, the solutions here are not exact but only approximations within a given tolerance of the true solution. Iterative techniques, on the other hand, are quite useful when the number of equations to be solved is large, i.e. when the size of the matrix is large. Furthermore, iterative methods tend to be stable, so small initial errors do not pile up during the iterative process and blow up in the end.

### Jacobi and Gauss-Seidel methods

The most basic matrix iterative scheme is the Jacobi iteration. It is based on a very simple idea: solve each row of your system for the diagonal entry. Thus, if for instance we wish to solve the system

$$\begin{bmatrix} 4 & -3 \\ -2 & 5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 5 \\ 6 \end{bmatrix}, \tag{1}$$

we first solve each row for its diagonal element and obtain

$$x_1 = \frac{3}{4}x_2 + \frac{5}{4}, \qquad x_2 = \frac{2}{5}x_1 + \frac{6}{5}.$$

Thus the Jacobi iterative scheme starts with some guess for $x_1$ and $x_2$ on the right-hand side of this equation and produces, after several iterations, improved estimates which approach the true solution $x$. In matrix form the system above can be written

$$x^{(m)} = \begin{bmatrix} 0 & 3/4 \\ 2/5 & 0 \end{bmatrix} x^{(m-1)} + \begin{bmatrix} 5/4 \\ 6/5 \end{bmatrix}, \tag{2}$$

where you can clearly see how the iteration progresses: the previous estimate of the solution, $x^{(m-1)}$, goes in on the right-hand side, and a new estimate, which is supposed to be better, comes out on the left-hand side. Let us examine the output of the Jacobi scheme for a few iterations based on this numerical example. We will assume that the initial guess is taken, without loss of generality, to be $x^{(0)} = (0, 0)^T$:

| $n$ | 0 | 5 | 10 | 15 | 20 | Exact sol. |
|-----|---|---|----|----|----|------------|
| $x_1$ | 0 | 2.907500 | 3.063965 | 3.071030 | 3.071404 | 3.0714285714 |
| $x_2$ | 0 | 2.318000 | 2.422670 | 2.428302 | 2.428557 | 2.4285714285 |

A very simple but effective improvement to the Jacobi scheme has been suggested: the Gauss-Seidel method, which simply uses the new value of $x_1$ in the second row of (2). Thus the Gauss-Seidel method for the example above is

$$x_1^{(m)} = \frac{3}{4}x_2^{(m-1)} + \frac{5}{4}, \qquad x_2^{(m)} = \frac{2}{5}x_1^{(m)} + \frac{6}{5}. \tag{3}$$

Let us similarly compare a small number of iterates using the Gauss-Seidel method in the following table:

| $n$ | 0 | 5 | 10 | 15 | 20 | Exact sol. |
|-----|---|---|----|----|----|------------|
| $x_1$ | 0 | 3.056675 | 3.071392 | 3.071428 | 3.071428 | 3.0714285714 |
| $x_2$ | 0 | 2.422670 | 2.428557 | 2.428571 | 2.428571 | 2.4285714285 |

### Generalization of iterative methods

We will now generalize our findings and produce a general theory under which to study iterative schemes. In order to do this, we create a setup which formulates the problem using a "splitting" technique with an auxiliary matrix $Q$, to be specified later. Let us first outline our general setup. We start, as usual, from the main system of equations in matrix form,

$$Ax = B, \tag{4}$$

where, as we have seen before, $A$ and $B$ are known while $x$ denotes the vector of unknowns. First we bring the $Ax$ term to the left-hand side, $0 = -Ax + B$, and then we add the auxiliary vector $Qx$ to both sides of (4) to produce the system

$$Qx = (Q - A)x + B. \tag{5}$$

This new system will be used to define our iteration as follows:

$$Qx^{(m)} = (Q - A)x^{(m-1)} + B. \tag{6}$$

The iterative process can now be initiated with a given initial vector $x^{(0)}$ for the solution. This is usually only a guess, but if any information is known about the solution, it should be used to obtain a better initial guess $x^{(0)}$. Given this setup, the only and most important thing left to do is choose a matrix $Q$ so that the iterative process outlined in (6) will

- converge to the true solution $x$, and
- produce the solution in a small number of iterations.

We have already seen a couple of iterative methods which fulfill these tasks in one way or another: the Jacobi and the Gauss-Seidel iterative schemes can both be written in the general form outlined here. Note that, in fact, for the Jacobi iteration $Q = \operatorname{diag}(A)$, while for Gauss-Seidel $Q$ is the lower triangular part of $A$.
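As a quick sanity check on the Jacobi and Gauss-Seidel tables above, the following short Python sketch (not part of the original notes) runs five steps of each scheme on the 2x2 example; the coefficients are the reconstruction that reproduces the tabulated iterates exactly:

```python
# The 2x2 example system, with coefficients reconstructed to match the tables:
#    4*x1 - 3*x2 = 5
#   -2*x1 + 5*x2 = 6
def jacobi_step(x1, x2):
    # Each row is solved for its own diagonal unknown, using only old values.
    return (5 + 3 * x2) / 4, (6 + 2 * x1) / 5

def gauss_seidel_step(x1, x2):
    # Same idea, but the freshly computed x1 feeds straight into the x2 update.
    x1_new = (5 + 3 * x2) / 4
    return x1_new, (6 + 2 * x1_new) / 5

jac = gs = (0.0, 0.0)          # initial guess x^(0) = (0, 0)^T
for _ in range(5):
    jac = jacobi_step(*jac)
    gs = gauss_seidel_step(*gs)
print(jac)  # ~ (2.9075, 2.318): the n = 5 column of the Jacobi table
print(gs)   # ~ (3.056675, 2.42267): the n = 5 column of the Gauss-Seidel table
```

Running more steps drives both sequences toward the exact solution $(43/14,\ 17/7) \approx (3.0714285714,\ 2.4285714285)$, with Gauss-Seidel getting there in roughly half as many iterations.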
It is important to know when we can expect to have a solution of (6). Is it always possible to obtain a solution from this iterative scheme? The answer is naturally no! We develop below a result which indicates whether we should expect our iteration to be successful or not. First we observe that the theoretical solution of (5) is simply found from

$$x = Q^{-1}(Q - A)x + Q^{-1}B = (I - Q^{-1}A)x + Q^{-1}B.$$

The iterative scheme corresponding to this setup is

$$x^{(m)} = Gx^{(m-1)} + C,$$

where $G = I - Q^{-1}A$ and $C = Q^{-1}B$. We first define what we mean by convergence of an iterative method.

**Definition 1.** An $n \times n$ matrix $A$ is said to be *convergent* if $\lim_{m\to\infty} (A^m)_{ij} = 0$ for $i, j = 1, 2, \dots, n$.

Further, the following holds.

**Theorem 2.** The following are equivalent:

- $A$ is a convergent matrix;
- $\lim_{m\to\infty} \|A^m\| = 0$;
- $\lim_{m\to\infty} A^m x = 0$ for every $x$;
- $\rho(A) < 1$.

Then the following theorem gives a very useful result.

**Theorem 3.** The iterative scheme $x^{(m)} = Gx^{(m-1)} + C$ converges to the unique solution of $x = Gx + C$ for any initial guess $x^{(0)}$ if and only if $\rho(G) < 1$.

*Proof.* Subtracting $x = Gx + C$ from $x^{(m)} = Gx^{(m-1)} + C$, we obtain

$$x^{(m)} - x = (I - Q^{-1}A)\,(x^{(m-1)} - x).$$

Simply taking norms on both sides of the above, we have

$$\|x^{(m)} - x\| = \|(I - Q^{-1}A)(x^{(m-1)} - x)\| \le \|I - Q^{-1}A\|\,\|x^{(m-1)} - x\|.$$

Applying this inequality repeatedly for $m-1, m-2, \dots$ we obtain

$$\|x^{(m)} - x\| \le \|I - Q^{-1}A\|\,\|x^{(m-1)} - x\| \le \|I - Q^{-1}A\|^2\,\|x^{(m-2)} - x\| \le \cdots \le \|I - Q^{-1}A\|^m\,\|x^{(0)} - x\|.$$

Thus, clearly, if we assume that $\rho(G) < 1$ then, based on Theorem 2,

$$\lim_{m\to\infty} \|I - Q^{-1}A\|^m = 0, \quad \text{and hence} \quad \|x^{(m)} - x\| \to 0.$$

Thus convergence. We leave the opposite direction of this proof to the reader, since it follows from this outline. $\square$

This proof is in fact instructive in terms of answering other interesting questions, such as: how many iterations of the Jacobi method are necessary for the solution to be found within a given tolerance? Let us look at such an example.

**Example.** Find the number of iterations needed so that the Jacobi method, starting from the vector $x^{(0)} = (0, 0)^T$, will reach the solution within a relative error tolerance of $10^{-4}$ for the matrix

$$A = \begin{bmatrix} 3 & 2 \\ 2 & 3 \end{bmatrix}.$$

**Solution.** We need to approach the solution $x$ using the Jacobi iteration

$$x^{(m)} = Gx^{(m-1)} + C. \tag{7}$$

Suppose then that the solution is $x$. Then iteration (7) has to be satisfied by the solution $x$ as follows: $x = Gx + C$. Subtracting these two equations and taking norms, we obtain

$$\|x^{(m)} - x\| \le \|G\|\,\|x^{(m-1)} - x\|.$$

Repeating this $m-1$ more times, we obtain

$$\|x^{(m)} - x\| \le \|G\|^m\,\|x^{(0)} - x\|.$$

Note, however, that for this problem we have chosen $x^{(0)} = (0, 0)^T$. Thus the above becomes

$$\|x^{(m)} - x\| \le \|G\|^m \|x\|, \quad \text{or} \quad \frac{\|x^{(m)} - x\|}{\|x\|} \le \|G\|^m.$$

We make the assumption that the spectral radius $\rho(G)$ is approximately equal to $\|G\|$ (note that this is in fact true if $G$ is symmetric and we use the Euclidean norm). Thus for $\rho(G) \approx \|G\|$ we obtain

$$\frac{\|x^{(m)} - x\|}{\|x\|} \le \rho(G)^m.$$

Note that the left-hand side is nothing more than the relative error. Therefore, in order to find out how many iterations are necessary to approach the solution within a relative error tolerance of $10^{-4}$, we must solve for $m$ the equation

$$\rho(G)^m = 10^{-4}.$$

Here the Jacobi iteration matrix $G = I - D^{-1}A$ has spectral radius $\rho(G) = 2/3$, so we obtain the following inequality for $m$:

$$m \ge \frac{\ln 10^{-4}}{\ln \rho(G)} = \frac{\ln 10^{-4}}{\ln (2/3)} \approx 22.71.$$

Thus if we choose $m = 23$, we should be within $10^{-4}$ of the true solution for this system.

Let us now look at some theoretical results for each of the methods presented so far.

**Theorem 4.** If $A$ is diagonally dominant, then the sequence produced by either the Jacobi or the Gauss-Seidel iterations converges to the solution of $Ax = B$ for any starting guess $x^{(0)}$.

We outline the proof here only for the Jacobi iteration, since the Gauss-Seidel case is similar.

*Proof.* Note that the Jacobi iteration matrix $G$ can be written as $G = D^{-1}(L + U)$. In that case, taking the matrix norm of the above, we get

$$\|G\|_\infty = \|D^{-1}(L+U)\|_\infty = \max_{1 \le i \le n} \frac{\sum_{j \ne i} |a_{ij}|}{|a_{ii}|} < 1,$$

where the last inequality holds simply by the definition of $A$ being diagonally dominant. $\square$

There are several other methods which can be constructed to converge to the solution of our system. Richardson's method is one of them, where we choose $Q$ to be the identity matrix $I$.
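A few lines of Python reproduce the $m = 23$ estimate. The matrix `A` below is the reconstruction used in the example (the scan is garbled at that point, so treat it as an assumption), and for a 2x2 matrix the spectral radius follows directly from the characteristic polynomial:

```python
import math

# Reconstructed 2x2 example matrix (assumed); it yields rho(G) = 2/3 and m = 23.
A = [[3.0, 2.0], [2.0, 3.0]]

# Jacobi iteration matrix G = I - D^{-1} A (zero diagonal for Jacobi).
G = [[0.0, -A[0][1] / A[0][0]],
     [-A[1][0] / A[1][1], 0.0]]

# Spectral radius of a 2x2 matrix from lambda^2 - tr*lambda + det = 0.
tr = G[0][0] + G[1][1]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
disc = tr * tr - 4.0 * det
if disc >= 0:
    rho = max(abs((tr + math.sqrt(disc)) / 2), abs((tr - math.sqrt(disc)) / 2))
else:
    rho = math.sqrt(det)  # complex conjugate pair: |lambda|^2 = det

# Smallest m with rho^m <= 1e-4.
m = math.ceil(math.log(1e-4) / math.log(rho))
print(rho, m)  # rho = 2/3, m = 23
```

For larger systems the same estimate applies, with the spectral radius computed numerically (e.g. from the eigenvalues of $G$).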
Let us now look at an overall outline of the main iterative methods so far. Suppose that we can write $A$ as

$$A = D - L - U,$$

where, as usual, $D$ is diagonal and $L, U$ are lower and upper triangular respectively, with zeros on their diagonals. Then the iterative methods can be described by the general system

$$x^{(m)} = Gx^{(m-1)} + C,$$

where $G$ and $C$ are given below in terms of the known matrices $A$ and $B$ for each scheme:

- Jacobi: $G = D^{-1}(L + U)$ and $C = D^{-1}B$;
- Richardson: $G = I - A$ and $C = B$;
- Gauss-Seidel: $G = (D - L)^{-1}U$ and $C = (D - L)^{-1}B$.

In terms of speed, you should always keep in mind that iterative, direct, or other methods always depend on the problem at hand. For instance, each iteration using either the Gauss-Seidel or the Jacobi method requires about $n^2$ operations. However, if you are solving a small system of equations, then Gaussian elimination is much faster. Take, for example, a small $3 \times 3$ system. If you perform a Jacobi iteration on it, you will require about 9 operations per iteration, and you may need to perform more than 100 iterations to obtain a very good estimate, for a total of at least 900 operations. On the other hand, Gaussian elimination only needs to perform about $3^3 = 27$ operations to solve the whole system and produce the exact solution! In fact, it can be shown that iteration is preferable to Gaussian elimination if

$$\frac{\ln \epsilon}{\ln \rho} < \frac{n}{3}. \tag{8}$$

Here $n$ corresponds to the size of the matrix $A$, $\rho$ refers to the spectral radius of the iterative scheme, $\rho \equiv \rho(G)$, and $\epsilon$ is the given relative error tolerance we wish to obtain from the iteration.

Let us look at a simple example of this result.

**Example.** As usual, we wish to solve the matrix system $Ax = B$. Suppose that $A$ is a $30 \times 30$ matrix and that the spectral radius for the Gauss-Seidel iterative scheme is found to be $\rho(G) = 0.4$. Suppose also that we wish to find the solution accurate to within $\epsilon = 10^{-5}$. Is it best to perform the Gauss-Seidel iteration or just simple Gaussian elimination?

**Solution.** Note that for this example

$$\frac{\ln \epsilon}{\ln \rho(G)} = \frac{\ln 10^{-5}}{\ln 0.4} \approx 12.56.$$

Therefore inequality (8) gives $12.56 < 10$.
Not true! Thus, in this case, Gaussian elimination is actually going to be faster!

One thing to keep in mind is that there are matrix systems for which one method might converge while another might not, the reason usually being that the spectral radius of the iteration is not less than 1. So speed is not everything when it comes to iterative schemes. Let us outline some important points about these methods and compare them with other techniques:

- Gauss-Seidel is faster than Jacobi.
- The Gauss-Seidel and Jacobi methods have a cost of about $n^2$ operations per iteration.
- One iterative scheme may converge to the solution while another may not. This may depend on the choice of the initial guess $x^{(0)}$, but more importantly on whether the spectral radius of the iterative scheme satisfies $\rho(G) < 1$.
- Gaussian elimination, although it costs about $n^3/3$ operations, may be faster when it comes to moderate-size systems.

Let us see the pseudocode for some methods.

**Jacobi.** Suppose that we are provided with a matrix $A$, a vector $B$, and a starting guess vector $x^{(0)}$.

1. For $i = 1$ to $n$ do: $x_i = x^{(0)}_i$.
2. While a given tolerance $\epsilon$ is not yet satisfied, do the following:
   - For $i = 1$ to $n$ do:
     $$y_i = \frac{B_i - \sum_{j \ne i} A_{ij} x_j}{A_{ii}}.$$
   - For $i = 1$ to $n$ do: $x_i = y_i$.
3. Print out the vector $y$.

**Gauss-Seidel.** Suppose that we are provided with a matrix $A$, a vector $B$, and a starting guess vector $x^{(0)}$.

1. For $i = 1$ to $n$ do: $x_i = x^{(0)}_i$.
2. While a given tolerance $\epsilon$ is not yet satisfied, for $i = 1$ to $n$ do the following two steps:
   $$y_i = \frac{B_i - \sum_{j \ne i} A_{ij} x_j}{A_{ii}}, \qquad x_i = y_i.$$
3. Print out the vector $x$.

However, we can in fact come up with methods which, under appropriate conditions, converge to the solution even faster.

### The Successive Over-Relaxation (SOR) method

We are now going to examine advanced iterative schemes for our matrix system. SOR stands for Successive Over-Relaxation; similarly, SSOR stands for Symmetric SOR. These schemes are much faster than the ones presented so far, but also much more involved computationally. They rely on an extra *relaxation* (or *acceleration*) parameter $0 < \omega < 2$, the value of which determines the speed-up of the method.
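The Jacobi and Gauss-Seidel pseudocode above translates almost line for line into Python. This is an illustrative sketch (the function names and the step-difference stopping rule are my own, not from the notes), applied to the 2x2 system from the beginning of the section:

```python
def jacobi(A, b, x0, tol=1e-10, max_iter=1000):
    """Jacobi sweep: the whole update y is built from the previous x."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        y = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
        if max(abs(y[i] - x[i]) for i in range(n)) < tol:
            return y
        x = y
    return x

def gauss_seidel(A, b, x0, tol=1e-10, max_iter=1000):
    """Gauss-Seidel sweep: each fresh component overwrites x immediately."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            y_i = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            diff = max(diff, abs(y_i - x[i]))
            x[i] = y_i
        if diff < tol:
            break
    return x

A = [[4.0, -3.0], [-2.0, 5.0]]
b = [5.0, 6.0]
xj = jacobi(A, b, [0.0, 0.0])
xg = gauss_seidel(A, b, [0.0, 0.0])
print(xj)  # both approach (43/14, 17/7) ~ (3.07142857, 2.42857143)
print(xg)
```

The only structural difference between the two functions is where the new component is stored, which is exactly the point of the pseudocode.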
However, the exact choice of this relaxation parameter is not always clear from the beginning. When $\omega = 1$, this method is in fact exactly the same as the Gauss-Seidel method; when $\omega < 1$, the method is also called successive *under*-relaxation, and when $\omega > 1$, it is successive *over*-relaxation.

Let us be more specific, though. The SOR-type methods explore what the best value of the acceleration parameter $\omega$ is, so as to achieve faster convergence. The speed-up is obtained by making sure that the spectral radius of the matrix $G$ is as small as possible. This is where the value of the parameter $\omega$ comes in handy: it is chosen so as to reduce the spectral radius as much as possible.

Based on our previous general setup, where we split $A = D - L - U$, the SOR and SSOR methods are found from iterations of the form

$$x^{(m)} = Gx^{(m-1)} + C,$$

where for SOR

$$G = (D - \omega L)^{-1}\bigl[(1 - \omega)D + \omega U\bigr] \quad \text{and} \quad C = \omega (D - \omega L)^{-1} B.$$

Once again the idea, although very difficult, is to choose $\omega$ in such a way as to minimize $\rho(G)$. Let us look into some results about the general convergence of these methods.

**Definition 5.** A matrix $A$ is said to be *Hermitian* if $A^* = A$, where the notation $A^*$ denotes the conjugate transpose of the matrix $A$; that is, $A^* = \bar{A}^T$. Since here we will only deal with real matrices, the term Hermitian is equivalent to simply $A = A^T$.

**Theorem 6.** If $A$ is a positive definite Hermitian matrix and $Q$ is nonsingular, then the SOR method for $0 < \omega < 2$ converges for any initial guess $x^{(0)}$.

Again we omit the proof of this theorem, since it is not within the scope of this class. The following result can now be proved, although we omit the proof here as well.

**Lemma 7.** Suppose that $A$ is a symmetric positive definite tridiagonal matrix. Suppose also that the iteration matrix using Gauss-Seidel is $G_{GS}$. Then the optimal value of $\omega$ for the SOR method is given by

$$\omega = \frac{2}{1 + \sqrt{1 - \rho(G_{GS})}}.$$

In general, though, the SOR method diverges if $\omega < 0$ or $\omega > 2$.

A simple example can be helpful in understanding the difference in speed of convergence for different choices of $\omega$.
**Example.** Suppose that we wish to solve a system $Ax = B$ using the SOR method for the optimal value of $\omega$ based on the previous Lemma.

**Solution.** According to the Lemma, the optimal value of $\omega$ based on the Gauss-Seidel iteration is found by first calculating the spectral radius of $G_{GS}$. The iteration matrix for the Gauss-Seidel method is

$$G_{GS} = (D - L)^{-1} U.$$

Computing the spectral radius $\rho(G_{GS})$ of this matrix, the optimal value of $\omega$ then follows from

$$\omega = \frac{2}{1 + \sqrt{1 - \rho(G_{GS})}}.$$

We try different values of $\omega$ and check how many iterations are required to reach the exact solution with a tolerance of $10^{-7}$, starting in each case from the same initial guess $x^{(0)}$:

| $\omega$ | 0.1 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
|---|---|---|---|---|---|---|
| Iterations | 180 | 35 | 28 | 23 | 20 | 17 |

| $\omega$ | 1.0 | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 |
|---|---|---|---|---|---|---|
| Iterations | 15 | 20 | 28 | 43 | 77 | >200 |

Let us see the pseudocode for SOR.

**Successive Over-Relaxation.** Suppose that we are provided with a matrix $A$, a vector $B$, a relaxation parameter $\omega$, and a starting guess vector $x^{(0)}$.

1. For $i = 1$ to $n$ do: $x_i = x^{(0)}_i$.
2. While a given tolerance $\epsilon$ is not yet satisfied, for $i = 1$ to $n$ do the following two steps:
   $$y_i = \frac{B_i - \sum_{j \ne i} A_{ij} x_j}{A_{ii}}, \qquad x_i = (1 - \omega)\,x_i + \omega\,y_i.$$
3. Print out the vector $x$.
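The SOR pseudocode can be sketched in Python as follows. The test matrix here is an illustrative symmetric positive definite tridiagonal system of my own choosing (the matrix in the original example did not survive the scan), but it shows the same qualitative behavior as the table above: the iteration count is smallest for $\omega$ just above 1 and blows up as $\omega$ approaches 2:

```python
def sor(A, b, omega, tol=1e-7, max_iter=500):
    """One SOR solve: blend each Gauss-Seidel update with the old value."""
    n = len(b)
    x = [0.0] * n                      # initial guess x^(0) = 0
    for k in range(1, max_iter + 1):
        diff = 0.0
        for i in range(n):
            gs = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            new = (1 - omega) * x[i] + omega * gs   # omega = 1 is pure Gauss-Seidel
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:
            return x, k                # converged after k sweeps
    return x, max_iter

# Illustrative SPD tridiagonal system; exact solution is (1, 2, 1).
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 6.0, 2.0]

results = {}
for omega in (0.5, 0.9, 1.0, 1.1, 1.4, 1.9):
    x, k = sor(A, b, omega)
    results[omega] = k
    print(omega, k)   # count is smallest near omega ~ 1 and grows toward omega = 2
```

For this particular matrix, $\rho(G_{GS}) = 0.125$, so Lemma 7 puts the optimal value at $\omega = 2/(1 + \sqrt{0.875}) \approx 1.03$, consistent with the minimum of the sweep falling just above $\omega = 1$.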
