# Num Meth Engr MATH 371

These 31 pages of class notes were uploaded by Mrs. Preston Lehner on Thursday, October 29, 2015. The notes belong to MATH 371 at the University of Michigan, taught by Robert Krasny in Fall. Since their upload, they have received 8 views.

Date Created: 10/29/15

**Lecture 18 (Thursday 11/6)**

## Chapter 4: computing eigenvalues

| eigenvalue | eigenvector |
| --- | --- |
| frequency | normal mode |
| growth rate | principal component |
| energy level | bound state |

**Assumption.** $A$ is symmetric. This implies that the eigenvalues $\lambda_i$ are real and the eigenvectors $q_i$ form an orthonormal basis, i.e. $q_i^T q_j = \delta_{ij}$, and any $x$ can be written as a linear combination of the $q_i$ (proof omitted). We further assume $|\lambda_1| > |\lambda_2| > \cdots > |\lambda_n|$.

**ex.** $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$: $f(\lambda) = \det(A - \lambda I) = (2-\lambda)^2 - 1 = 0 \Rightarrow \lambda_1 = 3,\ \lambda_2 = 1$.

- $\lambda_1 = 3$: $Ax = 3x \Rightarrow \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 3\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \Rightarrow x_2 = x_1$; choose $x_1 = 1$, normalize: $q_1 = \frac{1}{\sqrt 2}\begin{pmatrix}1\\1\end{pmatrix}$, $\|q_1\|_2 = 1$.
- $\lambda_2 = 1$: $Ax = x \Rightarrow x_2 = -x_1$; normalize: $q_2 = \frac{1}{\sqrt 2}\begin{pmatrix}1\\-1\end{pmatrix}$.
- check: $q_1^T q_2 = 0$.

**Obvious approach for computing eigenvalues:** form $f(\lambda) = \det(A - \lambda I)$ and solve $f(\lambda) = 0$ by the methods of Chapter 2.

**ex.** $A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, perturbed matrix $\tilde A = \begin{pmatrix} 1 & \epsilon \\ \epsilon & 1 \end{pmatrix}$:
$$f_A(\lambda) = \lambda^2 - 2\lambda + 1 \Rightarrow \lambda = 1; \qquad f_{\tilde A}(\lambda) = \lambda^2 - 2\lambda + 1 - \epsilon^2 \Rightarrow \lambda = 1 \pm \epsilon.$$
This shows that the eigenvalues of the perturbed matrix are close to the eigenvalues of the original matrix; this is true for symmetric matrices in general. However, note that a small change of size $\epsilon^2$ in the coefficients of the characteristic polynomial leads to a relatively large change of size $\epsilon$ in the roots. Hence the roots of $f(\lambda)$ depend very sensitively on the coefficients.

**ex (Wilkinson).** $A = \mathrm{diag}(1, 2, \ldots, 20)$, $f_A(\lambda) = (1-\lambda)(2-\lambda)\cdots(20-\lambda) = \sum_{k=0}^{20} a_k\lambda^k$. Set $\tilde a_k = a_k(1 + 10^{-10}\epsilon_k)$ with $\epsilon_k$ random, and compute the roots of $\tilde p(\lambda) = \sum_{k=0}^{20}\tilde a_k\lambda^k$. In Matlab:

```matlab
plot(zeros(1,20), 'o'), hold on
for i = 1:100
    r = roots(poly(1:20) .* (ones(1,21) + 1e-10*randn(1,21)));
    plot(r, 'x'), axis([0 25 -6 6])
end
```

*(Figure: the perturbed roots scatter in the complex plane, with real parts between 0 and 25 and imaginary parts up to about ±6.)*

This example emphasizes that the roots of the characteristic polynomial can depend very sensitively on the coefficients, even if the eigenvalue problem for the matrix is well conditioned. Hence solving $f(\lambda) = 0$ is not a suitable approach for computing eigenvalues in general.
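The perturbed-identity example above can be checked numerically. Here is a minimal sketch in Python with NumPy (the notes' own code is Matlab; this translation is mine), comparing eigenvalues computed directly from the symmetric matrix with the roots of its characteristic polynomial:

```python
import numpy as np

eps = 1e-6
A = np.array([[1.0, eps], [eps, 1.0]])

# Eigenvalues computed directly from the symmetric matrix:
evals = np.linalg.eigvalsh(A)          # approximately [1 - eps, 1 + eps]

# Same eigenvalues via the characteristic polynomial lam^2 - 2 lam + (1 - eps^2):
coeffs = [1.0, -2.0, 1.0 - eps**2]
roots = np.sort(np.roots(coeffs))

print(evals, roots)
```

For a 2-by-2 matrix both routes agree; Wilkinson's 20-by-20 example shows that the polynomial route degrades badly as the degree grows.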
**Lecture 19 (Tuesday 11/11)**

**def.** Given any $x \ne 0$, define $R_A(x) = \dfrac{x^T A x}{x^T x}$ (Rayleigh quotient).

**note.**

1. For $x = q_j$: $R_A(q_j) = \dfrac{q_j^T A q_j}{q_j^T q_j} = \dfrac{\lambda_j\, q_j^T q_j}{q_j^T q_j} = \lambda_j$.
2. For $x \approx q_j$, $R_A(x)$ is an approximation to $\lambda_j$, and we can derive an error estimate by Taylor expansion. First recall the notation: for a smooth $f$,
$$f(x) = f(a) + \nabla f(a)\cdot(x - a) + O(\|x - a\|^2).$$
Now apply this to the Rayleigh quotient with $a = q_j$:
$$R_A(x) = R_A(q_j) + \nabla R_A(q_j)\cdot(x - q_j) + O(\|x - q_j\|^2).$$
Using $\nabla(x^TAx) = 2Ax$ and $\nabla(x^Tx) = 2x$,
$$\nabla R_A(x) = \frac{2Ax\,(x^Tx) - (x^TAx)\,2x}{(x^Tx)^2} = \frac{2}{x^Tx}\bigl(Ax - R_A(x)\,x\bigr),$$
so $\nabla R_A(q_j) = \frac{2}{\|q_j\|_2^2}(Aq_j - \lambda_j q_j) = 0$ and
$$R_A(x) = \lambda_j + O(\|x - q_j\|^2) \quad \text{(quadratic approximation).}$$

## Section 4.1: power method

idea: $x = \sum_i \alpha_i q_i \Rightarrow A^k x = \sum_i \alpha_i \lambda_i^k q_i$, and the $\lambda_1$ term dominates.

algorithm:

1. $v^{(0)}$ given, $\|v^{(0)}\|_2 = 1$
2. for $k = 1, 2, \ldots$
3. &nbsp;&nbsp;&nbsp; $w = Av^{(k-1)}$ (this can be done efficiently if $A$ is sparse)
4. &nbsp;&nbsp;&nbsp; $v^{(k)} = w/\|w\|_2$ (this is done to avoid overflow/underflow)
5. &nbsp;&nbsp;&nbsp; $\lambda^{(k)} = (v^{(k)})^T A v^{(k)}$

**thm.** Assume $|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_n|$ and $q_1^T v^{(0)} \ne 0$. Then
$$v^{(k)} = \pm q_1 + O\!\left(\left|\tfrac{\lambda_2}{\lambda_1}\right|^k\right), \qquad \lambda^{(k)} = \lambda_1 + O\!\left(\left|\tfrac{\lambda_2}{\lambda_1}\right|^k\right),$$
where the $\pm$ depends on the sign of $\lambda_1$.

pf (sketch): $v^{(0)} = \alpha_1 q_1 + \alpha_2 q_2 + \cdots + \alpha_n q_n$, where $\alpha_i = q_i^T v^{(0)}$. Then
$$v^{(k)} \propto A^k v^{(0)} = \lambda_1^k\left(\alpha_1 q_1 + \alpha_2\left(\tfrac{\lambda_2}{\lambda_1}\right)^k q_2 + \cdots + \alpha_n\left(\tfrac{\lambda_n}{\lambda_1}\right)^k q_n\right). \ \square$$

note: if $q_1^T v^{(0)} = 0$, then the scheme converges to $q_2$, $\lambda_2$.

note: the power method has some limitations:

1. it only gives the largest eigenvalue $\lambda_1$;
2. $v^{(k)}$, $\lambda^{(k)}$ converge linearly, and the convergence factor $\left|\tfrac{\lambda_2}{\lambda_1}\right|$ may not be small.

## Section 4.2: inverse iteration

idea: apply the power method to $(A - \mu I)^{-1}$.

1. $w = A^{-1}v \iff Aw = v$, so $w$ can be found by solving a linear system.
2. $Aq_i = \lambda_i q_i \Rightarrow A^{-1}q_i = \lambda_i^{-1} q_i$. The largest eigenvalue of $A^{-1}$ is $\lambda_n^{-1}$, so the vectors $v^{(k)}$ converge to $q_n$.
3. $(A - \mu I)q_i = (\lambda_i - \mu)q_i \Rightarrow (A - \mu I)^{-1}q_i = (\lambda_i - \mu)^{-1}q_i$. The largest eigenvalue of $(A - \mu I)^{-1}$ is $(\lambda_j - \mu)^{-1}$, where $\lambda_j$ is the eigenvalue of $A$ closest to $\mu$, so the vectors $v^{(k)}$ converge to $q_j$.

algorithm:

1. $v^{(0)}$ given, $\|v^{(0)}\|_2 = 1$
2. for $k = 1, 2, \ldots$
3. &nbsp;&nbsp;&nbsp; solve $(A - \mu I)w = v^{(k-1)}$, e.g. by LU factorization
4. &nbsp;&nbsp;&nbsp; $v^{(k)} = w/\|w\|_2$
5. &nbsp;&nbsp;&nbsp; $\lambda^{(k)} = (v^{(k)})^T A v^{(k)}$, which converges to $\lambda_j$

note: using a suitable shift $\mu$, any eigenvalue of $A$ can be obtained, and the rate of convergence can be accelerated.

## Section 3.0: review of linear algebra

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\ \,\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n \end{aligned}$$

system of linear equations for $x_1, \ldots, x_n$. We can write the system in 3 other forms:

1. $\sum_{j=1}^n a_{ij}x_j = b_i$, $i = 1, \ldots, n$ ($i$ row index, $j$ column index)
2. $\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$
3. $Ax = b$, $A = (a_{ij})$
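Returning to sections 4.1 and 4.2 above, the power method and shifted inverse iteration can be sketched in a few lines. This is a Python/NumPy illustration of mine (the notes use Matlab), tried on the running example $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ with eigenvalues 3 and 1:

```python
import numpy as np

def power_method(A, v0, iters=50):
    """Power iteration: returns the Rayleigh-quotient estimate and the vector."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)   # normalize to avoid overflow/underflow
    return v @ A @ v, v

def inverse_iteration(A, mu, v0, iters=50):
    """Shifted inverse iteration: converges to the eigenvalue of A closest to mu."""
    n = A.shape[0]
    v = v0 / np.linalg.norm(v0)
    for _ in range(iters):
        # In practice one would factor A - mu*I once (LU) and reuse it:
        w = np.linalg.solve(A - mu * np.eye(n), v)
        v = w / np.linalg.norm(w)
    return v @ A @ v, v

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam1, q1 = power_method(A, np.array([1.0, 0.0]))
lam2, q2 = inverse_iteration(A, 0.9, np.array([1.0, 0.0]))
print(lam1, lam2)   # approximately 3 and 1
```

The shift 0.9 is closest to the eigenvalue 1, so inverse iteration picks out $q_2$ rather than $q_1$.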
basic problem: given $A$, $b$, find $x$. (Is the solution $x = b/A$? No; but `x = A\b` does work in Matlab.)

**thm.** The following conditions are equivalent:

1. The equation $Ax = b$ has a unique solution for any vector $b$.
2. $A$ is invertible, i.e. there exists $A^{-1}$ such that $AA^{-1} = I$.
3. $\det A \ne 0$.
4. The equation $Ax = 0$ has the unique solution $x = 0$.
5. The columns of $A$ are linearly independent.
6. The eigenvalues of $A$ are nonzero.

note:

1. If $A$ is invertible then $x = A^{-1}b$, because then $Ax = A(A^{-1}b) = (AA^{-1})b = Ib = b$; but this is not the best way to compute $x$ in practice.
2. There are two types of methods for solving linear systems: direct methods and iterative methods. We will begin with direct methods.

**Lecture 6 (Thursday 9/18)**

## Section 3.1: Gaussian elimination for solving Ax = b

First consider the special case in which $A$ is upper triangular:

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\ a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\ &\ \,\vdots \\ a_{n-1,n-1}x_{n-1} + a_{n-1,n}x_n &= b_{n-1} \\ a_{nn}x_n &= b_n \end{aligned}$$

Then $x_n = b_n/a_{nn}$, $x_{n-1} = (b_{n-1} - a_{n-1,n}x_n)/a_{n-1,n-1}$, ..., $x_1 = (b_1 - a_{12}x_2 - \cdots - a_{1n}x_n)/a_{11}$.

back substitution:

1. $x_n = b_n/a_{nn}$
2. for $i = n-1, \ldots, 1$
3. &nbsp;&nbsp;&nbsp; $\mathrm{sum} = b_i$
4. &nbsp;&nbsp;&nbsp; for $j = i+1, \ldots, n$
5. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\mathrm{sum} = \mathrm{sum} - a_{ij}x_j$
6. &nbsp;&nbsp;&nbsp; $x_i = \mathrm{sum}/a_{ii}$

operation count: $n$ divisions; the multiplications number $\sum_{i=1}^{n-1}(n-i) = \sum_{i=1}^{n-1} i = \frac{n(n-1)}{2}$, and likewise for the additions. Hence the leading order term in the operation count for back substitution is $n^2$ for large $n$.

note: similar considerations hold if $A$ is lower triangular.

note: in the general case we use elementary row operations to reduce $Ax = b$ to an equivalent upper triangular system and then apply back substitution to find $x$:

1. multiply an equation by a nonzero number and subtract from another equation;
2. interchange two equations.

**ex ($n = 3$).**
$$\left(\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{array}\right)$$

step 1: eliminate variable $x_1$ from eqs 2 and 3. With multipliers $m_{21} = \frac{a_{21}}{a_{11}}$, $m_{31} = \frac{a_{31}}{a_{11}}$:
$$a_{22} \to a_{22} - m_{21}a_{12}, \quad a_{23} \to a_{23} - m_{21}a_{13}, \quad b_2 \to b_2 - m_{21}b_1,$$
$$a_{32} \to a_{32} - m_{31}a_{12}, \quad a_{33} \to a_{33} - m_{31}a_{13}, \quad b_3 \to b_3 - m_{31}b_1.$$

step 2: eliminate $x_2$ from eq 3 using $m_{32} = \frac{a_{32}}{a_{22}}$ (the changed elements):
$$a_{33} \to a_{33} - m_{32}a_{23}, \qquad b_3 \to b_3 - m_{32}b_2.$$
The result is an upper triangular system.
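The back substitution loop above translates almost line for line into Python. A sketch with NumPy (my translation; the data is the reduced system of the notes' tridiagonal example, whose solution is $x_1 = x_2 = x_3 = 1$):

```python
import numpy as np

def back_substitution(U, b):
    """Solve Ux = b for upper-triangular U, following the loop in the notes."""
    n = len(b)
    x = np.zeros(n)
    x[n-1] = b[n-1] / U[n-1, n-1]
    for i in range(n-2, -1, -1):
        s = b[i]
        for j in range(i+1, n):
            s -= U[i, j] * x[j]      # subtract the known terms
        x[i] = s / U[i, i]
    return x

U = np.array([[2.0, -1.0,  0.0],
              [0.0,  1.5, -1.0],
              [0.0,  0.0,  4.0/3.0]])
b = np.array([1.0, 0.5, 4.0/3.0])
x = back_substitution(U, b)
print(x)   # -> [1. 1. 1.]
```

The double loop makes the $\sim n^2$ operation count visible: row $i$ costs $n - i$ multiply-subtract pairs plus one division.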
**ex.**
$$\left(\begin{array}{ccc|c} 2 & -1 & 0 & 1 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & 1 \end{array}\right)
\xrightarrow[m_{31} = 0]{m_{21} = -\frac12}
\left(\begin{array}{ccc|c} 2 & -1 & 0 & 1 \\ 0 & \frac32 & -1 & \frac12 \\ 0 & -1 & 2 & 1 \end{array}\right)
\xrightarrow{m_{32} = -\frac23}
\left(\begin{array}{ccc|c} 2 & -1 & 0 & 1 \\ 0 & \frac32 & -1 & \frac12 \\ 0 & 0 & \frac43 & \frac43 \end{array}\right)$$

general $n \times n$ case, reduction to upper triangular form:

1. for $k = 1, \ldots, n-1$
2. &nbsp;&nbsp;&nbsp; for $i = k+1, \ldots, n$
3. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $m_{ik} = a_{ik}/a_{kk}$
4. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; for $j = k+1, \ldots, n$
5. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $a_{ij} = a_{ij} - m_{ik}\,a_{kj}$
6. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $b_i = b_i - m_{ik}\,b_k$

note: the element $a_{kk}$ in step $k$ is called a pivot. In the previous example the pivots are $2, \frac32, \frac43$; these are the diagonal elements in the last step.

operation count: the leading order term comes from line 5: $k = 1$ gives $2(n-1)^2$ ops, $k = 2$ gives $2(n-2)^2$ ops, ..., $k = n-1$ gives $2 \cdot 1^2$ ops, so in total
$$2\sum_{j=1}^{n-1} j^2 \approx \tfrac{2}{3}n^3.$$
Hence the operation count for Gaussian elimination is $\frac23 n^3$.

**Lecture 7 (Tuesday 9/23)**

note: $\sum_{k=1}^n k = \frac{n(n+1)}{2}$ and $\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$; the second formula follows by summing the identity $(k+1)^3 - k^3 = 3k^2 + 3k + 1$ over $k$ and telescoping.

**ex: electric circuit for charging a car battery** (page 129, problem 13): a 100 V DC generator and a 12 V battery with resistances 15 Ω, 4 Ω, and 10 Ω. Apply Kirchhoff's voltage law and current law to determine the currents:

1. The sum of the voltage changes around any closed loop is zero; using Ohm's law $V = IR$ this gives $4I_2 + 15I_3 = 12$ and $10I_1 + 15I_3 = 100$.
2. The sum of the currents flowing into a junction equals the sum flowing out: $I_1 + I_2 = I_3$.

$$\begin{pmatrix} 0 & 4 & 15 \\ 10 & 0 & 15 \\ 1 & 1 & -1 \end{pmatrix}\begin{pmatrix} I_1 \\ I_2 \\ I_3 \end{pmatrix} = \begin{pmatrix} 12 \\ 100 \\ 0 \end{pmatrix}$$

Hence we can't apply Gaussian elimination as described so far, because the 1st pivot is zero.

## Section 3.2: pivoting

We saw that Gaussian elimination breaks down if a pivot element is zero. There are various pivoting strategies that overcome this problem.

**partial pivoting.** Consider the reduced matrix at the beginning of step $k$ in Gaussian elimination, with entries $a_{ik}, \ldots$ in column $k$ for rows $i = k, \ldots, n$. If $a_{kk} = 0$, find an index $l$ such that $|a_{lk}| = \max\{|a_{ik}| : k \le i \le n\}$. Then interchange row $l$ and row $k$ and proceed with the elimination.
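The full reduction-plus-back-substitution procedure can be sketched as follows (Python/NumPy, my translation of the pseudocode above), checked on the worked $3\times3$ example whose solution is $(1,1,1)$:

```python
import numpy as np

def gauss_elim(A, b):
    """Reduction to upper triangular form + back substitution (no pivoting),
    following the loop structure in the notes."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier m_ik
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)                         # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
b = np.array([1.0, 0.0, 1.0])
print(gauss_elim(A, b))   # -> [1. 1. 1.]
```

Note that this sketch divides by $a_{kk}$ unconditionally, so it breaks down on a zero pivot, which is exactly what the pivoting discussion addresses.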
note:

1. If $A$ is invertible, then Gaussian elimination with partial pivoting does not break down (pf: Math 571).
2. Interchanging the rows can be done implicitly, using an index array, to avoid the expense of moving the matrix elements in memory.
3. Other strategies are scaled partial pivoting and complete pivoting, but we won't consider these in detail.
4. In practice, pivoting is often applied even if the pivot element is nonzero.

**ex.** For small $\epsilon > 0$:
$$\left(\begin{array}{cc|c} \epsilon & 1 & 1 \\ 1 & 1 & 2 \end{array}\right) \rightarrow \left(\begin{array}{cc|c} \epsilon & 1 & 1 \\ 0 & 1 - \frac{1}{\epsilon} & 2 - \frac{1}{\epsilon} \end{array}\right), \qquad m_{21} = \frac{1}{\epsilon};$$
exact solution $x_1 = \frac{1}{1-\epsilon} \approx 1$, $x_2 = \frac{1-2\epsilon}{1-\epsilon} \approx 1$.

Now consider the effect of roundoff error: in floating point, $1 - \frac1\epsilon$ and $2 - \frac1\epsilon$ both round to $-\frac1\epsilon$, so the computed solution is $x_2 = 1$, $x_1 = 0$: inaccurate.

Now apply pivoting in the presence of roundoff error, interchanging the rows first:
$$\left(\begin{array}{cc|c} 1 & 1 & 2 \\ \epsilon & 1 & 1 \end{array}\right) \rightarrow \left(\begin{array}{cc|c} 1 & 1 & 2 \\ 0 & 1 - \epsilon & 1 - 2\epsilon \end{array}\right), \qquad m_{21} = \epsilon,$$
and the new computed solution $x_2 \approx 1$, $x_1 \approx 1$ is accurate. We'll explain this soon, but first we need a way to measure the size of a vector.

**Lecture 8 (Thursday 9/25)**

## Section 3.3: vector and matrix norms

A vector norm is a function $\|\cdot\|$ satisfying the following properties:

1. $\|x\| \ge 0$, and $\|x\| = 0 \iff x = 0$
2. $\|\alpha x\| = |\alpha|\,\|x\|$
3. $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality)

ex: $\|x\|_2 = \left(\sum_{i=1}^n x_i^2\right)^{1/2} = (x^Tx)^{1/2}$ (Euclidean length); $\|x\|_\infty = \max\{|x_i| : i = 1, \ldots, n\}$.

ex: $x = (1, -2)^T$: $\|x\|_2 = \sqrt 5$, $\|x\|_\infty = 2$.

note: given a vector norm, we can define a matrix norm by $\|A\| = \max_{x \ne 0} \dfrac{\|Ax\|}{\|x\|}$ (the largest possible ratio). The following properties are satisfied:

1. $\|A\| \ge 0$, and $\|A\| = 0 \iff A = 0$
2. $\|\alpha A\| = |\alpha|\,\|A\|$
3. $\|A + B\| \le \|A\| + \|B\|$
4. $\|Ax\| \le \|A\|\,\|x\|$
5. $\|AB\| \le \|A\|\,\|B\|$

**thm.** $\|A\|_\infty = \max_i \sum_j |a_{ij}|$ (max row sum). (A $2\times2$ example worked in lecture checks ratios $\|Ax\|_\infty/\|x\|_\infty$ for several trial vectors and finds that the maximum equals the max row sum.)

**thm.** $\|A\|_2 = \max\{\sqrt{\lambda} : \lambda$ is an eigenvalue of $A^TA\}$.

ex: $A = \begin{pmatrix} 1 & 2 \\ 0 & 2 \end{pmatrix}$: $A^TA = \begin{pmatrix} 1 & 2 \\ 2 & 8 \end{pmatrix}$, and $\|A\|_2 = 2.9208$ (Matlab).

## Section 3.4: error analysis

$Ax = b$, $A$ invertible; $x = A^{-1}b$ exact solution; $\tilde x$ approximate solution; $e = x - \tilde x$ error; $r = b - A\tilde x$ residual.

note: $Ae = r$. pf: $Ae = A(x - \tilde x) = Ax - A\tilde x = b - A\tilde x = r$. $\square$

Then $e = 0$ if and only if $r = 0$. However, if $\|r\|$ is small, there's no guarantee that $\|e\|$ is also small.

ex: $A = \begin{pmatrix} 1.01 & 0.99 \\ 0.99 & 1.01 \end{pmatrix}$, $b = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$, exact solution $x = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

1. $\tilde x = (1.01, 1.01)^T$: $e = (-0.01, -0.01)^T$, $\|e\|_\infty = 0.01$; $r = (-0.02, -0.02)^T$, $\|r\|_\infty = 0.02$.
2. $\tilde x = (2, 0)^T$: $e = (-1, 1)^T$, $\|e\|_\infty = 1$; $r = (-0.02, 0.02)^T$, $\|r\|_\infty = 0.02$.

Hence we see that $\|r\|_\infty$ is small in both cases, while $\|e\|_\infty$ is small in case 1 and 100 times larger in case 2. How large can $\|e\|$ be?

**thm.** $\dfrac{\|e\|}{\|x\|} \le \kappa(A)\,\dfrac{\|r\|}{\|b\|}$, where $\kappa(A) = \|A\|\,\|A^{-1}\|$ (condition number).
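The condition number in the theorem above is easy to compute numerically. A small Python/NumPy sketch of mine, using the $1.01/0.99$ example matrix:

```python
import numpy as np

A = np.array([[1.01, 0.99],
              [0.99, 1.01]])
Ainv = np.linalg.inv(A)

# Max row sum gives the infinity norm of a matrix:
norm_inf = lambda M: np.max(np.sum(np.abs(M), axis=1))

kappa = norm_inf(A) * norm_inf(Ainv)
print(kappa)   # approximately 100; agrees with np.linalg.cond(A, np.inf)
```

A condition number of 100 means a relative residual of $10^{-2}$ only guarantees the relative error is below $1$, matching the two cases worked above.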
ex: $A = \begin{pmatrix} 1.01 & 0.99 \\ 0.99 & 1.01 \end{pmatrix} \Rightarrow \|A\|_\infty = 2$, and
$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = \frac{1}{0.04}\begin{pmatrix} 1.01 & -0.99 \\ -0.99 & 1.01 \end{pmatrix} = \begin{pmatrix} 25.25 & -24.75 \\ -24.75 & 25.25 \end{pmatrix},$$
so $\|A^{-1}\|_\infty = 50$ and $\kappa_\infty(A) = 2 \cdot 50 = 100$.

pf (of the theorem):

1. $b = Ax \Rightarrow \|b\| \le \|A\|\,\|x\| \Rightarrow \dfrac{1}{\|x\|} \le \dfrac{\|A\|}{\|b\|}$.
2. $Ae = r \Rightarrow e = A^{-1}r \Rightarrow \|e\| \le \|A^{-1}\|\,\|r\|$.

Combining 1 and 2: $\dfrac{\|e\|}{\|x\|} \le \|A\|\,\|A^{-1}\|\,\dfrac{\|r\|}{\|b\|}$. $\square$

**thm (perturbation of RHS).** $Ax = b$, $A(x + \Delta x) = b + \Delta b \Rightarrow \dfrac{\|\Delta x\|}{\|x\|} \le \kappa(A)\,\dfrac{\|\Delta b\|}{\|b\|}$.

**thm (perturbation of matrix).** $Ax = b$, $(A + \Delta A)(x + \Delta x) = b \Rightarrow \dfrac{\|\Delta x\|}{\|x + \Delta x\|} \le \kappa(A)\,\dfrac{\|\Delta A\|}{\|A\|}$.

pf: 1 follows from the previous theorem; 2: hw.

Hence the condition number controls the size of perturbations in the solution due to perturbations of the RHS and the matrix.

recall: elimination applied to $\left(\begin{array}{cc|c} \epsilon & 1 & 1 \\ 1 & 1 & 2 \end{array}\right)$ without pivoting gave an inaccurate computed solution in floating point, even though the exact solution is $x \approx (1,1)^T$.

explanation: $A = \begin{pmatrix} \epsilon & 1 \\ 1 & 1 \end{pmatrix}$, $A^{-1} = \dfrac{1}{\epsilon - 1}\begin{pmatrix} 1 & -1 \\ -1 & \epsilon \end{pmatrix}$, so $\kappa_\infty(A) \approx 4$: the matrix itself is well conditioned. However, Gaussian elimination reduces the system to upper triangular form with
$$U = \begin{pmatrix} \epsilon & 1 \\ 0 & 1 - \frac{1}{\epsilon} \end{pmatrix},$$
and $\kappa_\infty(U)$ grows without bound as $\epsilon \to 0$, i.e. it can be much larger than $\kappa_\infty(A)$. Hence a small change in the matrix or RHS of the reduced system, e.g. due to roundoff error, can produce a large change in the computed solution, as in the previous example. This means that Gaussian elimination without pivoting is an unstable method for solving $Ax = b$.

Pivoting, however, produces a different reduced system:
$$\tilde U = \begin{pmatrix} 1 & 1 \\ 0 & 1 - \epsilon \end{pmatrix}, \qquad \kappa_\infty(\tilde U) \approx \kappa_\infty(A).$$
In fact, Gaussian elimination + partial pivoting + IEEE arithmetic is stable in most cases.
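The instability just described is reproducible on a machine. A deliberately minimal $2\times2$ elimination in Python (my own demo; `eps` is chosen tiny so that $1 - \frac1\epsilon$ rounds to $-\frac1\epsilon$ in double precision):

```python
import numpy as np

def solve2x2(A, b, pivot):
    """2x2 Gaussian elimination, optionally with partial pivoting (demo only)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    if pivot and abs(A[1, 0]) > abs(A[0, 0]):
        A[[0, 1]] = A[[1, 0]]          # swap rows of A ...
        b[[0, 1]] = b[[1, 0]]          # ... and of b
    m = A[1, 0] / A[0, 0]              # multiplier
    A[1, 1] -= m * A[0, 1]
    b[1] -= m * b[0]
    x2 = b[1] / A[1, 1]
    x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
    return np.array([x1, x2])

eps = 1e-20
A = np.array([[eps, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])
print(solve2x2(A, b, pivot=False))   # inaccurate: x1 comes out 0
print(solve2x2(A, b, pivot=True))    # accurate: close to (1, 1)
```

Without pivoting the huge multiplier $1/\epsilon$ wipes out the information in row 2; with pivoting the multiplier is $\epsilon$ and nothing is lost.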
## Section 3.5: LU factorization (matrix form of Gaussian elimination)

We consider the $3\times3$ case, but the general $n\times n$ case is similar. Start from
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$

step 1: eliminate $x_1$ from eqs 2 and 3, with $m_{21} = \frac{a_{21}}{a_{11}}$, $m_{31} = \frac{a_{31}}{a_{11}}$:
$$E_1 A = \begin{pmatrix} 1 & 0 & 0 \\ -m_{21} & 1 & 0 \\ -m_{31} & 0 & 1 \end{pmatrix} A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22}' & a_{23}' \\ 0 & a_{32}' & a_{33}' \end{pmatrix}.$$

step 2: eliminate $x_2$ from eq 3, with $m_{32} = \frac{a_{32}'}{a_{22}'}$:
$$E_2 E_1 A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -m_{32} & 1 \end{pmatrix} E_1 A = U \ \text{(upper triangular)} \quad \Rightarrow \quad A = E_1^{-1}E_2^{-1}U.$$

$$E_1^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ m_{21} & 1 & 0 \\ m_{31} & 0 & 1 \end{pmatrix}, \quad E_2^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & m_{32} & 1 \end{pmatrix} \ \text{(check: } E_1E_1^{-1} = E_2E_2^{-1} = I\text{)},$$
$$E_1^{-1}E_2^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ m_{21} & 1 & 0 \\ m_{31} & m_{32} & 1 \end{pmatrix} = L \ \text{(lower triangular)}.$$

final result: $A = LU$.

**Lecture 10 (Thursday 10/2)**

ex: $A = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}$: $m_{21} = -\frac12$, $m_{31} = 0$, $m_{32} = -\frac23$, so
$$L = \begin{pmatrix} 1 & 0 & 0 \\ -\frac12 & 1 & 0 \\ 0 & -\frac23 & 1 \end{pmatrix}, \quad U = \begin{pmatrix} 2 & -1 & 0 \\ 0 & \frac32 & -1 \\ 0 & 0 & \frac43 \end{pmatrix}, \quad \text{check: } LU = A. \ \text{ok}$$

note: to solve $Ax = b$: (1) factor $A = LU$; (2) solve $Ly = b$ by forward substitution; (3) solve $Ux = y$ by back substitution. check: $Ax = LUx = Ly = b$.

ex: same $A$, $b = (1, 0, 1)^T$. Using Gaussian elimination before, we found $x_1 = x_2 = x_3 = 1$; now we'll use the LU factorization:
$$Ly = b \Rightarrow y = \left(1, \tfrac12, \tfrac43\right)^T; \qquad Ux = y \Rightarrow x_3 = 1,\ x_2 = 1,\ x_1 = 1.$$

question: so what's the point of LU factorization?
answer: some applications require solving $Ax = b$ for a given matrix $A$ and a sequence of different vectors $b$, e.g. in a time-dependent problem. Once the LU factorization of $A$ is known, we can apply forward and back substitution to the sequence of vectors $b$; it's not necessary to repeat the LU factorization.

**LU factorization and pivoting.** To perform partial pivoting we need to interchange rows, and this is done here using an elementary permutation matrix. The final result is that instead of $A = LU$ we obtain $PA = LU$, where $P$ is a permutation matrix.

ex:
$$\begin{pmatrix} 0 & 4 & 15 \\ 10 & 0 & 15 \\ 1 & 1 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 12 \\ 100 \\ 0 \end{pmatrix}$$

We want to interchange rows 1 and 2:
$$P = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad PA = \begin{pmatrix} 10 & 0 & 15 \\ 0 & 4 & 15 \\ 1 & 1 & -1 \end{pmatrix}.$$
Eliminating with $m_{21} = 0$, $m_{31} = \frac{1}{10}$, then $m_{32} = \frac14$:
$$\begin{pmatrix} 10 & 0 & 15 \\ 0 & 4 & 15 \\ 1 & 1 & -1 \end{pmatrix} \rightarrow \begin{pmatrix} 10 & 0 & 15 \\ 0 & 4 & 15 \\ 0 & 1 & -2.5 \end{pmatrix} \rightarrow \begin{pmatrix} 10 & 0 & 15 \\ 0 & 4 & 15 \\ 0 & 0 & -6.25 \end{pmatrix} = U, \quad \text{check: } PA = LU. \ \text{ok}$$

Then $Ax = b \Rightarrow PAx = Pb \Rightarrow LUx = Pb$, and we can apply forward and back substitution to find $x$:
$$Ly = Pb: \ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0.1 & 0.25 & 1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} 100 \\ 12 \\ 0 \end{pmatrix} \Rightarrow y = (100, 12, -13)^T;$$
$$Ux = y \Rightarrow x_3 = 2.08, \quad x_2 = -4.80, \quad x_1 = 6.88.$$

note: if pivoting is required in more than one step, we proceed as follows: $E_2P_2E_1P_1A = U$, and it can be shown that $P_2E_1 = \tilde E_1 P_2$ (see review sheet), so $E_2\tilde E_1 P_2P_1 A = U$, i.e. $PA = LU$ with $P = P_2P_1$ and $L = \tilde E_1^{-1}E_2^{-1}$.
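The $PA = LU$ construction above can be sketched compactly. This Python/NumPy version is a teaching sketch of mine (not `scipy.linalg.lu`), applied to the circuit system, and it uses `np.linalg.solve` as a stand-in for the forward/back substitution steps:

```python
import numpy as np

def lu_pivot(A):
    """PA = LU with partial pivoting; returns (P, L, U)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # pivot row for column k
        A[[k, p]] = A[[p, k]]
        P[[k, p]] = P[[p, k]]
        L[[k, p], :k] = L[[p, k], :k]         # swap already-stored multipliers too
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]       # multiplier m_ik
            A[i, k:] -= L[i, k] * A[k, k:]
    return P, L, A                             # A has been reduced to U

A = np.array([[ 0.0, 4.0, 15.0],
              [10.0, 0.0, 15.0],
              [ 1.0, 1.0, -1.0]])
b = np.array([12.0, 100.0, 0.0])
P, L, U = lu_pivot(A)
y = np.linalg.solve(L, P @ b)   # forward substitution step
x = np.linalg.solve(U, y)       # back substitution step
print(x)                        # approximately (6.88, -4.80, 2.08)
```

Once `P, L, U` are in hand, each new right-hand side costs only the two triangular solves, which is the point of the factorization.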
**Lecture 11 (Tuesday 10/7)**

## Section 8.1: two-point boundary value problem

Find $y(x)$ on $0 \le x \le 1$ satisfying the differential equation
$$-y'' + q(x)\,y = r(x)$$
subject to boundary conditions $y(0) = \alpha$, $y(1) = \beta$, where $q(x)$, $r(x)$ are given. The equation models, for example, a steady-state convection-reaction-diffusion system, where $y(x)$ represents a velocity or temperature profile.

finite difference scheme:

1. Choose an integer $n \ge 1$ and define $h = \frac{1}{n+1}$ (mesh size); set $x_i = ih$ for $i = 0, 1, \ldots, n+1$ (mesh points). note: $x_0 = 0$, $x_{n+1} = 1$.
2. $y_i = y(x_i)$: exact solution values. Recall the difference operators
$$D_+y_i = \frac{y_{i+1} - y_i}{h}, \qquad D_-y_i = \frac{y_i - y_{i-1}}{h}, \qquad D_+D_-y_i = \frac{y_{i+1} - 2y_i + y_{i-1}}{h^2}.$$

question: how accurate is the approximation $y''(x_i) \approx D_+D_-y_i$? Expand in a Taylor series about $x_i$:
$$y_{i\pm1} = y_i \pm hy_i' + \frac{h^2}{2}y_i'' \pm \frac{h^3}{6}y_i''' + \frac{h^4}{24}y_i'''' \pm \frac{h^5}{120}y_i''''' + O(h^6)$$
$$\Rightarrow \quad D_+D_-y_i = \underbrace{y_i''}_{\text{exact value}} + \underbrace{\frac{h^2}{12}y_i'''' + O(h^4)}_{\text{discretization error}}: \quad \text{2nd order accurate.}$$

$w_i$: numerical solution, $w_i \approx y_i$; $w_0 = \alpha$, $w_{n+1} = \beta$;
$$-\frac{w_{i+1} - 2w_i + w_{i-1}}{h^2} + q_iw_i = r_i \quad \Rightarrow \quad -w_{i-1} + (2 + q_ih^2)w_i - w_{i+1} = h^2r_i, \quad i = 1, \ldots, n,$$
with $i = 1$: $(2 + q_1h^2)w_1 - w_2 = h^2r_1 + \alpha$, and $i = n$: $-w_{n-1} + (2 + q_nh^2)w_n = h^2r_n + \beta$. In matrix form:
$$\begin{pmatrix} 2 + q_1h^2 & -1 & & \\ -1 & 2 + q_2h^2 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 2 + q_nh^2 \end{pmatrix}\begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix} = \begin{pmatrix} h^2r_1 + \alpha \\ h^2r_2 \\ \vdots \\ h^2r_n + \beta \end{pmatrix}$$

$A_h w_h = r_h$, with $A_h$ tridiagonal and symmetric.

questions:

1. Is $A_h$ invertible for all $h$, $q(x)$, $r(x)$?
2. Can $w_h$ be computed efficiently?
3. Does $w_h \to y_h$ as $h \to 0$, i.e. does the scheme converge? If so, what is the order of accuracy?

**LU factorization for a tridiagonal system (Thomas algorithm).** Write the tridiagonal matrix with subdiagonal $a_k$, diagonal $b_k$, superdiagonal $c_k$ as
$$\begin{pmatrix} b_1 & c_1 & & \\ a_2 & b_2 & c_2 & \\ & \ddots & \ddots & \ddots \\ & & a_n & b_n \end{pmatrix} = \begin{pmatrix} 1 & & & \\ l_2 & 1 & & \\ & \ddots & \ddots & \\ & & l_n & 1 \end{pmatrix}\begin{pmatrix} u_1 & c_1 & & \\ & u_2 & c_2 & \\ & & \ddots & \ddots \\ & & & u_n \end{pmatrix}.$$

find $L$, $U$: $u_1 = b_1$; for $k = 2, \ldots, n$: $l_ku_{k-1} = a_k \Rightarrow l_k = a_k/u_{k-1}$, and $l_kc_{k-1} + u_k = b_k \Rightarrow u_k = b_k - l_kc_{k-1}$.
solve $Lz = r$: $z_1 = r_1$; $z_k = r_k - l_kz_{k-1}$ for $k = 2, \ldots, n$.
solve $Uw = z$: $w_n = z_n/u_n$; $w_k = (z_k - c_kw_{k+1})/u_k$ for $k = n-1, \ldots, 1$.

note: operation count $O(n)$, memory $O(n)$ if vectors are used instead of full matrices.
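The Thomas algorithm recurrences translate directly into code. A Python/NumPy sketch of mine, checked on the constant-coefficient tridiagonal matrix from the scheme above (with $q = 0$) against a dense multiply:

```python
import numpy as np

def thomas(a, b, c, r):
    """Tridiagonal solve (Thomas algorithm): a = sub-, b = main, c = super-diagonal.
    a[0] and c[-1] are unused. O(n) operations and memory."""
    n = len(b)
    u = np.empty(n); l = np.empty(n); z = np.empty(n); w = np.empty(n)
    u[0] = b[0]
    z[0] = r[0]
    for k in range(1, n):              # LU factorization + forward substitution
        l[k] = a[k] / u[k-1]
        u[k] = b[k] - l[k] * c[k-1]
        z[k] = r[k] - l[k] * z[k-1]
    w[n-1] = z[n-1] / u[n-1]
    for k in range(n-2, -1, -1):       # back substitution
        w[k] = (z[k] - c[k] * w[k+1]) / u[k]
    return w

# The -y'' discretization matrix (q = 0) with n = 5 unknowns:
n = 5
a = -np.ones(n); b = 2.0 * np.ones(n); c = -np.ones(n)
r = np.ones(n)
w = thomas(a, b, c, r)

A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ w, r))   # True
```

Compared with dense Gaussian elimination at $\frac23 n^3$ operations, the single pass here is what makes fine meshes affordable for the BVP.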
**ex (2-point BVP with boundary layers).** $\epsilon y'' = y - (2x - 1)$, $0 < x < 1$, $y(0) = 0$, $y(1) = 0$, $\epsilon = 10^{-3}$, with solution
$$y(x) = 2x - 1 + \frac{\sinh\bigl((1 - x)/\sqrt{\epsilon}\bigr) - \sinh\bigl(x/\sqrt{\epsilon}\bigr)}{\sinh\bigl(1/\sqrt{\epsilon}\bigr)} \quad \text{(check)}.$$

*(Figure: computed solutions $w_h$ for $h = 1/2, 1/4, 1/8, 1/16$, plotted against $x$ on $[0, 1]$.)*

| $h$ | $\Vert y_h - w_h\Vert_\infty$ | $\Vert y_h - w_h\Vert_\infty / h$ | $\Vert y_h - w_h\Vert_\infty / h^2$ | $\Vert y_h - w_h\Vert_\infty / h^3$ |
| --- | --- | --- | --- | --- |
| 0.50000000 | 0.015872 | 0.031744 | 0.063489 | 0.126979 |
| 0.25000000 | 0.045420 | 0.181682 | 0.726730 | 2.906920 |
| 0.12500000 | 0.113164 | 0.905315 | 7.242520 | 57.94016 |
| 0.06250000 | 0.107705 | 1.723287 | 27.572593 | 441.1615 |
| 0.03125000 | 0.041333 | 1.322659 | 42.325086 | 1354.402 |
| 0.01562500 | 0.010982 | 0.702852 | 44.982586 | 2878.885 |
| 0.00781250 | 0.002790 | 0.357233 | 45.725862 | 5852.910 |
| 0.00390625 | 0.000701 | 0.179364 | 45.917351 | 11754.84 |

note:

1. For $0.125 \le h \le 0.5$ the error increases as $h$ decreases. This is due to the presence of boundary layers at $x = 0, 1$.
2. For $h \le 0.03125$, decreasing $h$ by $\frac12$ causes the error to decrease by $\frac14$.
3. We see that $\Vert y_h - w_h\Vert_\infty = O(h^2)$, so the method is 2nd order accurate.

**Lecture 12 (Thursday 10/9)**

review of two important properties of norms. First recall
$$\|x\|_\infty = \max_i |x_i|, \qquad \|A\|_\infty = \max_{x \ne 0}\frac{\|Ax\|_\infty}{\|x\|_\infty} = \max_i \sum_j |a_{ij}|.$$

property 1: $\|Ax\|_\infty \le \|A\|_\infty\,\|x\|_\infty$.
property 2: $\|AB\|_\infty \le \|A\|_\infty\,\|B\|_\infty$.

pf:

1. Each component of $Ax$ satisfies
$$|(Ax)_i| = \Bigl|\sum_j a_{ij}x_j\Bigr| \le \sum_j |a_{ij}|\,|x_j| \le \|x\|_\infty \sum_j |a_{ij}| \le \|A\|_\infty\,\|x\|_\infty.$$
physical meaning of property 1: if $x$ is the input to a system and $Ax$ is the output, then the norm of the output is bounded by $\|A\|_\infty$ times the norm of the input.
2. $\|AB\|_\infty = \max_{x \ne 0}\dfrac{\|ABx\|_\infty}{\|x\|_\infty} \le \max_{x \ne 0}\dfrac{\|A\|_\infty\|Bx\|_\infty}{\|x\|_\infty} \le \|A\|_\infty\,\|B\|_\infty$, by property 1 applied twice.

note: it follows that $\|A^2\|_\infty \le \|A\|_\infty^2$ and $\|A^k\|_\infty \le \|A\|_\infty^k$ for any $k \ge 1$.

## Section 3.8: iterative methods

Gaussian elimination is an example of a direct method for solving $Ax = b$, in the sense that the exact solution is obtained after a finite number of steps. In practice the $O(n^3)$ operation count is a serious obstacle when $n$ is large, and storage can be an issue too. Here we consider an alternative class of methods, called iterative methods, which generate a sequence of approximate solutions $x^{(k)}$ such that $\lim_{k\to\infty} x^{(k)} = x$. As we shall see, iterative methods have some advantages over direct methods.

$$Ax = b \iff x = Bx + c \ \text{(equivalent linear system)}; \qquad x^{(k+1)} = Bx^{(k)} + c \ \text{(fixed point iteration, $B$ iteration matrix)}.$$

**Jacobi method.** $A = L + D + U$ splitting: $D = \mathrm{diag}(a_{11}, \ldots, a_{nn})$ (assume $a_{ii} \ne 0$, $i = 1, \ldots, n$), with $L$ the strictly lower triangular part (entries $a_{21}, \ldots, a_{n,n-1}$) and $U$ the strictly upper triangular part (entries $a_{12}, \ldots, a_{n-1,n}$). Then
$$Ax = b \iff (L + D + U)x = b \iff Dx = -(L + U)x + b, \ \text{i.e.}$$
$$x = -D^{-1}(L + U)x + D^{-1}b,$$
so $B_J = -D^{-1}(L + U)$, $c = D^{-1}b$, and the iteration is $Dx^{(k+1)} = -(L + U)x^{(k)} + b$.

component form ($3\times3$):
$$\begin{aligned} a_{11}x_1^{(k+1)} &= -(a_{12}x_2^{(k)} + a_{13}x_3^{(k)}) + b_1 \\ a_{22}x_2^{(k+1)} &= -(a_{21}x_1^{(k)} + a_{23}x_3^{(k)}) + b_2 \\ a_{33}x_3^{(k+1)} &= -(a_{31}x_1^{(k)} + a_{32}x_2^{(k)}) + b_3 \end{aligned}$$

**Lecture 13 (Tuesday 10/14)**

ex: $2x_1 - x_2 = 1$, $-x_1 + 2x_2 = 1$ $\Rightarrow$ $2x_1^{(k+1)} = x_2^{(k)} + 1$, $2x_2^{(k+1)} = x_1^{(k)} + 1$. The exact solution is $x_1 = x_2 = 1$. Let $x^{(0)} = (0,0)^T$ be the initial guess:

| $k$ | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| $x_1^{(k)}, x_2^{(k)}$ | $0,\ 0$ | $\frac12,\ \frac12$ | $\frac34,\ \frac34$ | $\frac78,\ \frac78$ |

Hence the numerical solution converges to the exact solution as $k \to \infty$. With $e_k = x - x^{(k)}$ (error at step $k$): $\Vert e_0\Vert_\infty = 1$, $\Vert e_1\Vert_\infty = \frac12$, $\Vert e_2\Vert_\infty = \frac14$, $\Vert e_3\Vert_\infty = \frac18$, so $\Vert e_{k+1}\Vert_\infty = \frac12\Vert e_k\Vert_\infty$.

**thm.** Consider a fixed point iteration of the form $x^{(k+1)} = Bx^{(k)} + c$. If $\|B\| < 1$ for some matrix norm, then $x^{(k)} \to x$ for any initial guess $x^{(0)}$.

pf: $e_{k+1} = x - x^{(k+1)} = (Bx + c) - (Bx^{(k)} + c) = Be_k$, so $e_k = B^ke_0$ and
$$\|e_k\| = \|B^ke_0\| \le \|B^k\|\,\|e_0\| \le \|B\|^k\|e_0\| \to 0 \ \text{as} \ k \to \infty. \ \square$$

important: $e_{k+1} = Be_k \Rightarrow \|e_{k+1}\| \le \|B\|\,\|e_k\|$.

ex (continued): $B_J = -D^{-1}(L + U) = \begin{pmatrix} 0 & \frac12 \\ \frac12 & 0 \end{pmatrix} \Rightarrow \|B_J\|_\infty = \frac12$.

1. Since $\|B_J\|_\infty < 1$, the theorem implies that Jacobi's method converges.
2. The error norm decreases by a factor of $\|B_J\|_\infty = \frac12$ in each step.

**Gauss-Seidel method.** $A = L + D + U$ as before.
$$Ax = b \iff (L + D)x = -Ux + b \iff x = -(L + D)^{-1}Ux + (L + D)^{-1}b,$$
so $B_{GS} = -(L + D)^{-1}U$, and the iteration $(L + D)x^{(k+1)} = -Ux^{(k)} + b$ is solved by forward substitution.

component form ($3\times3$):
$$\begin{aligned} a_{11}x_1^{(k+1)} &= -(a_{12}x_2^{(k)} + a_{13}x_3^{(k)}) + b_1 \\ a_{22}x_2^{(k+1)} &= -(a_{21}x_1^{(k+1)} + a_{23}x_3^{(k)}) + b_2 \\ a_{33}x_3^{(k+1)} &= -(a_{31}x_1^{(k+1)} + a_{32}x_2^{(k+1)}) + b_3 \end{aligned}$$

Hence $x_i^{(k+1)}$ is used as soon as it's computed, in contrast with Jacobi.

ex: $2x_1^{(k+1)} = x_2^{(k)} + 1$, $2x_2^{(k+1)} = x_1^{(k+1)} + 1$:

| $k$ | 0 | 1 | 2 | 3 |
| --- | --- | --- | --- | --- |
| $x_1^{(k)}, x_2^{(k)}$ | $0,\ 0$ | $\frac12,\ \frac34$ | $\frac78,\ \frac{15}{16}$ | $\frac{31}{32},\ \frac{63}{64}$ |

Hence Gauss-Seidel converges faster than Jacobi: $\Vert e_0\Vert_\infty = 1$, $\Vert e_1\Vert_\infty = \frac12$, $\Vert e_2\Vert_\infty = \frac18$, $\Vert e_3\Vert_\infty = \frac{1}{32}$, so $\Vert e_{k+1}\Vert_\infty = \frac14\Vert e_k\Vert_\infty$.

$$A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} \Rightarrow B_{GS} = -(L + D)^{-1}U = \begin{pmatrix} 0 & \frac12 \\ 0 & \frac14 \end{pmatrix} \Rightarrow \|B_{GS}\|_\infty = \frac12.$$

1. Since $\|B_{GS}\|_\infty < 1$, the theorem implies that Gauss-Seidel converges.
2. The error norm decreases by a factor of $\frac14$ in each step, which is less than $\|B_{GS}\|_\infty = \frac12$.
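The two sweeps can be compared side by side on the running example. A Python/NumPy sketch of mine (the only difference between the methods is whether the freshly computed $x_1$ is used for $x_2$):

```python
import numpy as np

# System: 2*x1 - x2 = 1, -x1 + 2*x2 = 1; exact solution (1, 1).
b = np.array([1.0, 1.0])

def jacobi_step(x):
    """One Jacobi sweep: every component uses the old iterate."""
    return np.array([(b[0] + x[1]) / 2.0,
                     (b[1] + x[0]) / 2.0])

def gauss_seidel_step(x):
    """One Gauss-Seidel sweep: new components are used as soon as computed."""
    x = x.copy()
    x[0] = (b[0] + x[1]) / 2.0
    x[1] = (b[1] + x[0]) / 2.0   # uses the updated x[0]
    return x

xj = np.zeros(2)
xg = np.zeros(2)
for k in range(30):
    xj = jacobi_step(xj)
    xg = gauss_seidel_step(xg)
print(xj, xg)   # both approach the exact solution (1, 1)
```

After 30 sweeps the Jacobi error is about $2^{-30}$ while the Gauss-Seidel error is about $4^{-30}$, matching the factors $\frac12$ and $\frac14$ derived above.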
**Lecture 14 (Thursday 10/23)**

summary: for $A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$, $b = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$:
$$B_J = \begin{pmatrix} 0 & \frac12 \\ \frac12 & 0 \end{pmatrix},\ \|B_J\|_\infty = \frac12,\ \Vert e_{k+1}\Vert_\infty = \tfrac12\Vert e_k\Vert_\infty; \qquad B_{GS} = \begin{pmatrix} 0 & \frac12 \\ 0 & \frac14 \end{pmatrix},\ \|B_{GS}\|_\infty = \frac12,\ \Vert e_{k+1}\Vert_\infty = \tfrac14\Vert e_k\Vert_\infty.$$

We see that $\Vert e_{k+1}\Vert_\infty \le \|B\|_\infty\Vert e_k\Vert_\infty$ in both cases, as required by the previous theorem, but the bound is not sharp in the case of GS. To explain the actual behavior of the error we need to examine the eigenvalues of the iteration matrix.

def: if $Ax = \lambda x$, where $x \ne 0$ is a vector (real or complex) and $\lambda$ is a scalar (real or complex), then $\lambda$ is an eigenvalue of $A$ and $x$ is a corresponding eigenvector.

ex: $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ (permutation matrix): $A\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, so $\lambda = 1$ is an eigenvalue with corresponding eigenvector $x = (1,1)^T$; similarly $A\begin{pmatrix} 1 \\ -1 \end{pmatrix} = -\begin{pmatrix} 1 \\ -1 \end{pmatrix}$, so $\lambda = -1$ is an eigenvalue with eigenvector $(1,-1)^T$.

note: $Ax = \lambda x$, $x \ne 0 \iff (A - \lambda I)x = 0$, $x \ne 0 \iff \det(A - \lambda I) = 0$. $f_A(\lambda) = \det(A - \lambda I)$ is the characteristic polynomial of $A$; the eigenvalues of $A$ are the roots of the characteristic polynomial.

ex: for the permutation matrix above, $f_A(\lambda) = \det\begin{pmatrix} -\lambda & 1 \\ 1 & -\lambda \end{pmatrix} = \lambda^2 - 1 = 0 \Rightarrow \lambda = \pm 1$.

**thm.** If $A$ is upper triangular, then the eigenvalues of $A$ are the diagonal elements. pf: $f_A(\lambda) = \det(A - \lambda I) = (a_{11} - \lambda)\cdots(a_{nn} - \lambda) = 0 \Rightarrow \lambda = a_{ii}$ for some $i$. $\square$

note: now we can explain why $\Vert e_{k+1}\Vert_\infty = \frac14\Vert e_k\Vert_\infty$ when Gauss-Seidel is applied to $A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$. In this case $B_{GS} = \begin{pmatrix} 0 & \frac12 \\ 0 & \frac14 \end{pmatrix}$ is upper triangular, so its eigenvalues are $\lambda_1 = 0$, with eigenvector $v_1 = (1, 0)^T$ (check), and $\lambda_2 = \frac14$, with eigenvector $v_2 = (2, 1)^T$ (check). Writing $e_0 = \beta_1v_1 + \beta_2v_2$:
$$e_1 = Be_0 = \beta_1\lambda_1v_1 + \beta_2\lambda_2v_2 = \tfrac14\beta_2v_2, \quad e_2 = Be_1 = \tfrac{1}{16}\beta_2v_2, \quad \ldots, \quad e_k = \left(\tfrac14\right)^k\beta_2v_2 \ (k \ge 1),$$
so $\Vert e_{k+1}\Vert_\infty = \frac14\Vert e_k\Vert_\infty$.

question: what determines the rate of convergence of an iterative method?

def: $\rho(B) = \max\{|\lambda| : \lambda$ is an eigenvalue of $B\}$ (spectral radius of $B$).

**thm.**

1. $\Vert e_{k+1}\Vert_\infty \le \|B\|_\infty\Vert e_k\Vert_\infty$ for all $k \ge 0$ (error bound);
2. $\Vert e_{k+1}\Vert_\infty \sim \rho(B)\Vert e_k\Vert_\infty$ as $k \to \infty$ (asymptotic relation), which means that $\lim_{k\to\infty}\dfrac{\Vert e_{k+1}\Vert_\infty}{\Vert e_k\Vert_\infty} = \rho(B)$.

Hence the spectral radius of the iteration matrix, $\rho(B)$, determines the rate of convergence of an iterative method. pf: 1: done; 2: Math 571, but the idea is the same as in the example above.

recall: $A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$: $B_J = \begin{pmatrix} 0 & \frac12 \\ \frac12 & 0 \end{pmatrix} \Rightarrow \rho(B_J) = \frac12$; $B_{GS} = \begin{pmatrix} 0 & \frac12 \\ 0 & \frac14 \end{pmatrix} \Rightarrow \rho(B_{GS}) = \frac14$.

**Lecture 15 (Tuesday 10/28)**

question: can we design faster methods? (Jacobi 1804-1851; Gauss 1777-1855; Seidel 1821-1896; Richardson 1881-1953, numerical weather forecasting.)

$Ax = b$, $A = L + D + U$. Recall the Gauss-Seidel method:
$$(L + D)x^{(k+1)} = -Ux^{(k)} + b \iff Dx^{(k+1)} = Dx^{(k)} - \bigl(Lx^{(k+1)} + Dx^{(k)} + Ux^{(k)} - b\bigr).$$
Now let $\omega$ be a free parameter and consider a modified iteration (relaxation):
$$Dx^{(k+1)} = Dx^{(k)} - \omega\bigl(Lx^{(k+1)} + Dx^{(k)} + Ux^{(k)} - b\bigr),$$
where $\omega = 1$ recovers GS.
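The relaxation iteration above can be tried immediately on the running $2\times2$ system. A Python/NumPy sketch of mine, using the optimal parameter $\omega_* = 2/(1 + \sqrt{1 - \rho(B_J)^2}) \approx 1.0718$ for this system (the formula comes from Young's theorem, discussed next):

```python
import numpy as np

omega = 2.0 / (1.0 + np.sqrt(1.0 - 0.5**2))   # optimal omega when rho(B_J) = 1/2

def sor_step(x, w):
    """One relaxation sweep for 2*x1 - x2 = 1, -x1 + 2*x2 = 1 (exact solution (1,1))."""
    x = x.copy()
    x[0] = (1 - w) * x[0] + w * (1.0 + x[1]) / 2.0   # uses old x2
    x[1] = (1 - w) * x[1] + w * (1.0 + x[0]) / 2.0   # uses new x1
    return x

x = np.zeros(2)
errs = []
for k in range(10):
    x = sor_step(x, omega)
    errs.append(np.max(np.abs(x - 1.0)))
print(omega)                  # approximately 1.0718
print(errs[-1] / errs[-2])    # error ratio; approaches omega - 1 = 0.0718 as k grows
```

Setting `w = 1.0` in `sor_step` reproduces the Gauss-Seidel error factor $\frac14$, so the speedup from over-relaxation is visible directly.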
component form ($3\times3$):
$$\begin{aligned} a_{11}x_1^{(k+1)} &= a_{11}x_1^{(k)} - \omega\bigl(a_{11}x_1^{(k)} + a_{12}x_2^{(k)} + a_{13}x_3^{(k)} - b_1\bigr) \\ a_{22}x_2^{(k+1)} &= a_{22}x_2^{(k)} - \omega\bigl(a_{21}x_1^{(k+1)} + a_{22}x_2^{(k)} + a_{23}x_3^{(k)} - b_2\bigr) \\ a_{33}x_3^{(k+1)} &= a_{33}x_3^{(k)} - \omega\bigl(a_{31}x_1^{(k+1)} + a_{32}x_2^{(k+1)} + a_{33}x_3^{(k)} - b_3\bigr) \end{aligned}$$

note:

1. $\omega > 1$ is called successive over-relaxation (SOR).
2. $(D + \omega L)x^{(k+1)} = \bigl((1 - \omega)D - \omega U\bigr)x^{(k)} + \omega b$, so $B_\omega = (D + \omega L)^{-1}\bigl((1 - \omega)D - \omega U\bigr)$.

ex: $2x_1 - x_2 = 1$, $-x_1 + 2x_2 = 1$:
$$2x_1^{(k+1)} = 2x_1^{(k)} - \omega\bigl(2x_1^{(k)} - x_2^{(k)} - 1\bigr), \qquad 2x_2^{(k+1)} = 2x_2^{(k)} - \omega\bigl(-x_1^{(k+1)} + 2x_2^{(k)} - 1\bigr).$$
matrix form:
$$\begin{pmatrix} 2 & 0 \\ -\omega & 2 \end{pmatrix}x^{(k+1)} = \begin{pmatrix} 2(1 - \omega) & \omega \\ 0 & 2(1 - \omega) \end{pmatrix}x^{(k)} + \omega\begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
note: $\omega = 1 \Rightarrow B_\omega = \begin{pmatrix} 0 & \frac12 \\ 0 & \frac14 \end{pmatrix}$ = Gauss-Seidel, $\rho(B_1) = 0.25$.

question: can we choose $\omega$ so that $\rho(B_\omega)$ is smaller?

**thm (Young, 1950).**

1. If $\rho(B_\omega) < 1$, then $0 < \omega < 2$.
2. Assume that $A$ is block tridiagonal, symmetric, and positive definite, and let
$$\omega_* = \frac{2}{1 + \sqrt{1 - \rho(B_J)^2}}.$$
Then $\omega_*$ is the optimal SOR parameter, in the sense that $\rho(B_{\omega_*}) = \omega_* - 1 \le \rho(B_\omega)$ for $0 < \omega < 2$, and $\rho(B_{\omega_*}) < \rho(B_{GS}) < \rho(B_J) < 1$.

return to the example: $\rho(B_J) = \frac12 \Rightarrow \omega_* = \dfrac{2}{1 + \sqrt{1 - \frac14}} = 1.0718$ and $\rho(B_{\omega_*}) = \omega_* - 1 = 0.0718$.

| $k$ | $x_1^{(k)}$ | $x_2^{(k)}$ | $\Vert e_k\Vert_\infty$ | $\Vert e_k\Vert_\infty/\Vert e_{k-1}\Vert_\infty$ |
| --- | --- | --- | --- | --- |
| 0 | 0.0000 | 0.0000 | 1.0000 | |
| 1 | 0.5359 | 0.8231 | 0.4641 | 0.4641 |
| 2 | 0.9385 | 0.9798 | 0.0615 | 0.1325 |
| 3 | 0.9936 | 0.9980 | 0.0064 | 0.1047 |
| 4 | 0.9994 | 0.9998 | 0.0006 | 0.0944 |

The error ratio tends to $\rho(B_{\omega_*}) = 0.0718$. Hence we see that optimal SOR converges faster than GS in this example.

## Section 3.7: positive definite matrices

def: $A$ is positive definite if $x^TAx > 0$ for all $x \ne 0$.

ex 1: $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ is positive definite:
$$x^TAx = 2x_1^2 + 2x_1x_2 + 2x_2^2 = x_1^2 + x_2^2 + (x_1 + x_2)^2 \ge 0,$$
and if $x \ne 0$ then either $x_1 \ne 0$ or $x_2 \ne 0$, but in any case we have $x^TAx > 0$.

ex 2: $A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$ is positive definite (hw).

ex 3: $A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$ is not positive definite:
$$x^TAx = x_1^2 + 4x_1x_2 + x_2^2; \quad x = (1, 0)^T \Rightarrow x^TAx = 1 > 0, \quad x = (1, -1)^T \Rightarrow x^TAx = -2 < 0 \quad \text{(indefinite)}.$$

**Lecture 16 (Thursday 10/30)**

note:

1. The matrix representing the finite difference operator $-D_+D_-$ (diagonal entries 2, off-diagonal entries $-1$) is tridiagonal, symmetric, and positive definite, and hence Young's theorem applies.
2. The real advantage of iterative methods, in comparison with direct methods, is seen for BVPs in more than one dimension.

## Section 9.1: two-dimensional BVP

A metal plate has the shape of a square. The plate is heated by internal sources, and the edges of the plate are held at a given temperature. Find the temperature at points inside the plate.
$D = \{(x, y) : 0 \le x, y \le 1\}$ plate domain; $u(x, y)$ plate temperature; $f(x, y)$ heat sources; $g(x, y)$ boundary temperature. Then $u(x, y)$ satisfies the following equations:
$$-\Delta u = f \ \text{for} \ (x, y) \ \text{in} \ D \quad \text{(Poisson equation)}, \qquad \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \ \text{(Laplace operator)};$$
$$u = g \ \text{for} \ (x, y) \ \text{on} \ \partial D \quad \text{(Dirichlet boundary condition)}.$$

note: the Poisson equation arises in many areas; e.g. if $f$ is a charge distribution, then $u$ is the electrostatic potential.

finite difference scheme: $h = \frac{1}{n+1}$ mesh size; $(x_i, y_j) = (ih, jh)$, $i, j = 0, \ldots, n+1$ mesh points. ex: $n = 3$, $h = \frac14$.

$u_{ij}$ exact solution values, $w_{ij}$ approximation: applying $D_+D_-$ in both $x$ and $y$, the truncation error is $O(h^2)$, and we get the 5-point stencil relating $w_{ij}$ to its four neighbors $w_{i\pm1,j}$, $w_{i,j\pm1}$:
$$-\frac{w_{i+1,j} - 2w_{ij} + w_{i-1,j}}{h^2} - \frac{w_{i,j+1} - 2w_{ij} + w_{i,j-1}}{h^2} = f_{ij} \iff 4w_{ij} - w_{i+1,j} - w_{i-1,j} - w_{i,j+1} - w_{i,j-1} = h^2f_{ij}.$$

Now consider what happens near the boundary, e.g. $i = j = 1$:
$$4w_{11} - w_{21} - w_{12} = h^2f_{11} + g_{01} + g_{10},$$
where the known boundary values have been moved to the right-hand side.

With $n = 3$ and the unknowns ordered $w_{11}, w_{12}, w_{13}, w_{21}, \ldots, w_{33}$, the system $A_hw_h = f_h$ has the form
$$A_h = \begin{pmatrix} T & -I & \\ -I & T & -I \\ & -I & T \end{pmatrix}, \qquad T = \begin{pmatrix} 4 & -1 & \\ -1 & 4 & -1 \\ & -1 & 4 \end{pmatrix},$$
where $T$ is $n \times n$, symmetric, tridiagonal, and $A_h$ is $n^2 \times n^2$, block tridiagonal, symmetric, and positive definite (pf omitted).

**ex: temperature distribution on a metal plate** (no heat sources): $\Delta u = u_{xx} + u_{yy} = 0$, with boundary conditions $u(x, 1) = 1 - x$ and $u = 0$ on the other edges. finite difference scheme:
$$4w_{ij} - w_{i+1,j} - w_{i-1,j} - w_{i,j+1} - w_{i,j-1} = 0.$$

*(Figure: computed temperature surfaces for $h = 1/4, 1/8, 1/16$.)*

The plots show the solution of the linear system $A_hw_h = f_h$ for the given mesh sizes. The results below show the behaviour of the iterative methods; the initial guess was the zero vector, and the stopping criterion was a fixed tolerance on $\Vert r^{(k)}\Vert_\infty/\Vert r^{(0)}\Vert_\infty$.

Jacobi:

| $h$ | $k$ | $\Vert r^{(k)}\Vert_\infty/\Vert r^{(k-1)}\Vert_\infty$ | $\rho(B)$ |
| --- | --- | --- | --- |
| 1/4 | 13 | 0.7071 | 0.7071 |
| 1/8 | 38 | 0.9238 | 0.9239 |
| 1/16 | 97 | 0.9804 | 0.9808 |

Gauss-Seidel:

| $h$ | $k$ | $\Vert r^{(k)}\Vert_\infty/\Vert r^{(k-1)}\Vert_\infty$ | $\rho(B)$ |
| --- | --- | --- | --- |
| 1/4 | 7 | 0.4997 | 0.5000 |
| 1/8 | 19 | 0.8521 | 0.8536 |
| 1/16 | 47 | 0.9600 | 0.9619 |

optimal SOR:

| $h$ | $k$ | $\Vert r^{(k)}\Vert_\infty/\Vert r^{(k-1)}\Vert_\infty$ | $\rho(B)$ |
| --- | --- | --- | --- |
| 1/4 | 5 | 0.2645 | 0.1716 |
| 1/8 | 8 | 0.5124 | 0.4465 |
| 1/16 | 11 | 0.6855 | 0.6735 |

**Lecture 17 (Tuesday 11/4)**

note:

1. For a given value of $h$, GS requires fewer iterations than J, and SOR requires fewer iterations than GS. Whichever method is used, more iterations are required as the mesh size $h \to 0$. Hence there is a tradeoff: decreasing $h$ leads to smaller truncation
error, but the computational cost is increased.

2. The ratio of successive residuals converges to the spectral radius of the iteration matrix as $k$ increases.
3. Explicit formulas can be derived for $\rho(B)$ in this example:
$$\rho(B_J) = \cos\pi h \sim 1 - \tfrac{\pi^2h^2}{2}, \qquad \rho(B_{GS}) = \cos^2\pi h \sim 1 - \pi^2h^2, \qquad \rho(B_{\omega_*}) = \frac{1 - \sin\pi h}{1 + \sin\pi h} \sim 1 - 2\pi h.$$
This reconfirms the observation above that the iteration converges more slowly as $h \to 0$, since $\rho(B) \to 1$ in this limit. SOR is least affected by this, followed by GS and then J, i.e. $\rho(B_{\omega_*}) < \rho(B_{GS}) < \rho(B_J) < 1$.
4. Now consider what happens if Gaussian elimination is used to solve $A_hw_h = f_h$:
   - (a) $A_h$ is a band matrix, i.e. $a_{ij} = 0$ for $|i - j| > m$, where $m$ is the bandwidth; in this example we have $m = 3$.
   - (b) As the elimination proceeds, zeros inside the band can become nonzero (this is called fill-in), but zeros outside the band are preserved. Hence we can adjust the limits on the loops to reduce the operation count for Gaussian elimination from $O(n^3)$ to $O(nm^2)$.
   - (c) Due to fill-in, more memory needs to be allocated than is required for the original matrix $A$. This is a disadvantage in comparison with iterative methods of the form $x^{(k+1)} = Bx^{(k)} + c$ (e.g. J, GS, SOR), which preserve the sparsity of $A$.
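The formula $\rho(B_J) = \cos\pi h$ in note 3 can be verified directly for a small mesh. A Python/NumPy sketch of mine that assembles the $n^2 \times n^2$ five-point-stencil matrix with Kronecker products and computes the Jacobi spectral radius:

```python
import numpy as np

def laplacian_2d(n):
    """n^2 x n^2 five-point-stencil matrix A_h (h = 1/(n+1)), row-wise ordering."""
    T = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    I = np.eye(n)
    # Block tridiagonal: T blocks on the diagonal, -I blocks off the diagonal.
    return (np.kron(np.eye(n), T)
            - np.kron(np.eye(n, k=1), I)
            - np.kron(np.eye(n, k=-1), I))

n = 3
h = 1.0 / (n + 1)
A = laplacian_2d(n)

# Jacobi iteration matrix: B_J = -D^{-1}(L + U) = I - D^{-1} A, with D = 4I here.
B_J = np.eye(n * n) - A / 4.0
rho = max(abs(np.linalg.eigvals(B_J)))
print(rho, np.cos(np.pi * h))   # both approximately 0.7071
```

For $h = 1/4$ this reproduces the value 0.7071 seen in the Jacobi table, and increasing `n` shows $\rho \to 1$, the slowdown described above.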
