Numerical Linear Algebra (MAT 229A)

Notes by Otilia Murray I



These 24-page class notes were uploaded by Otilia Murray I on Tuesday, September 8, 2015. They belong to MAT 229A at the University of California, Davis, taught by Staff in Fall. Since upload, they have received 55 views. For similar materials see /class/187454/mat-229a-university-of-california-davis in Mathematics (M) at the University of California, Davis.

Date Created: 09/08/15
Chapter 2: Iterative Methods for the Solution of Linear Systems

Let Ax = b, where A ∈ C^{n×n} is nonsingular and b ∈ C^n. Two types of methods:
1. Direct methods: LU and Cholesky factorization, fast elliptic solvers.
2. Iterative methods: Krylov subspace methods, multigrid.

2.1 Krylov Subspace Methods

If A is sparse with nnz potentially nonzero entries, then computing y = Ax for any x ∈ C^n is cheap: at most nnz multiplications and additions. Can we exploit this fact for the solution of Ax = b? Yes: Krylov subspace methods (CG, CR, CGNE, GMRES, QMR, BiCG, BiCGSTAB, ...).

2.2 The Conjugate Gradient (CG) Method

Assumption: A > 0, i.e., A^H = A and p^H A p > 0 for all p ∈ C^n, p ≠ 0. CG is the classical Krylov subspace method. It is an iterative method, x_0 ∈ C^n → x_1 → x_2 → ..., where x_k is the kth iterate and r_k = b − A x_k the corresponding residual vector. Note that r_k = 0 if and only if x_k = A^{−1}b is the solution of Ax = b.

Goal: construct x_k such that ||x − x_k|| is small, where ||·|| is some appropriate norm on C^n. Since A > 0, ||v||_A := sqrt(v^H A v) is a norm on C^n, called the A-norm; likewise A^{−1} > 0, and ||r||_{A^{−1}} := sqrt(r^H A^{−1} r) is a norm on C^n. Note that ||v||_A = ||Av||_{A^{−1}} for all v ∈ C^n. The CG method is based on the error norm
  ||x − x_k||_A = ||A^{−1}b − x_k||_A = ||b − A x_k||_{A^{−1}} = ||r_k||_{A^{−1}}.

Assume we have x_k ∈ C^n and want to construct x_{k+1} = x_k + α_k p_k, where p_k ∈ C^n, p_k ≠ 0, is a search direction and α_k ∈ C a step length, so that ||x − x_{k+1}||_A ≤ ||x − x_k||_A. That is, we minimize q(w), where
  q(w) = (1/2) ||x − w||_A^2 = (1/2) (x − w)^H A (x − w).
Steepest descent: p_k = −∇q(x_k) = b − A x_k = r_k. We can do better: instead of the gradient, use the conjugate gradient.

Definition: two nonzero vectors x, y ∈ C^n are conjugate w.r.t. the matrix A > 0 if x^H A y = 0.

Choice of p_k: require p_{k+1}^H A p_k = 0. This leads to p_0 = r_0 = b − A x_0 and, for k ≥ 0,
  β_{k+1} = ||r_{k+1}||_2^2 / ||r_k||_2^2,  p_{k+1} = r_{k+1} + β_{k+1} p_k.
One can show that p_j^H r_k = 0 for all j < k, and that the vectors p_k are conjugate.

Algorithm 1 (CG method).
Input: a routine to compute z = Ap for any p ∈ C^n, where A ∈ C^{n×n}, A > 0; b ∈ C^n; initial guess x_0 ∈ C^n; convergence tolerance tol > 0 (typically tol = 10^{−6}).
Output: approximate solution x_k ≈ A^{−1}b.
  Set r_0 = b − A x_0 and p_0 = r_0.
  for k = 0, 1, 2, ... do
    if ||r_k||_2 / ||r_0||_2 ≤ tol then stop: x_k ≈ A^{−1}b.
    Set z = A p_k.
    Set α_k = r_k^H r_k / (p_k^H z)  (step length; p_k^H z = p_k^H A p_k > 0).
    Set x_{k+1} = x_k + α_k p_k.  (approximate solution)
    Set r_{k+1} = r_k − α_k z  (residual; r_{k+1} = b − A x_{k+1}).
    Set β_{k+1} = ||r_{k+1}||_2^2 / ||r_k||_2^2.  (improvement this step)
    Set p_{k+1} = r_{k+1} + β_{k+1} p_k.  (search direction)
  end
Remark (about this algorithm):
1. For each k = 0, 1, 2, ...: r_k = b − A x_k.
2. Each iteration involves the following operations:
  (a) 1 matrix-vector product z = A p_k;
  (b) 2 inner products in C^n: p_k^H z and r_{k+1}^H r_{k+1};
  (c) 3 SAXPYs: x_{k+1} = x_k + α_k p_k, r_{k+1} = r_k − α_k z, p_{k+1} = r_{k+1} + β_{k+1} p_k.
3. CG is a Krylov subspace method:
  x_1 = x_0 + α_0 r_0 ∈ x_0 + span{r_0};
  x_2 = x_1 + α_1 p_1 = x_0 + α_0 r_0 + α_1 (r_1 + β_1 r_0) ∈ x_0 + span{r_0, A r_0};
and in general
  x_k ∈ x_0 + span{r_0, A r_0, A^2 r_0, ..., A^{k−1} r_0} =: x_0 + K_k(A, r_0),
where K_k(A, r_0) is the kth Krylov subspace induced by A and r_0.

2.2.1 The dimension of K_k(A, r_0)

Note that K_1(A, r_0) ⊆ K_2(A, r_0) ⊆ K_3(A, r_0) ⊆ ..., and that K_k(A, r_0) is defined for any A ∈ C^{n×n} and r_0 ∈ C^n. Recall that the minimal polynomial ψ of A is the unique monic polynomial
  ψ(λ) = γ_0 + γ_1 λ + ... + γ_{d−1} λ^{d−1} + λ^d,  γ_0, γ_1, ..., γ_{d−1} ∈ C,
of smallest degree d such that ψ(A) = γ_0 I + γ_1 A + ... + γ_{d−1} A^{d−1} + A^d = 0 ∈ C^{n×n}.

Definition:
1. The minimal polynomial ψ̂ of A with respect to r_0 is the monic polynomial of smallest degree for which ψ̂(A) r_0 = 0 ∈ C^n. One can show that ψ̂ is unique. Note that deg ψ̂ ≤ deg ψ.
2. The grade of A with respect to r_0 is defined as d(A, r_0) := deg ψ̂.

Lemma:
1. dim K_k(A, r_0) = k if k ≤ d(A, r_0), and dim K_k(A, r_0) = d(A, r_0) if k > d(A, r_0).
2. d(A, r_0) = rank [r_0, A r_0, A^2 r_0, ..., A^{n−1} r_0].
3. If A is diagonalizable, then d(A, r_0) is the number of eigenvectors in an eigendecomposition of r_0:
  r_0 = Σ_{j=1}^{d(A,r_0)} ρ_j z_j,  ρ_j ∈ C, ρ_j ≠ 0,
  A z_j = λ_j z_j,  z_j ≠ 0,  λ_i ≠ λ_j for all i ≠ j.

Back to the case A > 0.

Theorem 1:
1. In exact arithmetic, CG terminates after finitely many steps: x_k ≠ x for all k = 0, 1, ..., d(A, r_0) − 1, and x_{d(A,r_0)} = x = A^{−1}b.
2. For all k = 1, 2, ..., d(A, r_0), the iterate x_k ∈ x_0 + K_k(A, r_0) is optimal in the following sense:
  ||x − x_k||_A = min over x̃ ∈ x_0 + K_k(A, r_0) of ||x − x̃||_A.
Here ||v||_A = sqrt(v^H A v).
3. For all k = 1, 2, ..., d(A, r_0),
  ||x − x_k||_A / ||x − x_0||_A ≤ 2 ((√κ − 1)/(√κ + 1))^k,   (CG bound)
where κ = κ(A) = λ_max(A)/λ_min(A) ≥ 1, and λ_max(A), λ_min(A) > 0 are the largest and smallest eigenvalues of A, respectively. Note that κ is the condition number of A w.r.t. ||·||_2.

In some applications we would like to minimize ||b − A x_k||_2 instead of ||x − x_k||_A.
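Algorithm 1 can be sketched in NumPy as follows. This is our own illustration, not part of the notes; the dense tridiagonal test matrix (the 1D Laplacian, which is symmetric positive definite) is also our choice.

```python
import numpy as np

def cg(matvec, b, x0, tol=1e-6, maxiter=1000):
    """CG sketch (Algorithm 1): solve A x = b for Hermitian positive
    definite A, given only a routine computing z = A @ p."""
    x = x0.astype(float).copy()
    r = b - matvec(x)              # r_0 = b - A x_0
    p = r.copy()                   # p_0 = r_0
    r0_norm = np.linalg.norm(r)
    for _ in range(maxiter):
        if np.linalg.norm(r) <= tol * r0_norm:
            break
        z = matvec(p)                       # one matrix-vector product per step
        alpha = (r @ r) / (p @ z)           # step length (> 0 since A > 0)
        x = x + alpha * p
        r_new = r - alpha * z               # residual update r_{k+1} = r_k - alpha_k A p_k
        beta = (r_new @ r_new) / (r @ r)    # beta_{k+1} = ||r_{k+1}||^2 / ||r_k||^2
        p = r_new + beta * p                # new search direction
        r = r_new
    return x

# Example: the SPD matrix T_m = tridiag(-1, 2, -1).
m = 50
A = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
b = np.ones(m)
x = cg(lambda v: A @ v, b, np.zeros(m), tol=1e-10)
```

Per the remark above, each pass through the loop costs one product with A, two inner products, and three vector updates.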
2.3 The Conjugate Residual (CR) Method

The conjugate residual (CR) method is a variant of CG that generates iterates
  x_k ∈ x_0 + K_k(A, r_0),  k = 1, 2, ..., d(A, r_0),
with
  ||b − A x_k||_2 = min over x̃ ∈ x_0 + K_k(A, r_0) of ||b − A x̃||_2.
To obtain CR, simply replace the computation of two inner products in each step of CG:
  CG: r_{k+1}^H r_{k+1} and p_k^H z;  CR: r_{k+1}^H A r_{k+1} and z^H z  (where z = A p_k).
Catch: one needs to compute A r_{k+1}.

Remark:
1. CR only works for A > 0.
2. The convergence rate is
  ||r_k||_2 / ||r_0||_2 = ||b − A x_k||_2 / ||b − A x_0||_2 ≤ 2 ((√κ − 1)/(√κ + 1))^k.   (CR bound)

2.4 Preconditioned CG

2.4.1 Introduction to preconditioning

By the CG and CR bounds, we obtain fast convergence of CG or CR if κ ≥ 1 is small. Basic idea of preconditioning: find an equivalent system
  Ax = b, A > 0   →   Âx̂ = b̂, Â ∈ C^{n×n}, Â > 0, b̂ ∈ C^n,
such that 1 ≤ κ(Â) ≪ κ(A). Conflicting goals:
1. κ(Â) small;
2. z = Âp easy to compute.

Preconditioner: M ∈ C^{n×n}, M > 0, where M approximates A. Formally write M = L L^H (e.g., a Cholesky factorization), L ∈ C^{n×n}. Then
  Ax = b ⟺ (L^{−1} A L^{−H})(L^H x) = L^{−1} b ⟺ Âx̂ = b̂.
Note that Â = L^{−1} A L^{−H} > 0.

Algorithm 2 (preconditioned CG).
  Set b̂ = L^{−1} b and x̂_0 = L^H x_0.
  Apply CG to Âx̂ = b̂ with initial guess x̂_0.
  Set x_k = L^{−H} x̂_k.

In order to be more efficient than unpreconditioned CG:
- faster convergence, guaranteed if κ(Â) ≪ κ(A);
- computing matrix-vector products z = Âp must be cheap. Computing z = Âp = L^{−1} A L^{−H} p:
  1. solve L^H q = p for q (q = L^{−H} p);
  2. compute t = Aq;
  3. solve Lz = t for z.
The solution of linear systems with the coefficient matrices L^H and L needs to be cheap.

2.4.2 Some simple preconditioners

Diagonal preconditioning: for A > 0 with diagonal entries a_jj > 0, j = 1, 2, ..., n, take
  M = L L^H = diag(A) = diag(a_11, a_22, ..., a_nn),  L = diag(√a_11, √a_22, ..., √a_nn).

Incomplete Cholesky factorization: instead of computing L with A = L L^H, compute a sparser L such that A ≈ L L^H = M. Typical approach: prescribe the sparsity of L:
- choose Ω ⊆ {(j, k) | 1 ≤ k < j ≤ n};
- construct L such that l_jk = 0 for (j, k) ∉ Ω, j > k.
For example, Ω = {(j, k) | 1 ≤ k < j ≤ n and a_jk ≠ 0}; then L has the same sparsity structure as the lower-triangular part of A. Recall the kth step of the Cholesky factorization; in the incomplete version one keeps the entries l_jk with (j, k) ∈ Ω or j = k, sets l_jk = 0 otherwise, and performs updates only on entries inside the pattern. Result: if all goes well, L = [l_jk] with l_jk = 0 for (j, k) ∉ Ω, j > k, and A ≈ L L^H = M.
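The zero-fill incomplete Cholesky just described (Ω = sparsity pattern of the lower triangle of A) can be sketched as follows. This dense-matrix version is our own illustration (the pattern is tracked implicitly via zero entries, and a practical code would use sparse storage); for the tridiagonal M-matrix T_m the exact Cholesky factor is itself bidiagonal, so here the incomplete factorization reproduces it exactly.

```python
import numpy as np

def ichol0(A):
    """Incomplete Cholesky with zero fill, IC(0): compute lower-triangular L
    with the sparsity pattern of tril(A) such that A ≈ L L^T (dense sketch)."""
    n = A.shape[0]
    L = np.tril(A).astype(float)
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])          # can break down if the pivot is <= 0
        for j in range(k + 1, n):
            if L[j, k] != 0.0:              # scale only entries inside the pattern
                L[j, k] /= L[k, k]
        for j in range(k + 1, n):
            for i in range(j, n):
                if L[i, j] != 0.0:          # discard updates outside the pattern
                    L[i, j] -= L[i, k] * L[j, k]
    return L

# Tridiagonal M-matrix: IC(0) coincides with the exact Cholesky factor.
m = 10
A = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
L = ichol0(A)
```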
In general, the incomplete Cholesky factorization can break down due to a pivot l_kk ≤ 0. However, no breakdown can occur if A is an H-matrix.

Notation: for B = [b_jk] ∈ C^{n×n} we write B ≥ 0 if b_jk ≥ 0 for all j, k. The spectral radius of B is
  ρ(B) = max{|λ| : λ ∈ σ(B)},
where σ(B) is the set of eigenvalues of B.

Definition: a matrix A ∈ R^{n×n} is said to be an M-matrix if there exist s ∈ R, s > 0, and a matrix B ∈ R^{n×n}, B ≥ 0, such that A = sI − B and ρ(B) < s. Remark: A = sI − B means a_jj = s − b_jj and a_jk = −b_jk ≤ 0 if j ≠ k.

Definition (equivalent): a matrix A is an M-matrix if a_jk ≤ 0 for all j ≠ k, A is nonsingular, and A^{−1} ≥ 0 (cf. Green's function).

Example: the tridiagonal matrix T_m = tridiag(−1, 2, −1) ∈ R^{m×m} is an M-matrix for any m ≥ 1. Note that T_m = sI − B, where s = 2 and B = tridiag(1, 0, 1). The eigenvalues λ_l of B are given by
  λ_l = 2 cos(lπ/(m+1)),  l = 1, 2, ..., m,
so ρ(B) = 2 cos(π/(m+1)) < 2 = s. The matrix T_{m×m} (the 2D analogue) is also an M-matrix for any m ≥ 1.

Definition: a matrix A = [a_jk] ∈ C^{n×n} is said to be an H-matrix if the matrix Â = [â_jk] ∈ R^{n×n} defined by
  â_jk = |a_jk| if j = k,  â_jk = −|a_jk| if j ≠ k,  j, k = 1, 2, ..., n,
is an M-matrix.

Theorem: let A ∈ C^{n×n}, A > 0, be an H-matrix. Then for any set Ω ⊆ {(j, k) | 1 ≤ k < j ≤ n} the corresponding incomplete Cholesky factorization A ≈ L L^H exists. If Ω_1 ⊆ Ω_2, with A ≈ L_1 L_1^H = M_1 and A ≈ L_2 L_2^H = M_2, then L_2 is less sparse than L_1; in general, M_2 is a better preconditioner than M_1.

SSOR-type preconditioners: let A > 0, and write A = D_0 − E − E^H, where D_0 > 0 is the diagonal part of A and −E is the strictly lower-triangular part of A. Define
  M = (D − E) D^{−1} (D − E)^H,
where D > 0 is diagonal. Motivation: if D = D_0, then
  M = (D_0 − E) D_0^{−1} (D_0 − E)^H = D_0 − E − E^H + E D_0^{−1} E^H = A + E D_0^{−1} E^H ≈ A
if E D_0^{−1} E^H is small.

We rewrite M = L L^H with L = (D − E) D^{−1/2} and L^H = D^{−1/2} (D − E)^H, so that
  Â = L^{−1} A L^{−H} = D^{1/2} (D − E)^{−1} A (D − E^H)^{−1} D^{1/2}.
Computation of z = Âp: with D_1 := D_0 − 2D,
  (D − E)^{−1} A (D − E^H)^{−1}
    = (D − E)^{−1} (D_0 − 2D + (D − E) + (D − E^H)) (D − E^H)^{−1}
    = (D − E^H)^{−1} + (D − E)^{−1} D_1 (D − E^H)^{−1} + (D − E)^{−1}.
Hence z = Âp can be computed with two triangular solves and one multiplication by the diagonal matrix D_1, i.e., with roughly the same amount of work as z = Ap. That is, SSOR-type preconditioning is almost free (the Eisenstat trick).
Algorithm 3 (Eisenstat trick for SSOR preconditioning).
Input: p̂ ∈ C^n; routines to multiply by D_1 and to solve triangular linear systems with the matrices D − E and D − E^H.
Output: ẑ = Âp̂.
  Solve (D − E^H) v = D^{1/2} p̂ for v.
  Solve (D − E) w = D^{1/2} p̂ + D_1 v for w.
  Set ẑ = D^{1/2} (w + v).
Note: the two triangular solves with D − E^H and D − E require about the same work as a multiplication by E + E^H.

2.5 Krylov Subspace Methods for General Linear Systems

Consider Ax = b, where A ∈ C^{n×n} is nonsingular and b ∈ C^n. If A is not Hermitian positive definite (A ≯ 0), CG cannot be used directly, because p_k^H A p_k = 0 is possible (breakdown).

Poor man's use of CG to solve Ax = b:
  Ax = b ⟺ A^H A x = A^H b  (normal equations),
since A^H is nonsingular. Note that A^H A > 0, so the normal equations can be solved with CG. The resulting method is CGNE (CG applied to the Normal Equations). With r_k = b − A x_k,
  ||x − x_k||_{A^H A}^2 = (x − x_k)^H A^H A (x − x_k) = ||A(x − x_k)||_2^2 = ||b − A x_k||_2^2,
so CGNE produces iterates x_k ∈ x_0 + K_k(A^H A, A^H r_0) such that
  ||r_k||_2 = ||b − A x_k||_2 = min over x̃ ∈ x_0 + K_k(A^H A, A^H r_0) of ||b − A x̃||_2.
CGNE error bound:
  ||b − A x_k||_2 / ||b − A x_0||_2 ≤ 2 ((√κ(A^H A) − 1)/(√κ(A^H A) + 1))^k = 2 ((κ(A) − 1)/(κ(A) + 1))^k,
since κ(A^H A) = κ(A)^2, where κ(A) = σ_max(A)/σ_min(A), and σ_max(A) = ||A||_2 and σ_min(A) = 1/||A^{−1}||_2 are the largest and smallest singular values of A, respectively. Compared to the CG bound for A > 0 (which involves √κ), we obtain very slow convergence in general.

A second poor man's use of CG:
  Ax = b ⟺ A A^H y = b,  x = A^H y.
Note that A A^H > 0, so this system can be solved with CG. The resulting method is Craig's method: x_k = A^H y_k, with ||x − x_k||_2 = ||A^H(y − y_k)||_2 = ||y − y_k||_{A A^H}, and
  y_k ∈ y_0 + K_k(A A^H, b − A A^H y_0) ⟺ x_k ∈ x_0 + K_k(A^H A, A^H r_0).
The iterates x_k produced by Craig's method satisfy
  ||x − x_k||_2 = min over x̃ ∈ x_0 + K_k(A^H A, A^H r_0) of ||x − x̃||_2.
Error bound:
  ||x − x_k||_2 ≤ 2 ((κ(A) − 1)/(κ(A) + 1))^k ||x − x_0||_2.
Same slow convergence as CGNE in general.

True Krylov subspace methods for A ≯ 0 use
  x_k ∈ x_0 + K_k(A, r_0),  not  x_k ∈ x_0 + K_k(A^H A, A^H r_0).
Minimal residual method: x_0 ∈ C^n → x_1 → x_2 → ..., where x_k ∈ x_0 + K_k(A, r_0) is such that
  ||b − A x_k||_2 = min over x̃ ∈ x_0 + K_k(A, r_0) of ||b − A x̃||_2.
Note: if A > 0, then the conjugate residual algorithm is a minimal residual method.

Recall that CG is based on short recurrences:
  x_{k+1} = x_k + α_k p_k,  r_{k+1} = r_k − α_k A p_k,  p_{k+1} = r_{k+1} + β_{k+1} p_k
(coupled two-term recurrences).

Theorem: for general A ∈ C^{n×n} it is not possible to implement a minimal residual method with short recurrences; to generate x_{k+1}, iterates from all previous iterations are needed.
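The CGNE idea above can be sketched directly: run the CG recurrences on A^H A x = A^H b using only products with A and A^H, never forming A^H A. The code and the well-conditioned random test matrix are our own illustration.

```python
import numpy as np

def cgne(A, b, x0, tol=1e-10, maxiter=5000):
    """CGNE sketch: CG applied to the normal equations A^H A x = A^H b.
    Only products with A and A^H are needed; A^H A is never formed."""
    x = x0.astype(complex).copy()
    r = b - A @ x                        # residual of the original system
    s = A.conj().T @ r                   # residual of the normal equations
    p = s.copy()
    for _ in range(maxiter):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        q = A @ p
        alpha = np.vdot(s, s) / np.vdot(q, q)    # s^H s / p^H (A^H A) p
        x = x + alpha * p
        r = r - alpha * q
        s_new = A.conj().T @ r
        beta = np.vdot(s_new, s_new) / np.vdot(s, s)
        p = s_new + beta * p
        s = s_new
    return x

# Nonsymmetric example: plain CG may break down, CGNE still converges,
# though at a rate governed by kappa(A)^2 rather than kappa(A).
rng = np.random.default_rng(0)
n = 30
A = 3.0 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x = cgne(A, b, np.zeros(n))
```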
The most elegant actual implementation is GMRES, the Generalized Minimal Residual method of Saad and Schultz (1986).

2.6 The Arnoldi Process

Goal: given A ∈ C^{n×n} (not necessarily nonsingular) and r_0 ∈ C^n, construct orthonormal basis vectors v_1, v_2, ..., v_k for K_k(A, r_0), k = 1, 2, .... The Arnoldi process is Gram–Schmidt tailored to Krylov subspaces:
  K_k(A, r_0) = span{r_0, A r_0, ..., A^{k−1} r_0} = span{v_1, v_2, ..., v_k},
with A v_k ∈ K_{k+1}(A, r_0) ∖ K_k(A, r_0) (generically).

Algorithm 4 (Arnoldi process).
Input: r_0 ∈ C^n and a routine to compute q = Av for any v ∈ C^n.
Output: vectors v_1, v_2, ..., v_k, k ≤ d(A, r_0).
  Set v_1 = r_0 / ||r_0||_2.
  for k = 1, 2, ... do
    Compute q = A v_k.
    for j = 1, 2, ..., k do
      Set h_jk = v_j^H q.
      Set q = q − h_jk v_j.
    end  (so that q = A v_k − Σ_{j=1}^k h_jk v_j)
    Set h_{k+1,k} = ||q||_2.
    if h_{k+1,k} = 0 then
      Stop: the Krylov subspace K_k(A, r_0) has reached its maximal dimension, i.e., k = d(A, r_0).
    end
    Set v_{k+1} = q / h_{k+1,k}.
  end

Proposition (properties of the Arnoldi process):
1. v_j^H v_k = 1 if j = k and 0 if j ≠ k; i.e., the v_k's are orthonormal.
2. The v_k's form a basis of K_k(A, r_0): K_k(A, r_0) = span{v_1, v_2, ..., v_k}, k = 1, 2, ..., d(A, r_0).
3. For all k = 1, 2, ..., d(A, r_0):
  h_{k+1,k} v_{k+1} = A v_k − Σ_{j=1}^k h_jk v_j.
Note that 3. can be written compactly as
  A V_k = V_k H_k + h_{k+1,k} v_{k+1} e_k^T = V_{k+1} H̃_k,
where
  V_k = [v_1 v_2 ... v_k] ∈ C^{n×k},  V_{k+1} = [v_1 v_2 ... v_k v_{k+1}] ∈ C^{n×(k+1)},
and H̃_k = [h_jl], j = 1, 2, ..., k+1, l = 1, 2, ..., k, with h_jl = 0 for j > l + 1, is an upper Hessenberg matrix with full column rank, rank H̃_k = k. We denote by H_k the k×k submatrix of H̃_k obtained by deleting the last row. Also, orthonormality of the v_k's gives V_k^H V_k = I_k. Let m = d(A, r_0). Then A V_m = V_m H_m, and H_k = V_k^H A V_k for 1 ≤ k ≤ m, since V_k^H v_{k+1} = 0.
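Algorithm 4 translates almost line for line into NumPy; the sketch below (our own illustration, using modified Gram-Schmidt for the orthogonalization loop) returns V_{k+1} and the (k+1)×k upper Hessenberg matrix H̃_k, so the compact relation A V_k = V_{k+1} H̃_k can be checked numerically.

```python
import numpy as np

def arnoldi(matvec, r0, k):
    """Arnoldi process sketch (Algorithm 4): build orthonormal v_1..v_{k+1}
    and the (k+1) x k upper Hessenberg H_tilde with A V_k = V_{k+1} H_tilde."""
    n = r0.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(k):
        q = matvec(V[:, j])
        for i in range(j + 1):               # orthogonalize against v_1..v_{j+1}
            H[i, j] = V[:, i] @ q
            q = q - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(q)
        if H[j + 1, j] == 0.0:               # maximal Krylov dimension reached
            return V[:, : j + 1], H[: j + 1, : j]
        V[:, j + 1] = q / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
n, k = 40, 8
A = rng.standard_normal((n, n))
r0 = rng.standard_normal(n)
V, H = arnoldi(lambda v: A @ v, r0, k)
```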
2.7 GMRES

2.7.1 The minimal residual method

We seek x_k ∈ x_0 + K_k(A, r_0) such that
  ||b − A x_k||_2 = min over x̃ ∈ x_0 + K_k(A, r_0) of ||b − A x̃||_2.
Here x_0 ∈ C^n and r_0 = b − A x_0. Assume that r_0 ≠ 0 (⟺ x_0 ≠ A^{−1}b), and recall K_k(A, r_0) = span{r_0, A r_0, A^2 r_0, ..., A^{k−1} r_0}.

GMRES is an implementation of the minimal residual method using the Arnoldi process. Recall
  K_k(A, r_0) = {V_k z | z ∈ C^k},  A V_k = V_{k+1} H̃_k,  V_k^H V_k = I_k,  H̃_k ∈ C^{(k+1)×k}.
At step k of GMRES:
  x ∈ x_0 + K_k(A, r_0) ⟺ x = x_0 + V_k z for some z ∈ C^k.
With β_1 := ||r_0||_2 we have r_0 = β_1 v_1 = β_1 V_{k+1} e_1, and the corresponding residual vector is
  b − A x = b − A x_0 − A V_k z = r_0 − V_{k+1} H̃_k z = V_{k+1} (β_1 e_1 − H̃_k z),
where e_1 = [1 0 ... 0]^T ∈ R^{k+1}. Thus
  ||b − A x||_2 = ||V_{k+1} (β_1 e_1 − H̃_k z)||_2 = ||β_1 e_1 − H̃_k z||_2,
since V_{k+1}^H V_{k+1} = I_{k+1}. To get the kth GMRES iterate x_k:
1. Find z_k ∈ C^k such that
  ||β_1 e_1 − H̃_k z_k||_2 = min over z ∈ C^k of ||β_1 e_1 − H̃_k z||_2.   (LS_k)
2. Set x_k = x_0 + V_k z_k.
(LS_k) is a least-squares problem with a (k+1)×k matrix H̃_k of full column rank k. So there exists a unique solution z_k ∈ C^k, and thus the kth minimal residual iterate x_k is unique.

2.7.2 Solution of (LS_k)

Let Q_k ∈ C^{(k+1)×(k+1)} be a unitary matrix (Q_k^H Q_k = I_{k+1}) such that
  Q_k H̃_k = [R_k; 0],
where R_k = [r_jl] ∈ C^{k×k} is upper triangular. Then rank H̃_k = rank R_k = k ⟺ r_jj ≠ 0 for all j = 1, 2, ..., k. Then
  min over z of ||β_1 e_1 − H̃_k z||_2 = min over z of ||Q_k (β_1 e_1) − Q_k H̃_k z||_2
    = min over z of || [f_k − R_k z; τ_{k+1}] ||_2 = |τ_{k+1}|,
where we set Q_k (β_1 e_1) = [f_k; τ_{k+1}] with f_k ∈ C^k, τ_{k+1} ∈ C, and choose z_k = R_k^{−1} f_k. In particular,
  ||b − A x_k||_2 = ||β_1 e_1 − H̃_k z_k||_2 = |τ_{k+1}|.

Algorithm 5 (GMRES).
Input: x_0 ∈ C^n; a routine to compute matrix-vector products q = Av; a convergence tolerance tol.
  Set r_0 = b − A x_0 and β_1 = ||r_0||_2.
  if β_1 = 0 then stop: x_0 = A^{−1}b is the solution of Ax = b.
  Set v_1 = r_0 / β_1.
  for k = 1, 2, ... do
    Perform the kth step of the Arnoldi process.
    Determine z_k and τ_{k+1} such that
      ||β_1 e_1 − H̃_k z_k||_2 = min over z ∈ C^k of ||β_1 e_1 − H̃_k z||_2 = |τ_{k+1}|.
    if |τ_{k+1}| / β_1 (= ||b − A x_k||_2 / ||b − A x_0||_2) ≤ tol then
      Set x_k = x_0 + V_k z_k and stop: x_k ≈ A^{−1}b.
    end
  end

2.7.3 Computation of z_k and τ_{k+1}

Recall elementary Givens rotations
  G = [c s; −s̄ c̄] ∈ C^{2×2},  where |c|^2 + |s|^2 = 1.
Notes:
1. For any h ∈ C^2, G can be chosen such that Gh = [·; 0].
2. G is unitary: G^H G = I_2.
The matrix H̃_k is reduced to [R_k; 0] by a product of k Givens rotations.

2.7.4 Update of the QR factorization from k to k + 1

Suppose Q_k H̃_k = [R_k; 0]. The matrix H̃_{k+1} consists of H̃_k together with a new last column [h_{1,k+1}, ..., h_{k+1,k+1}, h_{k+2,k+1}]^T and a new last row. Apply the previous rotations (embedded in Q_k) to the new column, then select a Givens rotation G_{k+1} = [c_{k+1} s_{k+1}; −s̄_{k+1} c̄_{k+1}], acting on the entries k+1 and k+2, such that the transformed subdiagonal entry h_{k+2,k+1} is annihilated. Then
  Q_{k+1} = [I_k 0; 0 G_{k+1}] [Q_k 0; 0 1],  Q_{k+1} H̃_{k+1} = [R_{k+1}; 0],
and the right-hand side is updated by
  β_1 Q_{k+1} e_1 = [f_{k+1}; τ_{k+2}],  f_{k+1} = [f_k; c_{k+1} τ_{k+1}],  τ_{k+2} = −s̄_{k+1} τ_{k+1}.
Thus each step costs only O(k) work beyond the Arnoldi step itself.
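A compact GMRES sketch following Algorithm 5 is shown below. It is our own illustration and deliberately simplified: the least-squares problem (LS_k) is re-solved from scratch with a dense solver at every step instead of using the O(k) Givens-rotation update of Section 2.7.4, and the residual norm `res` plays the role of |τ_{k+1}|.

```python
import numpy as np

def gmres(matvec, b, x0, tol=1e-10, maxiter=60):
    """GMRES sketch (Algorithm 5): Arnoldi steps plus the least-squares
    problem LS_k, solved here with a dense lstsq instead of Givens updates."""
    r0 = b - matvec(x0)
    beta1 = np.linalg.norm(r0)
    if beta1 == 0.0:
        return x0
    n = b.shape[0]
    V = np.zeros((n, maxiter + 1))
    H = np.zeros((maxiter + 1, maxiter))
    V[:, 0] = r0 / beta1
    for k in range(1, maxiter + 1):
        q = matvec(V[:, k - 1])                 # one Arnoldi step
        for i in range(k):
            H[i, k - 1] = V[:, i] @ q
            q = q - H[i, k - 1] * V[:, i]
        H[k, k - 1] = np.linalg.norm(q)
        if H[k, k - 1] > 0.0:
            V[:, k] = q / H[k, k - 1]
        rhs = np.zeros(k + 1)
        rhs[0] = beta1                          # beta_1 e_1
        z, *_ = np.linalg.lstsq(H[: k + 1, :k], rhs, rcond=None)
        res = np.linalg.norm(rhs - H[: k + 1, :k] @ z)   # = ||b - A x_k||_2
        if res <= tol * beta1 or H[k, k - 1] == 0.0:
            return x0 + V[:, :k] @ z
    return x0 + V[:, :maxiter] @ z

rng = np.random.default_rng(2)
n = 40
A = 2.0 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)  # spectrum clustered away from 0
b = rng.standard_normal(n)
x = gmres(lambda v: A @ v, b, np.zeros(n))
```

The test matrix has its eigenvalues clustered near 2, which, as Section 2.8 explains, is exactly the situation where GMRES converges quickly.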
2.8 Convergence of the Minimal Residual Method

Note that
  K_k(A, r_0) = span{r_0, A r_0, ..., A^{k−1} r_0} = {p(A) r_0 | p ∈ Π_{k−1}},
where Π_{k−1} denotes the set of polynomials of degree at most k − 1. Thus
  x ∈ x_0 + K_k(A, r_0) ⟺ x = x_0 + p(A) r_0 for some p ∈ Π_{k−1},
and hence
  b − A x = r_0 − A p(A) r_0 = (I − A p(A)) r_0 = q(A) r_0,
where q(λ) = 1 − λ p(λ) ∈ Π_k with q(0) = 1.

2.8.1 Minimal residual property

  ||b − A x_k||_2 = min over q ∈ Π_k with q(0) = 1 of ||q(A) r_0||_2
         ≤ ||r_0||_2 · min over q ∈ Π_k with q(0) = 1 of ||q(A)||_2.

Theorem: the iterates x_k, k = 1, 2, ..., generated by a minimal residual method such as GMRES satisfy
  ||r_k||_2 / ||r_0||_2 ≤ min over q ∈ Π_k, q(0) = 1, of ||q(A)||_2.
Moreover, if A is diagonalizable, i.e., A = U Λ U^{−1} with U ∈ C^{n×n} nonsingular and Λ = diag(λ_1, ..., λ_n), then
  ||r_k||_2 / ||r_0||_2 ≤ κ_2(U) · min over q ∈ Π_k, q(0) = 1, of max over j = 1, ..., n of |q(λ_j)|,
where κ_2(U) = ||U||_2 ||U^{−1}||_2 is the condition number of U w.r.t. the ||·||_2-norm.

Corollary: if S ⊆ C is such that λ_1, λ_2, ..., λ_n ∈ S and 0 ∉ S, then
  min over q of max over j of |q(λ_j)| ≤ min over q ∈ Π_k, q(0) = 1, of max over λ ∈ S of |q(λ)|.

Figure 2.1: σ(A) and S in the complex plane. [figure omitted]

Proof (of the theorem): since A = U Λ U^{−1}, we have A^k = U Λ^k U^{−1} and thus q(A) = U q(Λ) U^{−1}, so
  ||q(A)||_2 ≤ ||U||_2 ||U^{−1}||_2 ||q(Λ)||_2 = κ_2(U) max over j of |q(λ_j)|.
Together with the minimal residual property, the claim follows. ∎

Recall that for any compact set S ⊆ C with σ(A) ⊆ S and 0 ∉ S, the same bound holds with max over λ_j replaced by max over λ ∈ S; note that q ∈ Π_k with q(0) = 1 means q(λ) = 1 − λ p(λ) for some p ∈ Π_{k−1}. Consequence: fast convergence of GMRES if the eigenvalues of A are clustered away from 0.

Figure 2.2: good spectrum distribution (left) and bad spectrum distribution (right) for GMRES. [figure omitted]

Example: eigenvalues σ(A) in a disk S = {λ ∈ C : |λ − c| ≤ ρ}, where ρ < |c|, so that 0 ∉ S. (Figure 2.3: σ(A) clustered in the disk S. [figure omitted]) Take q_k(λ) = (1 − λ/c)^k, so q_k(0) = 1. For λ ∈ ∂S, λ = c + ρ e^{iθ}, 0 ≤ θ < 2π, we have
  |q_k(λ)| = |c − λ|^k / |c|^k = (ρ/|c|)^k,
and hence max over λ ∈ S of |q_k(λ)| = (ρ/|c|)^k. Therefore
  ||r_k||_2 / ||r_0||_2 ≤ κ_2(U) (ρ/|c|)^k,  k = 1, 2, ....

2.8.2 A convergence bound based on Bendixson's theorem

For any A ∈ C^{n×n} we have the decomposition
  A = A_H + A_S,  A_H = (A + A^H)/2,  A_S = (A − A^H)/2,
where A_H = A_H^H is the Hermitian part of A and A_S = −A_S^H is the skew-Hermitian part of A.

Theorem (Bendixson's theorem): let A ∈ C^{n×n}. For any λ ∈ σ(A),
  λ_min(A_H) ≤ Re λ ≤ λ_max(A_H),  −ρ(A_S) ≤ Im λ ≤ ρ(A_S),
where ρ(A_S) = max over μ ∈ σ(A_S) of |μ|. Note that 0 lies outside the resulting rectangle if and only if λ_min(A_H) > 0 or λ_max(A_H) < 0; also, λ_min(A_H) > 0 ⟺ A_H > 0.

Figure 2.4: the rectangle [λ_min(A_H), λ_max(A_H)] × [−ρ(A_S), ρ(A_S)] containing σ(A). [figure omitted]

Theorem: if A ∈ C^{n×n} and A_H > 0, then
  ||r_k||_2 / ||r_0||_2 ≤ (1 − (λ_min(A_H) / ||A||_2)^2)^{k/2}.
2.9 Preconditioned GMRES

Consider Ax = b, A ∈ C^{n×n} nonsingular, b ∈ C^n. Preconditioner: M ∈ C^{n×n} nonsingular with M = M_1 M_2, M_1, M_2 ∈ C^{n×n}, such that linear systems with M_1 and M_2 are easy to solve, and M ≈ A in some sense. Let x_0 ∈ C^n be any initial guess for Ax = b. Then
  Ax = b ⟺ A(x − x_0) = b − A x_0
      ⟺ (M_1^{−1} A M_2^{−1}) (M_2 (x − x_0)) = M_1^{−1} (b − A x_0)
      ⟺ Âx̂ = b̂,  x = x_0 + M_2^{−1} x̂.
GMRES is applied to Âx̂ = b̂ with x̂_0 = 0 instead of to Ax = b. Fast convergence of GMRES if σ(Â) is clustered away from 0. Note that
  σ(Â) = σ(M_1^{−1} A M_2^{−1}) = σ(M_2^{−1} M_1^{−1} A M_2^{−1} M_2) = σ(M^{−1} A).
Thus M ≈ A ⟺ M^{−1} A ≈ I ⟺ σ(Â) is clustered.

Algorithm 6 (preconditioned GMRES).
Input: x_0 ∈ C^n; a routine to compute matrix-vector products q = Av; routines to solve linear systems with M_1 and M_2; a convergence tolerance tol.
Output: approximate solution x_k ≈ A^{−1}b of Ax = b.
1. Solve M_1 b̂ = b − A x_0 for b̂.
2. Set x̂_0 = 0 (so r̂_0 = b̂) and run GMRES on Âx̂ = b̂ with initial guess x̂_0 to residual accuracy tol.
3. Solve M_2 w = x̂_k for w and set x_k = x_0 + w.
Note that step 2 requires matrix-vector products q = Âv = M_1^{−1} A M_2^{−1} v. Each such product requires 1 solve with M_2, 1 multiplication with A, and 1 solve with M_1.

2.9.1 Some preconditioners

We write A = D_0 − E − F, where D_0, −E, −F are the diagonal part (which is assumed nonsingular), the strictly lower-triangular part, and the strictly upper-triangular part of A, respectively.

Diagonal preconditioning: M = D_0. Typically M_1 = D_0, M_2 = I, or M_2 = D_0, M_1 = I.

Incomplete LU factorization: P A Q ≈ L U, where P, Q are permutation matrices and L, U are lower and upper triangular, respectively. Thus A ≈ P^T L U Q^T = M, e.g. with M_1 = P^T L and M_2 = U Q^T.

SSOR-type preconditioning: based on A = D_0 − E − F. With the Eisenstat trick, the cost for q = Âv is about the same as for q = Av.

2.10 Restarted GMRES

W.l.o.g. A ∈ C^{n×n} is already the preconditioned matrix. Recall the kth step of the Arnoldi process,
  h_{k+1,k} v_{k+1} = A v_k − h_{1k} v_1 − h_{2k} v_2 − ... − h_{kk} v_k.
It requires 1 multiplication with A, k + 1 inner products of vectors of length n, k SAXPYs, 1 division of a vector of length n by a scalar, and storage of ALL vectors v_1, v_2, ..., v_{k+1}. For very large n, the Arnoldi process therefore becomes too expensive quickly. Remedy: restarts. Let k_0 be the largest number of Arnoldi steps one is willing to run. Typical values: k_0 = 50 or k_0 = 100.
Algorithm 7 (restarted GMRES).
Input: x_0 ∈ C^n; a routine to compute matrix-vector products q = Av; a convergence tolerance tol; the restart parameter k_0.
Step 1: Set ρ_0 = ||b − A x_0||_2.
Step 2: Run GMRES until
  (a) ||r_k||_2 / ρ_0 ≤ tol, or
  (b) k = k_0 is reached.
In case (a), stop: x_k ≈ A^{−1}b. In case (b), set x_0 = x_{k_0} and repeat Step 2.

Remark:
(a) The error bounds derived for full GMRES are no longer valid for restarted GMRES.
(b) Convergence of restarted GMRES can be very sensitive to the choice of k_0.

2.11 The Lanczos Process

Given A ∈ C^{n×n} and r_0 ∈ C^n, our goal is to generate vectors v_1, v_2, ..., v_k such that
  K_k(A, r_0) = span{v_1, v_2, ..., v_k},  k = 1, 2, ..., d(A, r_0),
using recurrences of fixed length. Recall that the Arnoldi process generates orthonormal v_k's, with the recurrences for the first k steps written compactly as A V_k = V_{k+1} H̃_k, where V_k = [v_1 v_2 ... v_k] and H̃_k is defined as in the sections above. Since the v_k's are orthonormal, we have V_k^H V_{k+1} = [V_k^H V_k, V_k^H v_{k+1}] = [I_k, 0]; thus
  V_k^H A V_k = V_k^H V_{k+1} H̃_k = H_k,
where H_k ∈ C^{k×k} is upper Hessenberg.

Theorem: the upper Hessenberg matrix H_k ∈ C^{k×k} produced by k steps of the Arnoldi process is the projection of A onto the kth Krylov subspace K_k(A, r_0): H_k = V_k^H A V_k. Note that h_{j+1,j} > 0.

Special case A = A^H:
  H_k = V_k^H A V_k = V_k^H A^H V_k = H_k^H.
That is, H_k is Hermitian; being Hessenberg with real positive subdiagonal, H_k is in fact a real symmetric tridiagonal matrix. We set
  T_k := H_k,
the tridiagonal matrix with diagonal entries α_1, α_2, ..., α_k and sub-/superdiagonal entries β_2, β_3, ..., β_k, where α_j, β_j ∈ R and β_{j+1} = h_{j+1,j} = h_{j,j+1} > 0. Then
  A V_k = V_k T_k + β_{k+1} v_{k+1} e_k^T
  ⟺ A v_j = β_j v_{j−1} + α_j v_j + β_{j+1} v_{j+1},  j = 1, 2, ..., k
  ⟺ β_{j+1} v_{j+1} = A v_j − α_j v_j − β_j v_{j−1},  j = 1, 2, ..., k.
This means that for A = A^H the orthonormal vectors v_k can be generated by means of a 3-term recurrence. Note that from v_j^H v_{j+1} = 0 we get α_j = v_j^H (A v_j − β_j v_{j−1}) = v_j^H A v_j. The Arnoldi process for A = A^H is the Hermitian Lanczos process.

Algorithm 8 (Hermitian Lanczos process).
Input: r_0 ∈ C^n and a routine to compute matrix-vector products q = Av for any v ∈ C^n, where A = A^H.
Output: orthonormal basis vectors v_k for the Krylov subspaces K_k(A, r_0).
  Set β_1 = ||r_0||_2.
  if β_1 = 0 then stop: r_0 = 0.
  Set v_1 = r_0 / β_1 and v_0 = 0.
  for k = 1, 2, ... do
    Compute q = A v_k.
    Set q = q − β_k v_{k−1}.
    Compute α_k = v_k^H q.
    Set q = q − α_k v_k.
    Set β_{k+1} = ||q||_2.
    if β_{k+1} = 0 then
      Stop: the Krylov subspace K_k(A, r_0) has reached its maximal dimension, i.e., k = d(A, r_0).
    end
    Set v_{k+1} = q / β_{k+1}.
  end
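Algorithm 8 can be sketched as follows (our own illustration). The block also assembles the tridiagonal T_k from the recurrence coefficients so that the projection relation T_k = V_k^H A V_k can be verified; the eigenvalues of T_k are the Ritz values used as approximate eigenvalues of A later in this chapter.

```python
import numpy as np

def lanczos(matvec, r0, k):
    """Hermitian Lanczos sketch (Algorithm 8): 3-term recurrence producing
    orthonormal v_1..v_k and the coefficients of T_k = V_k^H A V_k."""
    n = r0.shape[0]
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)               # beta[j] stores beta_{j+2} (off-diagonals)
    v_prev = np.zeros(n)
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(k):
        q = matvec(V[:, j])
        if j > 0:
            q = q - beta[j - 1] * v_prev     # subtract beta_j v_{j-1}
        alpha[j] = V[:, j] @ q
        q = q - alpha[j] * V[:, j]
        if j + 1 < k:
            beta[j] = np.linalg.norm(q)
            if beta[j] == 0.0:               # maximal Krylov dimension reached
                return V[:, : j + 1], alpha[: j + 1], beta[:j]
            v_prev = V[:, j]
            V[:, j + 1] = q / beta[j]
    return V, alpha, beta[: k - 1]

rng = np.random.default_rng(3)
n, k = 60, 12
M = rng.standard_normal((n, n))
A = M + M.T                                  # real symmetric test matrix
V, alpha, beta = lanczos(lambda v: A @ v, rng.standard_normal(n), k)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
```

Note that, as the remark below explains, the computed v_k gradually lose orthogonality in finite precision; for small k this effect is still negligible.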
Remark (properties of the Hermitian Lanczos process):
1. The work per iteration is constant: 1 product Av, 2 inner products, 2 SAXPYs, 1 division of a vector by a scalar.
2. If A = A^H > 0, then the v_k's and the CG residuals r_{k−1}^{CG} are scaled versions of each other: r_{k−1}^{CG} = μ_k v_k for some μ_k ∈ C.
3. In finite-precision arithmetic, the vectors v_k gradually lose orthogonality. Reason: we only enforce v_{k+1}^H v_k = 0 and v_{k+1}^H v_{k−1} = 0.
4. Lanczos 3-term recurrence:
  β_{k+1} v_{k+1} = A v_k − α_k v_k − β_k v_{k−1}.
CG coupled 2-term recurrences:
  x_{k+1} = x_k + α_k^{CG} p_k,  r_{k+1} = r_k − α_k^{CG} A p_k,  p_{k+1} = r_{k+1} + β_{k+1}^{CG} p_k.
Often coupled 2-term recurrences behave a little better than 3-term recurrences in finite-precision arithmetic.

2.12 Some Applications

Let A = A^H ∈ C^{n×n}. After k steps of the Lanczos process:
  V_k^H A V_k = T_k ∈ C^{k×k},  V_k = [v_1 v_2 ... v_k],  V_k^H V_k = I_k.

2.12.1 Solution of Ax = b for A > 0

Let x_0 ∈ C^n, r_0 = b − A x_0, β_1 = ||r_0||_2, v_1 = r_0 / β_1. Since A > 0 we have T_k = V_k^H A V_k > 0; in particular, T_k is nonsingular. Solve T_k z_k = β_1 e_1 for z_k ∈ C^k and set x_k = x_0 + V_k z_k. One can show that, in exact arithmetic, x_k = x_k^{CG}, k = 1, 2, ....

2.12.2 Approximate eigenvalues of A = A^H

λ ∈ σ(A) ⟺ Av = λv for some v ∈ C^n, v ≠ 0. We call (λ, v) an eigenpair. Note that if A = A^H, then all eigenvalues λ ∈ σ(A) are real. Approximate eigenpairs (μ, u) of A, where u = V_k z, z ∈ C^k, z ≠ 0: run the Hermitian Lanczos process for k steps with arbitrary r_0 ∈ C^n, r_0 ≠ 0, giving V_k and T_k, and require A V_k z ≈ μ V_k z. Multiplying by V_k^H on both sides, we get
  T_k z = V_k^H A V_k z ≈ μ V_k^H V_k z = μ z.
Use the eigenvalues of T_k as approximate eigenvalues of A: from T_k z_j = θ_j z_j, j = 1, 2, ..., k, we obtain approximate eigenpairs (θ_j, V_k z_j) of A.

2.12.3 Large-scale matrix functions

For example, the matrix exponential:
  f: C → C,  e^λ = Σ_{j=0}^∞ λ^j / j!  converges for any λ ∈ C;
  f: C^{n×n} → C^{n×n},  e^A = Σ_{j=0}^∞ A^j / j! ∈ C^{n×n}  converges for any A ∈ C^{n×n}.
Assume A = A^H; then e^A ≈ V_k e^{T_k} V_k^H. One only needs to compute e^{T_k} for a k×k matrix, k ≪ n.

The general case A ∈ C^{n×n}: we can still obtain
  A V_k = V_k T_k + β_{k+1} v_{k+1} e_k^T,
where T_k is tridiagonal, but then the v_k's are no longer orthonormal. Introduce a second sequence w_1, w_2, ..., w_k such that
  span{w_1, w_2, ..., w_k} = K_k(A^T, c),  k = 1, 2, ..., d(A^T, c),
where c ∈ C^n is a left starting vector. Recurrence relations in compact form:
  A^T W_k = W_k T̃_k + γ_{k+1} w_{k+1} e_k^T,
where W_k = [w_1 w_2 ... w_k] and T̃_k is tridiagonal. Notation: the v_k's and w_k's are called right and left Lanczos vectors, respectively.
The v_k's and w_k's are constructed to be biorthogonal:
  w_j^T v_k = 0 for all j ≠ k,  j, k = 1, 2, ....
We can still choose a normalization of the v_k's and w_k's; we use ||v_k||_2 = ||w_k||_2 = 1 for all k = 1, 2, ....

Algorithm 9 (nonsymmetric Lanczos process).
Input: r_0, c ∈ C^n; routines to compute matrix-vector products q = Av and s = A^T w for any v, w ∈ C^n, where A ∈ C^{n×n}.
  Set β_1 = ||r_0||_2 and γ_1 = ||c||_2.
  if β_1 = 0 or γ_1 = 0 then stop: r_0 = 0 or c = 0.
  Set v_1 = r_0 / β_1, w_1 = c / γ_1, v_0 = w_0 = 0 ∈ R^n, δ_0 = 1.
  for k = 1, 2, ... do
    Compute δ_k = w_k^T v_k.
    if δ_k = 0 then stop: breakdown of the algorithm.
    Compute q = A v_k and s = A^T w_k.
    Set q = q − (γ_k δ_k / δ_{k−1}) v_{k−1} and s = s − (β_k δ_k / δ_{k−1}) w_{k−1}.
    Set α_k = (w_k^T q) / δ_k.
    Set q = q − α_k v_k and s = s − α_k w_k.
    Set β_{k+1} = ||q||_2 and γ_{k+1} = ||s||_2.
    if β_{k+1} = 0 then stop: K_k(A, r_0) has reached its maximal dimension, k = d(A, r_0).
    if γ_{k+1} = 0 then stop: K_k(A^T, c) has reached its maximal dimension, k = d(A^T, c).
    Set v_{k+1} = q / β_{k+1} and w_{k+1} = s / γ_{k+1}.
  end

Proposition:
1. If no breakdown occurs, the algorithm (in exact arithmetic) stops when k = min{d(A, r_0), d(A^T, c)}.
2. The v_k's satisfy the 3-term recurrences
  β_{k+1} v_{k+1} = A v_k − α_k v_k − (γ_k δ_k / δ_{k−1}) v_{k−1},
and the w_k's satisfy the 3-term recurrences
  γ_{k+1} w_{k+1} = A^T w_k − α_k w_k − (β_k δ_k / δ_{k−1}) w_{k−1}.
In compact form, A V_k = V_k T_k + β_{k+1} v_{k+1} e_k^T and A^T W_k = W_k T̃_k + γ_{k+1} w_{k+1} e_k^T, where T_k and T̃_k are the tridiagonal matrices holding the respective recurrence coefficients.
3. Biorthogonality: W_k^T V_k = D_k = diag(δ_1, δ_2, ..., δ_k); D_k is nonsingular if no breakdowns occur.
4. T_k and T̃_k are related as follows: T̃_k = D_k^{−1} T_k^T D_k.
5. The algorithm can be modified so that one can continue in the case of a breakdown (look-ahead Lanczos).
6. If A = A^H, the nonsymmetric Lanczos process reduces to the Hermitian Lanczos process, provided one chooses c = conj(r_0); in this case w_j = conj(v_j) for all j = 1, 2, ....
7. The matrix T_k represents an oblique projection of A onto K_k(A, r_0) and orthogonal to K_k(A^T, c):
  T_k = D_k^{−1} W_k^T A V_k.
Proof (of 7.): multiply A V_k = V_k T_k + β_{k+1} v_{k+1} e_k^T by W_k^T on the left:
  W_k^T A V_k = W_k^T V_k T_k + β_{k+1} W_k^T v_{k+1} e_k^T = D_k T_k,
since W_k^T v_{k+1} = 0. ∎

To summarize, let A ∈ C^{n×n}. After k steps of the nonsymmetric Lanczos process we have
  A V_k = V_k T_k + β_{k+1} v_{k+1} e_k^T,
  A^T W_k = W_k T̃_k + γ_{k+1} w_{k+1} e_k^T,
  W_k^T V_k = D_k,  T̃_k = D_k^{−1} T_k^T D_k,  T_k = D_k^{−1} W_k^T A V_k.
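A sketch of Algorithm 9 is below (our own illustration, without look-ahead, so it simply raises an error on a near-breakdown δ_k ≈ 0; with random starting vectors this is extremely unlikely). The test verifies the biorthogonality relation W_k^T V_k = D_k, i.e., that W^T V is diagonal.

```python
import numpy as np

def two_sided_lanczos(A, r0, c, k):
    """Nonsymmetric Lanczos sketch (Algorithm 9): right/left vectors v_j, w_j
    with unit 2-norm and W_k^T V_k = D_k diagonal. No look-ahead."""
    n = A.shape[0]
    V = np.zeros((n, k))
    W = np.zeros((n, k))
    V[:, 0] = r0 / np.linalg.norm(r0)
    W[:, 0] = c / np.linalg.norm(c)
    delta = np.zeros(k)
    beta = gamma = 0.0
    for j in range(k):
        delta[j] = W[:, j] @ V[:, j]
        if abs(delta[j]) < 1e-14:
            raise RuntimeError("breakdown: w_j^T v_j = 0 (look-ahead needed)")
        q = A @ V[:, j]
        s = A.T @ W[:, j]
        if j > 0:                                  # 3-term recurrence corrections
            q = q - (gamma * delta[j] / delta[j - 1]) * V[:, j - 1]
            s = s - (beta * delta[j] / delta[j - 1]) * W[:, j - 1]
        a = (W[:, j] @ q) / delta[j]               # alpha_j
        q = q - a * V[:, j]
        s = s - a * W[:, j]
        if j + 1 < k:
            beta = np.linalg.norm(q)               # normalizer for v_{j+1}
            gamma = np.linalg.norm(s)              # normalizer for w_{j+1}
            if beta == 0.0 or gamma == 0.0:        # maximal Krylov dimension
                return V[:, : j + 1], W[:, : j + 1], delta[: j + 1]
            V[:, j + 1] = q / beta
            W[:, j + 1] = s / gamma
    return V, W, delta

rng = np.random.default_rng(4)
n, k = 50, 10
A = np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)
V, W, delta = two_sided_lanczos(A, rng.standard_normal(n), rng.standard_normal(n), k)
```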
2.13 Iterative Solution of Ax = b

Assume A is nonsingular, x_0 ∈ C^n, r_0 = b − A x_0, β_1 = ||r_0||_2; c ∈ C^n, c ≠ 0, is arbitrary. The kth iterate:
  x_k ∈ x_0 + K_k(A, r_0) ⟺ x_k = x_0 + V_k z_k,  z_k ∈ C^k.
Thus
  r_k = b − A x_k = b − A x_0 − A V_k z_k = r_0 − (V_k T_k + β_{k+1} v_{k+1} e_k^T) z_k
     = V_k (β_1 e_1 − T_k z_k) − β_{k+1} (e_k^T z_k) v_{k+1},
since r_0 = β_1 v_1 = V_k (β_1 e_1).

Choice 1: choose z_k such that T_k z_k = β_1 e_1. Equivalently, require W_k^T r_k = 0, i.e., w^T r_k = 0 for all w ∈ K_k(A^T, c). Indeed,
  0 = W_k^T r_k = W_k^T V_k (β_1 e_1 − T_k z_k) − β_{k+1} (e_k^T z_k) W_k^T v_{k+1} = D_k (β_1 e_1 − T_k z_k),
which is equivalent to T_k z_k = β_1 e_1. The resulting method is called the biconjugate gradient method (BCG):
  x_k = x_0 + V_k z_k,  where z_k is the solution of T_k z_k = β_1 e_1.
Notes:
1. BCG generalizes CG to non-Hermitian A, but BCG has no minimization property.
2. BCG has a second kind of breakdown if T_k is singular; then x_k is not even uniquely defined.
3. Often the BCG method does not converge, or convergence is erratic (huge spikes in ||r_k||_2).

Choice 2: mimic GMRES. Set
  T̂_k = [T_k; β_{k+1} e_k^T] ∈ C^{(k+1)×k},
i.e., T̂_k is T_k with the extra row β_{k+1} e_k^T appended. Then
  r_k = V_{k+1} (β_1 e_1 − T̂_k z_k),
where rank T̂_k = k, since β_2, β_3, ..., β_{k+1} > 0. Since V_{k+1} is no longer orthonormal, we cannot minimize ||r_k||_2 cheaply, but we can choose z_k such that ||β_1 e_1 − T̂_k z_k||_2 is minimal ("quasi-minimal"):
  ||r_k||_2 ≤ ||V_{k+1}||_2 ||β_1 e_1 − T̂_k z_k||_2 ≤ √(k+1) ||β_1 e_1 − T̂_k z_k||_2,
the last step because the columns of V_{k+1} have unit 2-norm. Choose z_k ∈ C^k such that
  ||β_1 e_1 − T̂_k z_k||_2 = min over z ∈ C^k of ||β_1 e_1 − T̂_k z||_2.
rank T̂_k = k implies a unique solution z_k. The resulting method is called the quasi-minimal residual method (QMR):
  x_k = x_0 + V_k z_k,  where z_k is the solution of the above least-squares problem.
Notes:
1. QMR fixes the erratic convergence behavior of BCG.
2. x_k can be easily updated from x_{k−1}; only a few vectors need to be stored.
3. QMR satisfies the same convergence results as GMRES, with an extra factor √(k+1). For example, if A = U Λ U^{−1} is diagonalizable,
  ||r_k||_2 / ||r_0||_2 ≤ √(k+1) κ_2(U) · min over q ∈ Π_k, q(0) = 1, of max over j of |q(λ_j)|.

2.14 Approximate Eigenvalues of General A

Let A ∈ C^{n×n} and r_0, c ∈ C^n, r_0, c ≠ 0. Let λ ∈ σ(A) = σ(A^T); then
  Av = λv,  A^T w = λw,
where v ≠ 0 and w ≠ 0 are called right and left eigenvectors, respectively. Note that T̃_k = D_k^{−1} T_k^T D_k is a similarity transformation, so σ(T̃_k) = σ(T_k^T) = σ(T_k); we use θ ∈ σ(T_k) = σ(T̃_k) as approximate eigenvalues of A. For θ ∈ σ(T_k):
  T_k z = θz, z ≠ 0;  T̃_k y = θy, y ≠ 0.
From A V_k ≈ V_k T_k, multiplying by z on the right: A V_k z ≈ V_k T_k z = θ V_k z, and thus Av ≈ θv, where v = V_k z. Similarly, A^T w ≈ θw, where w = W_k y. Note that if A = A^H, we have θ ∈ R.
2.15 Approximation of Transfer Functions

Motivation: dynamical systems. Let x: [0, ∞) → R^n; x(t), t ≥ 0, is called the state-space vector. Let u: [0, ∞) → R be a given input, and let y(t), t ≥ 0, be the unknown output. Consider the case of time-invariant linear systems:
  G ẋ(t) = C x(t) + b u(t),  t ≥ 0,
  y(t) = c^T x(t),
where ẋ = dx/dt. Assumptions: det(sG − C) ≢ 0 (it has at most n zeros s ∈ C); n is very large; C, G ∈ R^{n×n} are sparse; b, c ∈ R^n. Solving for x(t) and then computing y(t) is not viable. Can we use the data (G, C, b, c) to directly compute the input-output behavior u(t) → y(t)? Yes, at least in the frequency domain. Consider the Laplace transform of u:
  û(s) = ∫_0^∞ e^{−st} u(t) dt,  s ∈ C.
Similarly, x̂(s) and ŷ(s) are the Laplace transforms of x(t) and y(t), respectively. Fact:
  ∫_0^∞ e^{−st} ẋ(t) dt = s x̂(s),  s ∈ C  (for x(0) = 0).
Thus the Laplace transform of the system reads
  s G x̂(s) = C x̂(s) + b û(s),  ŷ(s) = c^T x̂(s),
and therefore
  ŷ(s) = c^T (sG − C)^{−1} b û(s) = H(s) û(s),  s ∈ C.
H(s) := c^T (sG − C)^{−1} b is called the transfer function of the system. [figure omitted]

Pick s_0 ∈ R such that s_0 G − C is nonsingular. Then
  H(s) = c^T (sG − C)^{−1} b
      = c^T ((s − s_0) G + (s_0 G − C))^{−1} b
      = c^T (I − (s − s_0) A)^{−1} r_0,
where
  r_0 = (s_0 G − C)^{−1} b ∈ R^n,  A = −(s_0 G − C)^{−1} G ∈ R^{n×n}.

Run the nonsymmetric Lanczos process (A, r_0, c) for k steps to obtain T_k, and define
  H_k(s) = (c^T r_0) · e_1^T (I_k − (s − s_0) T_k)^{−1} e_1.
Note that H_k(s) = c_k^T (s G_k − C_k)^{−1} b_k has the same structure as H(s), with
  b_k = (c^T r_0) e_1,  c_k = e_1,  G_k = −T_k,  C_k = −I_k − s_0 T_k.
Usually s = 2πif, where the frequency f ranges over 0 ≤ f_min ≤ f ≤ f_max, and one plots log |H(s)| versus log f.

We write
  (I − (s − s_0) A)^{−1} = Σ_{j=0}^∞ (s − s_0)^j A^j.
One can show that
  H_k(s) = H(s) + O((s − s_0)^{2k});
H_k is a Padé approximant of H, and k is optimal. Basic idea of the proof:
  H(s) = Σ_{j=0}^∞ c^T A^j r_0 (s − s_0)^j,  H_k(s) = Σ_{j=0}^∞ (c^T r_0) e_1^T T_k^j e_1 (s − s_0)^j,
and the claim is equivalent to
  c^T A^j r_0 = (c^T r_0) e_1^T T_k^j e_1  for all j = 0, 1, 2, ..., 2k − 1.

We can also use the Arnoldi process (A, r_0). After k steps we have β_1 = ||r_0||_2 and H_k = V_k^H A V_k. Use these to define an approximate transfer function
  H_k^{Arn}(s) = β_1 b_k^T (I_k − (s − s_0) H_k)^{−1} e_1,  b_k = V_k^T c ∈ R^k.
One can show that for the Arnoldi-based approximation
  H_k^{Arn}(s) = H(s) + O((s − s_0)^k),
i.e., it is only half as accurate as the Lanczos-based approximation.

