# 282 Class Note for CS 51500 at Purdue

These 8-page class notes were uploaded by an elite notetaker on Friday, February 6, 2015. They belong to a course at Purdue University taught by a professor in the fall. Since upload, they have received 31 views.

## Chapter 8: Krylov Subspace Methods

A more readable reference is the book by Lloyd N. Trefethen and David Bau.

- 8.1 Krylov Subspaces
- 8.2 Arnoldi Orthogonalization
- 8.3 Generalized Minimal Residual Method
- 8.4 Conjugate Gradient Method
- 8.5 Biconjugate Gradient Method
- 8.6 Biconjugate Gradient Stabilized Method

### 8.1 Krylov Subspaces

*Saad, Sections 6.1–6.2, omitting Proposition 6.3.*

In a Krylov subspace method,
$$x_i - x_0 \in \mathcal{K}_i(A, r_0) = \operatorname{span}\{r_0, Ar_0, \dots, A^{i-1} r_0\}.$$
We call $\mathcal{K}_i(A, r_0)$ a *Krylov subspace*. Equivalently,
$$x_i \in x_0 + \operatorname{span}\{r_0, Ar_0, \dots, A^{i-1} r_0\},$$
which we call a *linear manifold*. The exact solution is $A^{-1}b = x_0 + A^{-1}r_0$.

The *minimal polynomial* of $A$ is the polynomial $p(x) = x^m + c_{m-1}x^{m-1} + \cdots + c_1 x + c_0$ of lowest degree such that $p(A) = 0$. If $A$ is diagonalizable, $m$ is the number of distinct eigenvalues. To see this, let $A$ have distinct eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_m$ and define $p(x) = (x - \lambda_1)(x - \lambda_2)\cdots(x - \lambda_m)$. Writing $A = X \Lambda X^{-1}$, we have
$$p(A) = X(\Lambda - \lambda_1 I)(\Lambda - \lambda_2 I)\cdots(\Lambda - \lambda_m I)X^{-1} = 0.$$

If $A$ has no eigenvalue equal to zero, then $A^{-1}$ is a linear combination of $I, A, \dots, A^{m-1}$. To see this, write $A^m + c_{m-1}A^{m-1} + \cdots + c_1 A + c_0 I = 0$; since $c_0 = (-1)^m \lambda_1 \lambda_2 \cdots \lambda_m \neq 0$, multiplying by $A^{-1}$ and solving gives
$$A^{-1} = -\frac{1}{c_0}\left(A^{m-1} + c_{m-1}A^{m-2} + \cdots + c_1 I\right).$$
Hence $A^{-1}r_0 \in \operatorname{span}\{r_0, Ar_0, \dots, A^{m-1}r_0\}$, so $A^{-1}b \in x_0 + \operatorname{span}\{r_0, Ar_0, \dots, A^{m-1}r_0\}$, and $x_m = A^{-1}b$. This is the *finite termination property*. In practice we do not go this far.

**Review questions**

1. What is a Krylov subspace?
2. Is $x_0 + \operatorname{span}\{r_0, Ar_0, \dots, A^{i-1}r_0\}$ a Krylov subspace?
3. Which of the following methods choose their search directions from a Krylov subspace: cyclic coordinate descent, steepest descent? For each that does, what is its Krylov subspace?
4. What is the minimal polynomial of a square matrix $A$?
5. What is the finite termination property?

**Exercise**

1. Use the characteristic polynomial of a nonsingular square matrix $A$ to show that $A^{-1}$ can be expressed as a polynomial of degree at most $n-1$ in $A$.
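The finite termination argument can be checked concretely. The following sketch (the 2-by-2 matrix is made-up test data, not from the notes) uses the Cayley–Hamilton theorem, $A^2 - \operatorname{tr}(A)A + \det(A)I = 0$, to express $A^{-1}$ as a degree-1 polynomial in $A$, in the spirit of the exercise above:

```java
// Sketch illustrating the exercise above on a 2x2 example (matrix made up).
// Cayley-Hamilton: A^2 - tr(A) A + det(A) I = 0, so for nonsingular A,
// A^{-1} = (tr(A) I - A) / det(A): a polynomial of degree 1 in A.
public class InverseAsPolynomial {
    public static void main(String[] args) {
        double[][] a = {{4, 1}, {1, 3}};
        double tr = a[0][0] + a[1][1];
        double det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
        double[][] inv = {                       // (tr*I - A) / det
            {(tr - a[0][0]) / det, -a[0][1] / det},
            {-a[1][0] / det, (tr - a[1][1]) / det}
        };
        double err = 0;                          // check that A * inv == I
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++) {
                double s = 0;
                for (int k = 0; k < 2; k++) s += a[i][k] * inv[k][j];
                err = Math.max(err, Math.abs(s - (i == j ? 1 : 0)));
            }
        System.out.println(err < 1e-14 ? "PASSED" : "FAILED");
    }
}
```

For an $n \times n$ matrix the same manipulation of the characteristic polynomial yields a polynomial of degree $n-1$, which is what the exercise asks you to show.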
### 8.2 Arnoldi Orthogonalization

*Saad, Section 6.3.*

For numerical stability we incrementally construct orthonormal bases $q_1, q_2, \dots, q_k$ for the Krylov subspaces. However, rather than applying the Gram–Schmidt process to the sequence $r_0, Ar_0, \dots, A^{k-1}r_0$, we use what is known as the *Arnoldi process*. It is based on the fact that each Krylov subspace can be obtained from the orthonormal basis of the Krylov subspace of one dimension less using the spanning set $q_1, q_2, \dots, q_k, Aq_k$. In other words, a new direction in the expanded Krylov subspace can be created by multiplying the most recent basis vector $q_k$ by $A$, rather than by multiplying $A^{k-1}r_0$ by $A$. We remove from this new direction $Aq_k$ its orthogonal projection onto $q_1, q_2, \dots, q_k$, obtaining the direction
$$v_{k+1} = Aq_k - q_1 h_{1k} - q_2 h_{2k} - \cdots - q_k h_{kk},$$
where the coefficients $h_{jk}$ are determined by the orthogonality conditions. This computation should be performed using the modified Gram–Schmidt iteration:

```
t = A q_k
for j = 1, 2, ..., k do
    h_jk = q_j^T t
    t = t - q_j h_jk
```

Normalization produces $q_{k+1} = v_{k+1}/h_{k+1,k}$. The coefficients $h_{ij}$ have been labeled so that we can write
$$A Q_k = Q_{k+1} \bar{H}_k, \qquad (8.1)$$
where $Q_k = [q_1, q_2, \dots, q_k]$ and $\bar{H}_k$ is a $(k+1)$-by-$k$ upper Hessenberg matrix whose $(i,j)$th element, for $i \le j+1$, is $h_{ij}$. Given the basis $q_1, q_2, \dots, q_k$, we can express any element of the $k$-dimensional Krylov subspace as $Q_k y$ for some $k$-vector $y$.

**Review questions**

1. In the Arnoldi process for orthogonalizing a Krylov subspace $\mathcal{K}_k(A, r_0)$, how is each new basis vector $q_{k+1}$ produced?
2. The relationship among the first $k+1$ vectors produced by the Arnoldi process for $\mathcal{K}_k(A, r_0)$ can be summarized as $AQ_k = Q_{k+1}\bar{H}_k$, where $Q_k$ is composed of the first $k$ vectors. What are the dimensions of $\bar{H}_k$, and what other property does it have? What is $Q_k^T A Q_k$ in terms of $\bar{H}_k$?
3. Let $Q_k$ be composed of the first $k$ vectors of the Arnoldi process for $\mathcal{K}_k(A, r_0)$. What is $r_0$ in terms of $Q_k$ and $\|r_0\|$?
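As a concrete check of the Arnoldi relations, here is a hedged sketch (the 4-by-4 matrix and starting vector are made-up test data) that runs $k = 3$ steps of the process with modified Gram–Schmidt and then verifies both eq. (8.1) and the orthonormality of the computed basis:

```java
// Hedged sketch of the Arnoldi process with modified Gram-Schmidt.
// The matrix A and starting vector r0 are made-up test data.
public class Arnoldi {
    static int n = 4, k = 3;
    static double[][] A = {
        {2, 1, 0, 0},
        {0, 3, 1, 0},
        {1, 0, 4, 1},
        {0, 1, 0, 5}
    };

    static double[] matvec(double[] x) {
        double[] y = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) y[i] += A[i][j] * x[j];
        return y;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    public static void main(String[] args) {
        double[][] Q = new double[k + 1][n];   // rows are q_1 .. q_{k+1}
        double[][] H = new double[k + 1][k];   // (k+1)-by-k upper Hessenberg
        double[] r0 = {1, 0, 0, 1};
        double beta = Math.sqrt(dot(r0, r0));
        for (int i = 0; i < n; i++) Q[0][i] = r0[i] / beta;  // q_1 = r0/||r0||

        for (int kk = 0; kk < k; kk++) {
            double[] t = matvec(Q[kk]);               // t = A q_k
            for (int j = 0; j <= kk; j++) {           // modified Gram-Schmidt
                H[j][kk] = dot(Q[j], t);
                for (int i = 0; i < n; i++) t[i] -= Q[j][i] * H[j][kk];
            }
            H[kk + 1][kk] = Math.sqrt(dot(t, t));     // normalization
            for (int i = 0; i < n; i++) Q[kk + 1][i] = t[i] / H[kk + 1][kk];
        }

        // Verify A Q_k = Q_{k+1} Hbar_k (eq. 8.1), column by column.
        double err = 0;
        for (int c = 0; c < k; c++) {
            double[] lhs = matvec(Q[c]);
            for (int i = 0; i < n; i++) {
                double rhs = 0;
                for (int j = 0; j <= k; j++) rhs += Q[j][i] * H[j][c];
                err = Math.max(err, Math.abs(lhs[i] - rhs));
            }
        }
        // Verify orthonormality of q_1 .. q_{k+1}.
        double orth = 0;
        for (int a = 0; a <= k; a++)
            for (int b = 0; b <= k; b++)
                orth = Math.max(orth, Math.abs(dot(Q[a], Q[b]) - (a == b ? 1 : 0)));
        System.out.println((err < 1e-12 && orth < 1e-12) ? "PASSED" : "FAILED");
    }
}
```

Storing basis vectors as rows of `Q` keeps the indexing simple here; a production implementation would also have to handle breakdown, $h_{k+1,k} = 0$, which signals that the Krylov subspace has stopped growing.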
### 8.3 Generalized Minimal Residual Method

*Saad, Section 6.5.*

GMRES (generalized minimal residuals) is a popular iterative method for nonsymmetric matrices $A$. It is based on the principle of minimizing the norm of the residual $\|b - Ax\|_2$, since the energy norm is available only for an s.p.d. matrix. It is, however, not a gradient method; rather, it chooses for the correction $x_k - x_0$ that element of the Krylov subspace $\operatorname{span}\{r_0, Ar_0, \dots, A^{k-1}r_0\}$ which minimizes the 2-norm of the residual. For numerical stability we construct orthonormal bases $q_1, q_2, \dots, q_k$ for the Krylov subspaces using the Arnoldi process. We can express any element of the $k$-dimensional Krylov subspace as $Q_k y$ for some $k$-vector $y$. Thus the minimization problem becomes
$$\min_y \|b - A(x_0 + Q_k y)\|_2.$$
This is a linear least squares problem involving $k$ unknowns and $n$ equations. The number of equations can be reduced to $k+1$ by using eq. (8.1):
$$\|b - A(x_0 + Q_k y)\|_2 = \|r_0 - A Q_k y\|_2 = \|\beta q_1 - Q_{k+1}\bar{H}_k y\|_2 = \|Q_{k+1}(\beta e_1 - \bar{H}_k y)\|_2 = \|\beta e_1 - \bar{H}_k y\|_2,$$
where $\beta = \|r_0\|$.

**Review questions**

1. For GMRES applied to $Ax = b$, where $A$ is $n$ by $n$, from what subset of $\mathbb{R}^n$ does one choose the approximation $x_k$, given an initial guess $x_0$?
2. How many matrix–vector multiplications are required for each iteration of GMRES?
3. What is the optimality property of GMRES?
4. For the GMRES solution from a $k$-dimensional subspace, one solves a least squares problem of reduced dimension. What are the dimensions of the coefficient matrix in this problem, and what is its special property?
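For intuition, the smallest instance of the GMRES least squares problem can be worked by hand: with $k = 1$ the correction is $\alpha r_0$, and the minimizer of $\|r_0 - \alpha A r_0\|_2$ is $\alpha = (Ar_0)^T r_0 / \|Ar_0\|_2^2$. The sketch below (test data made up, not from the notes) performs this single step and confirms that the residual norm decreases:

```java
// Hedged sketch: the k = 1 case of GMRES reduces to a scalar least squares
// problem, minimize over alpha of || r0 - alpha * A r0 ||_2, with solution
// alpha = (A r0)^T r0 / || A r0 ||^2. Test matrix and rhs are made up.
public class Gmres1 {
    public static void main(String[] args) {
        int n = 3;
        double[][] A = {{4, 1, 0}, {2, 5, 1}, {0, 1, 3}};  // nonsymmetric
        double[] b = {1, 2, 3};
        double[] x0 = {0, 0, 0};

        double[] r0 = new double[n];            // r0 = b - A x0
        for (int i = 0; i < n; i++) {
            r0[i] = b[i];
            for (int j = 0; j < n; j++) r0[i] -= A[i][j] * x0[j];
        }
        double[] ar = new double[n];            // A r0
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) ar[i] += A[i][j] * r0[j];

        double num = 0, den = 0;
        for (int i = 0; i < n; i++) { num += ar[i] * r0[i]; den += ar[i] * ar[i]; }
        double alpha = num / den;               // optimal step

        double before = 0, after = 0;           // residual norms, old vs. new
        for (int i = 0; i < n; i++) {
            before += r0[i] * r0[i];
            double ri = r0[i] - alpha * ar[i];
            after += ri * ri;
        }
        System.out.printf("residual norm: %.4f -> %.4f%n",
                Math.sqrt(before), Math.sqrt(after));
        System.out.println(after < before ? "PASSED" : "FAILED");
    }
}
```

For $k > 1$ one solves the $(k+1) \times k$ Hessenberg least squares problem $\min_y \|\beta e_1 - \bar{H}_k y\|_2$ instead, typically with Givens rotations.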
### 8.4 Conjugate Gradient Method

*Saad, Sections 6.7, 6.11.3.*

Considered here is the case where $A$ is symmetric. Let $H_k$ be all but the last row of $\bar{H}_k$. Then $Q_k^T A Q_k = H_k$, which is square and upper Hessenberg. Since $A$ is symmetric, $H_k$ is tridiagonal, and we write $T_k = H_k$. Clearly, it is unnecessary to compute elements of $T_k$ known to be zero, thus reducing the cost from $O(k^2)$ to $O(k)$.

Assuming $A$ is also positive definite, we choose $x_k$ to be that element of $x_0 + \mathcal{K}_k(A, r_0)$ which is closest to $A^{-1}b$ in energy norm. Hence each iterate $x_k$ has the following optimality property:
$$\|x_k - A^{-1}b\|_A \le \|x - A^{-1}b\|_A \quad \text{for all } x \in x_0 + \mathcal{K}_k(A, r_0).$$
In Section 7.3 this is shown to be equivalent to making the residual $b - Ax_k$ orthogonal to $\mathcal{K}_k(A, r_0)$. Writing $x_k = x_0 + Q_k y$, this becomes
$$0 = Q_k^T(b - Ax_k) = Q_k^T(r_0 - A Q_k y) \;\Longrightarrow\; Q_k^T A Q_k\, y = Q_k^T r_0 = \beta e_1,$$
i.e., $T_k y = \beta e_1$, a tridiagonal system to solve for $y$.

Although the conjugate gradient method can be derived from the Lanczos orthogonalization described above, it is more usual to start from the method of steepest descent. The conjugate gradient method constructs the direction $p_i$ from the gradients $r_0, r_1, \dots, r_i$. These directions are conjugate, $p_i^T A p_j = 0$ for $i \neq j$; that is, orthogonal in the energy inner product. We skip the details and simply state the result:

```
x_0 = initial guess
r_0 = b - A x_0
p_0 = r_0
for i = 0, 1, 2, ... do
    alpha_i    = r_i^T r_i / (p_i^T A p_i)
    x_{i+1}    = x_i + alpha_i p_i
    r_{i+1}    = r_i - alpha_i A p_i
    beta_{i+1} = r_{i+1}^T r_{i+1} / (r_i^T r_i)
    p_{i+1}    = r_{i+1} + beta_{i+1} p_i
```

The cost per iteration is 1 matrix–vector product ($Ap_i$), 2 vector–vector products ($r_{i+1}^T r_{i+1}$ and $p_i^T A p_i$), and 3 vector + scalar·vector updates ($x_{i+1}$, $r_{i+1}$, $p_{i+1}$), for a total cost of 1 matrix–vector product plus $5n$ multiplications. We note that $r_i \in \mathcal{K}_{i+1}(A, r_0)$ and $p_i \in \mathcal{K}_{i+1}(A, r_0)$. Moreover, it can be shown that the gradients $r_0, r_1, \dots, r_{i-1}$ constitute an orthogonal basis for the Krylov subspace $\mathcal{K}_i(A, r_0)$.

The convergence rate satisfies
$$\|x_k - A^{-1}b\|_A \le 2\left(\frac{\sqrt{\kappa_2(A)} - 1}{\sqrt{\kappa_2(A)} + 1}\right)^k \|x_0 - A^{-1}b\|_A.$$
To reduce the energy norm of the error by a factor $\varepsilon$ requires about $\tfrac{1}{2}\sqrt{\kappa_2(A)}\,\log\tfrac{2}{\varepsilon}$ iterations.

**Review questions**

1. What does it mean for the directions of the conjugate gradient method to be conjugate?
2. What is the optimality property of the conjugate gradient method?
3. How many matrix–vector multiplications are required for each iteration of CG?
4. On what property of the matrix does the rate of convergence of conjugate gradient depend?

**Exercise 1.** Given $x_i, r_i, p_i$, one conjugate gradient iteration is given by

```
alpha_i    = r_i^T r_i / (p_i^T A p_i)
x_{i+1}    = x_i + alpha_i p_i
r_{i+1}    = r_i - alpha_i A p_i
beta_{i+1} = r_{i+1}^T r_{i+1} / (r_i^T r_i)
p_{i+1}    = r_{i+1} + beta_{i+1} p_i
```

where $A$ is assumed to be symmetric positive definite. Suppose you are given

```java
static void ax(float[] x, float[] y)
```

which, with `n = x.length`, given `x` returns `y` where $y_i = a_{i,0}x_0 + \cdots + a_{i,n-1}x_{n-1}$. You should complete the following method, which performs one conjugate gradient iteration, overwriting the old values of $x_i$, $r_i$, $r_i^T r_i$, and $p_i$ with the new values. Your method must use no temporary arrays except for one `float` array of dimension `n`, and it must do a minimum amount of floating-point arithmetic.

```java
static void cg(float[] x, float[] r, float[] rTr, float[] p)
```

**Exercise 2.** Solve the following system using the conjugate gradient method, with $x^{(0)} = 0$:
$$\begin{bmatrix} 4 & 1 & 0 \\ 1 & 4 & 1 \\ 0 & 1 & 4 \end{bmatrix} x = \begin{bmatrix} 2 \\ 6 \\ 2 \end{bmatrix}.$$
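A minimal CG sketch, in double precision for clarity (the notes' exercise asks for an in-place `float` version instead). It solves a small s.p.d. tridiagonal test system; on an s.p.d. $n \times n$ matrix, CG terminates in at most $n$ iterations in exact arithmetic:

```java
// Compact conjugate gradient sketch (dense matvec for clarity; a real code
// would exploit sparsity). The 3x3 s.p.d. tridiagonal system is test data.
public class ConjugateGradient {
    static double[][] A = {{4, 1, 0}, {1, 4, 1}, {0, 1, 4}};  // s.p.d.

    static double[] matvec(double[] x) {
        int n = x.length;
        double[] y = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) y[i] += A[i][j] * x[j];
        return y;
    }

    public static void main(String[] args) {
        int n = 3;
        double[] b = {2, 6, 2};
        double[] x = new double[n];            // x0 = 0
        double[] r = b.clone();                // r0 = b - A x0 = b
        double[] p = r.clone();                // p0 = r0
        double rTr = 0;
        for (int i = 0; i < n; i++) rTr += r[i] * r[i];

        for (int iter = 0; iter < n; iter++) {
            if (rTr < 1e-24) break;            // converged; avoids 0/0 below
            double[] ap = matvec(p);
            double pAp = 0;
            for (int i = 0; i < n; i++) pAp += p[i] * ap[i];
            double alpha = rTr / pAp;          // alpha_i = r^T r / p^T A p
            for (int i = 0; i < n; i++) {
                x[i] += alpha * p[i];          // x_{i+1} = x_i + alpha p_i
                r[i] -= alpha * ap[i];         // r_{i+1} = r_i - alpha A p_i
            }
            double rTrNew = 0;
            for (int i = 0; i < n; i++) rTrNew += r[i] * r[i];
            double beta = rTrNew / rTr;        // beta_{i+1}
            for (int i = 0; i < n; i++) p[i] = r[i] + beta * p[i];
            rTr = rTrNew;
        }

        double resid = 0;                      // check || b - A x ||_inf
        double[] ax = matvec(x);
        for (int i = 0; i < n; i++) resid = Math.max(resid, Math.abs(b[i] - ax[i]));
        System.out.printf("x = (%.6f, %.6f, %.6f)%n", x[0], x[1], x[2]);
        System.out.println(resid < 1e-10 ? "PASSED" : "FAILED");
    }
}
```

Note the per-iteration cost matches the count above: one matrix–vector product, two inner products, three vector updates.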
### 8.5 Biconjugate Gradient Method

*Saad, Section 7.3.1.*

Recall that for a symmetric matrix $A$ one can find, by means of a direct calculation, an orthogonal similarity transformation such that $Q^T A Q = T$ is tridiagonal, and that for a general matrix one can get an upper Hessenberg matrix $Q^T A Q = H$. In fact, for a general matrix it is possible to find, by means of a direct calculation, a similarity transformation such that $V^{-1} A V = T$ is tridiagonal, if one requires only that $V$ be nonsingular. Letting $W^T = V^{-1}$, we can write this as two conditions:
$$W^T A V = T, \qquad W^T V = I.$$
The columns of $V = [v_1, v_2, \dots, v_n]$ and $W = [w_1, w_2, \dots, w_n]$ are said to form a *biorthogonal system*, in that $w_i^T v_j = \delta_{ij}$.

Lanczos orthogonalization incrementally builds up the relations $Q_k^T A Q_k = T_k$, which if carried to completion would yield $Q^T A Q = T$. Similarly, *Lanczos biorthogonalization* incrementally constructs $V_k$, $W_k$, $T_k$ such that
$$W_k^T A V_k = T_k, \qquad W_k^T V_k = I.$$
These relations can be exploited to obtain an approximate solution $x_k = x_0 + V_k y$ such that $W_k^T(b - Ax_k) = 0$, which simplifies to the tridiagonal system $T_k y = W_k^T r_0$. The columns of $V_k$ are a basis for $\mathcal{K}_k(A, v_1)$, and the columns of $W_k$ are a basis for $\mathcal{K}_k(A^T, w_1)$.

The method just described can be shown to be equivalent to the biconjugate gradient method, which is given below without a derivation:

```
x_0 = initial guess
r_0 = b - A x_0
choose r*_0 so that r*_0^T r_0 != 0
p_0 = r_0;  p*_0 = r*_0
for i = 0, 1, 2, ... do
    alpha_i    = r*_i^T r_i / (p*_i^T A p_i)
    x_{i+1}    = x_i + alpha_i p_i
    r_{i+1}    = r_i - alpha_i A p_i
    r*_{i+1}   = r*_i - alpha_i A^T p*_i
    beta_{i+1} = r*_{i+1}^T r_{i+1} / (r*_i^T r_i)
    p_{i+1}    = r_{i+1} + beta_{i+1} p_i
    p*_{i+1}   = r*_{i+1} + beta_{i+1} p*_i
```

Convergence of BiCG is slower and more erratic than GMRES. Also, there is the possibility of breakdown when a denominator becomes zero.

**Review questions**

1. How close can one come to diagonalizing a general matrix by a similarity transformation with a finite number of arithmetic operations?
2. What does it mean for the columns of $V = [v_1, v_2, \dots, v_n]$ and $W = [w_1, w_2, \dots, w_n]$ to form a biorthogonal system?
3. Lanczos biorthogonalization applied to an $n$ by $n$ matrix $A$ incrementally constructs matrices $V_k$, $W_k$ of dimension $n$ by $k$ and $T_k$ of dimension $k$ by $k$. What relations are satisfied by these matrices, and what is special about $T_k$?
4. How many matrix–vector multiplications are required for each iteration of BiCG applied to a general system $Ax = b$, and with which matrices?
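The BiCG iteration above can be sketched as follows (the nonsymmetric test matrix is made up, and the run assumes no breakdown; in exact arithmetic the method terminates within $n$ steps for generic data). Note the two matrix–vector products per iteration, one with $A$ and one with $A^T$:

```java
// Hedged sketch of BiCG on a made-up nonsymmetric test system. The shadow
// quantities r*, p* are updated with A^T; the primal quantities with A.
public class BiCG {
    static double[][] A = {{4, 1, 0}, {2, 5, 1}, {0, 1, 3}};
    static int n = 3;

    static double[] mul(double[][] m, double[] x, boolean transpose) {
        double[] y = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                y[i] += (transpose ? m[j][i] : m[i][j]) * x[j];
        return y;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < n; i++) s += a[i] * b[i];
        return s;
    }

    public static void main(String[] args) {
        double[] b = {1, 2, 3};
        double[] x = new double[n];              // x0 = 0
        double[] r = b.clone();                  // r0 = b - A x0
        double[] rs = r.clone();                 // r*_0 chosen so r*^T r != 0
        double[] p = r.clone(), ps = rs.clone();
        double rho = dot(rs, r);                 // r*_i^T r_i

        for (int iter = 0; iter < n && Math.abs(rho) > 1e-28; iter++) {
            double[] ap = mul(A, p, false);      // A p_i
            double[] atps = mul(A, ps, true);    // A^T p*_i
            double alpha = rho / dot(ps, ap);
            for (int i = 0; i < n; i++) {
                x[i] += alpha * p[i];
                r[i] -= alpha * ap[i];
                rs[i] -= alpha * atps[i];
            }
            double rhoNew = dot(rs, r);
            double beta = rhoNew / rho;
            for (int i = 0; i < n; i++) {
                p[i] = r[i] + beta * p[i];
                ps[i] = rs[i] + beta * ps[i];
            }
            rho = rhoNew;
        }

        double[] ax = mul(A, x, false);          // check || b - A x ||_inf
        double resid = 0;
        for (int i = 0; i < n; i++) resid = Math.max(resid, Math.abs(b[i] - ax[i]));
        System.out.println(resid < 1e-6 ? "PASSED" : "FAILED");
    }
}
```

The guard on `rho` is a crude protection against the breakdown mentioned above; practical codes restart or switch methods when a denominator nears zero.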
5. How does the convergence of BiCG compare to that of GMRES?

### 8.6 Biconjugate Gradient Stabilized Method

*Saad, Section 7.4.2.*

BiCGSTAB has significantly smoother convergence behavior than BiCG.

**Review question**

1. How does BiCGSTAB compare to BiCG? Be precise.
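The notes do not state the BiCGSTAB iteration itself; the sketch below is a reconstruction from memory of the algorithm in Saad, Section 7.4.2 (treat the details as assumptions, and the test matrix as made up). The key differences from BiCG: no products with $A^T$, two products with $A$ per iteration, and a stabilization parameter $\omega$ chosen to minimize the intermediate residual:

```java
// Hedged BiCGSTAB sketch (algorithm recalled from Saad Sec. 7.4.2; test data
// made up). Avoids A^T entirely, using two products with A per iteration.
public class BiCGStab {
    static double[][] A = {{4, 1, 0}, {2, 5, 1}, {0, 1, 3}};
    static int n = 3;

    static double[] matvec(double[] x) {
        double[] y = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) y[i] += A[i][j] * x[j];
        return y;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < n; i++) s += a[i] * b[i];
        return s;
    }

    public static void main(String[] args) {
        double[] b = {1, 2, 3};
        double[] x = new double[n];              // x0 = 0
        double[] r = b.clone();                  // r0 = b - A x0
        double[] rs = r.clone();                 // fixed shadow vector r*_0
        double[] p = r.clone();

        for (int iter = 0; iter < 2 * n; iter++) {
            double rho = dot(rs, r);
            if (Math.abs(rho) < 1e-28 || Math.sqrt(dot(r, r)) < 1e-12) break;
            double[] ap = matvec(p);
            double alpha = rho / dot(rs, ap);
            double[] s = new double[n];          // s = r - alpha A p
            for (int i = 0; i < n; i++) s[i] = r[i] - alpha * ap[i];
            if (dot(s, s) < 1e-24) {             // converged at the half-step
                for (int i = 0; i < n; i++) x[i] += alpha * p[i];
                break;
            }
            double[] as = matvec(s);
            double omega = dot(as, s) / dot(as, as);  // stabilization parameter
            for (int i = 0; i < n; i++) {
                x[i] += alpha * p[i] + omega * s[i];
                r[i] = s[i] - omega * as[i];
            }
            double beta = (dot(rs, r) / rho) * (alpha / omega);
            for (int i = 0; i < n; i++) p[i] = r[i] + beta * (p[i] - omega * ap[i]);
        }

        double[] ax = matvec(x);                 // check || b - A x ||_inf
        double resid = 0;
        for (int i = 0; i < n; i++) resid = Math.max(resid, Math.abs(b[i] - ax[i]));
        System.out.println(resid < 1e-6 ? "PASSED" : "FAILED");
    }
}
```

The $\omega$ step is a one-dimensional residual minimization (the same computation as a single GMRES step), which is what smooths the convergence relative to BiCG.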
