# Numerical Linear Algebra, CS 51500 (Purdue)

This 12-page set of class notes was uploaded by Nick Rowe on Saturday, September 19, 2015. The notes belong to CS 51500 at Purdue University, taught by Staff in Fall. Since upload, they have received 69 views. For similar materials see /class/208081/cs-51500-purdue-university in Computer Science at Purdue University.


## Chapter 7: Petrov–Galerkin Methods

- 7.1 Energy Norm Minimization
- 7.2 Residual Norm Minimization
- 7.3 General Projection Methods

### 7.1 Energy Norm Minimization

*Saad: Sections 5.3, 5.2.1a*

- 7.1.1 Methods based on approximate minimization
- 7.1.2 Steepest descent

We assume that $A$ is symmetric positive definite: $A = Q \Lambda Q^T$, $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$, $\lambda_i > 0$. Unlike SOR, there are no parameters to choose.

#### 7.1.1 Methods based on approximate minimization

Minimization methods minimize some objective function $\phi(x) = \phi(x_1, x_2, \ldots, x_n)$. Possible objective functions:

(i) norm of the error, $\|x - A^{-1}b\|_2$;

(ii) norm of the residual, $\|A(x - A^{-1}b)\|_2 = \|Ax - b\|_2$;

(iii) energy norm of the error, $\|A^{1/2}(x - A^{-1}b)\|_2 = \left((x - A^{-1}b)^T A (x - A^{-1}b)\right)^{1/2}$, where $A^{1/2} = Q \Lambda^{1/2} Q^T$ is the unique symmetric positive definite square root of $A$.

Each of these is a weighted 2-norm of the rotated error:

$\|A(x - A^{-1}b)\|_2 = \|\Lambda Q^T (x - A^{-1}b)\|_2$, weights $\lambda_1, \lambda_2, \ldots, \lambda_n$ (least desirable);

$\|A^{1/2}(x - A^{-1}b)\|_2 = \|\Lambda^{1/2} Q^T (x - A^{-1}b)\|_2$, weights $\lambda_1^{1/2}, \lambda_2^{1/2}, \ldots, \lambda_n^{1/2}$;

$\|x - A^{-1}b\|_2 = \|Q^T (x - A^{-1}b)\|_2$, weights $1, 1, \ldots, 1$ (most desirable).

We denote the energy norm by $\|w\|_A = (w^T A w)^{1/2}$. In some applications it measures the square root of energy or power.

We would like to minimize $\|x - A^{-1}b\|_A$, but any minimization method would need to know $A^{-1}b$. However,

$$\|x - A^{-1}b\|_A^2 = (x - A^{-1}b)^T A (x - A^{-1}b) = x^T A x - 2 x^T b + b^T A^{-1} b,$$

and the last term is constant, so this attains its minimum at the same value of $x$ as does

$$\phi(x) = \tfrac{1}{2} x^T A x - x^T b,$$

which is cheap to compute for given $x$.

Note the unit circle in the energy norm: $\|w\|_A = 1 \iff \|\Lambda^{1/2} Q^T w\|_2 = 1$. Changing variable $u = \Lambda^{1/2} Q^T w$, i.e., $w = Q \Lambda^{-1/2} u$, shows that the energy-norm unit circle is the image of the 2-norm unit circle under $\Lambda^{-1/2}$ (a stretch) followed by $Q$ (a rotation): an ellipse with

$$\text{elongation} = \left(\lambda_{\max}/\lambda_{\min}\right)^{1/2} = \kappa_2(A)^{1/2}, \qquad \kappa_2(A) = \|A\|_2 \|A^{-1}\|_2 = \lambda_{\max}/\lambda_{\min}.$$

(Figure: the 2-norm unit circle mapped by $\Lambda^{-1/2}$ and then by $Q$ into an ellipse.)

The contours of $\phi(x)$ are curves of equal error in the energy norm.

For the 5-point difference operator on the unit square ($h = 1/N$),

$$(Au)_i = \frac{1}{h^2}\left(4u_i - u_{i-1} - u_{i+1} - u_{i-N} - u_{i+N}\right),$$

the eigenvalues are

$$\lambda_{p,q} = \left(2N \sin\frac{p\pi}{2N}\right)^2 + \left(2N \sin\frac{q\pi}{2N}\right)^2, \qquad p, q = 1, 2, \ldots, N-1,$$

so $\lambda_{\min} \approx 2\pi^2$, $\lambda_{\max} < 8N^2$, and $\kappa_2(A) \approx \left(\frac{2N}{\pi}\right)^2$.

Generic minimization method:

    x_0 = initial guess        (subscripts index iterates, not components)
    for i = 0, 1, 2, ... do
        choose a direction p_i        (search direction)
        choose a distance alpha_i
        x_{i+1} = x_i + alpha_i p_i

One could choose $\alpha_i$ to minimize
$\phi(x_i + \alpha_i p_i)$. We have

$$0 = \frac{d}{d\alpha}\,\phi(x_i + \alpha p_i) = \frac{d}{d\alpha}\left[\tfrac{1}{2}(x_i + \alpha p_i)^T A (x_i + \alpha p_i) - (x_i + \alpha p_i)^T b\right] = \alpha\, p_i^T A p_i - p_i^T (b - A x_i),$$

hence

$$\alpha_i = \frac{p_i^T r_i}{p_i^T A p_i}, \qquad r_i = b - A x_i \ \text{(the residual)},$$

and $p_i^T r_{i+1} = 0$. Note that $r_{i+1} = r_i - \alpha_i A p_i$, which is very cheap to form because we need to compute $A p_i$ anyway to get $\alpha_i$. Also note

$$x_{i+1} = x_i + \frac{p_i p_i^T}{p_i^T A p_i}\, r_i;$$

the matrix $p_i p_i^T / (p_i^T A p_i)$ is a rank-one approximation to $A^{-1}$.

**Cyclic coordinate descent.** Use search directions $e_1, e_2, \ldots, e_n, e_1, e_2, \ldots, e_n, \ldots$. For the step in direction $e_i$,

$$\alpha = \frac{e_i^T r_{\mathrm{old}}}{e_i^T A e_i} = \frac{(r_{\mathrm{old}})_i}{a_{ii}}, \qquad x_{\mathrm{new}} = x_{\mathrm{old}} + \alpha\, e_i.$$

For descent in the $i$th direction, only the $i$th component of the residual is computed and only the $i$th component of $x$ is updated. This is part of a Gauss–Seidel sweep: the relaxation of the $i$th unknown. Thus $n$ steps of cyclic coordinate descent $\equiv$ 1 sweep of Gauss–Seidel.

(Figure: coordinate-descent path across the level curves of $\phi$.)

You can see why Gauss–Seidel always converges if $A$ is symmetric positive definite. Also, you can see the reason for overrelaxation, i.e., for multiplying the correction by $\omega > 1$: $\phi(x_{\mathrm{old}} + \alpha p_{\mathrm{old}})$ is a quadratic in $\alpha$ minimized at $\alpha_i$, so the objective function is reduced by any step $\omega \alpha_i$ with $0 < \omega < 2$.

#### 7.1.2 Steepest descent

What is the direction of steepest descent? Consider $\phi(x_{\mathrm{old}} + \varepsilon p)$, where $\varepsilon > 0$ is infinitesimal and $\|p\|_2$ is fixed. We have

$$\phi(x_{\mathrm{old}} + \varepsilon p) = \tfrac{1}{2} x_{\mathrm{old}}^T A x_{\mathrm{old}} - x_{\mathrm{old}}^T b - \varepsilon\, p^T (b - A x_{\mathrm{old}}) + O(\varepsilon^2).$$

We get the greatest decrease in $\phi$, i.e., the greatest value of $p^T(b - A x_{\mathrm{old}}) = p^T r_{\mathrm{old}}$ for fixed $\|p\|_2$, by choosing $p = b - A x_{\mathrm{old}} = r_{\mathrm{old}}$; that is, the residual points in the direction of steepest descent.

    x_0 = initial guess
    r_0 = b - A x_0
    for i = 0, 1, 2, ... do
        alpha_i = (r_i^T r_i) / (r_i^T A r_i)
        x_{i+1} = x_i + alpha_i r_i
        r_{i+1} = r_i - alpha_i A r_i

Convergence can be slow if $\kappa_2(A)$ is large. The trouble with these methods is that when we minimize in direction $p_i$, we lose the minimization in directions $p_0, p_1, \ldots, p_{i-1}$. This can be avoided if we use conjugate directions: $p_i^T A p_j = 0$, $i \neq j$. The conjugate gradient method constructs conjugate directions from the gradients $r_0, r_1, \ldots, r_i$. It is the subject of Section 8.4.

Review questions

1. Define the energy norm associated with a symmetric positive definite matrix $A$.
2. Solving $Ax = b$, where $A$ is symmetric
positive definite, is equivalent to minimizing what readily computable scalar function of $x$? Relate this to the energy norm of the error.
3. What is the spectral condition number $\kappa_2(A)$ of a symmetric positive definite matrix $A$?
4. Given a search direction $p_i$ and an approximation $x_i$ to the value $\hat{x}$ that minimizes $\phi(x) = \tfrac{1}{2} x^T A x - x^T b$ with $A$ symmetric positive definite, what is the best choice for $x_{i+1}$?
5. How are search directions chosen in cyclic coordinate descent?
6. What known method do we get if we apply cyclic coordinate descent to the objective function $\phi(x) = \tfrac{1}{2} x^T A x - x^T b$, where $A$ is symmetric positive definite?
7. For the objective function $\phi(x) = \tfrac{1}{2} x^T A x - x^T b$, where $A$ is symmetric positive definite, what is the direction of steepest descent?
8. On what property of the matrix does the rate of convergence of steepest descent depend?

(Footnote: often $\langle x, y \rangle_A = x^T A y$ is called the energy inner product.)

Exercises

1. Let

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \qquad b = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}.$$

Plotted below are the level curves (contours) of the function $\phi(x) = \tfrac{1}{2} x^T A x - x^T b$, where $x = (x_1, x_2)^T$ is variable. Draw the straight line $a_{11} x_1 + a_{12} x_2 = b_1$ and the straight line $a_{21} x_1 + a_{22} x_2 = b_2$. Explain in words how you constructed these two lines. Recall that relaxing the $i$th variable so as to satisfy the $i$th equation is equivalent to minimizing $\phi(x)$ in the direction of the $i$th coordinate axis.

(Figure: level-curve ellipses with major and minor axes marked.)

2. Draw level curves for $\tfrac{1}{2}\|x - A^{-1}b\|_A^2$, where

$$A = \begin{bmatrix} 4 & -2 \\ -2 & 4 \end{bmatrix} \qquad \text{and} \qquad b = \begin{bmatrix} 3 \\ 0 \end{bmatrix}.$$

Model your solution after the unit-circle construction in Section 7.1.1. Label various key quantities and do a reasonably careful drawing.

3. Show that in the generic minimization method, if $\alpha_i$ is chosen to minimize $\phi(x_i + \alpha_i p_i)$, then $r_{i+1}$ is orthogonal to $p_i$. Interpret this geometrically.

### 7.2 Residual Norm Minimization

*Saad: Sections 5.3.3, 5.3.2, 5.2.1b*

For a general matrix, only the residual norm $\|b - Ax\|_2$ can serve as an objective function for minimization. This attains its minimum at the same value of $x$ as does

$$\psi(x) = \tfrac{1}{2} x^T A^T A x - x^T A^T b.$$

This is the objective function for minimizing the energy norm for the normal equations $A^T A x = A^T b$. A generic minimization method with an analytic line search takes the form:

    x_0 = initial guess
    r_0 = b - A x_0
    for i = 0, 1, 2, ... do
        choose a direction p_i
        alpha_i = ((A p_i)^T r_i) / ((A p_i)^T (A p_i))
        x_{i+1} = x_i + alpha_i p_i
        r_{i+1} = r_i - alpha_i A p_i

- 7.2.1 Residual Norm Steepest Descent
- 7.2.2 Minimal Residual

#### 7.2.1 Residual Norm Steepest Descent

At a given point $x$, the direction of steepest descent for the residual norm is $p = A^T r$, leading to an algorithm requiring a multiplication by $A$ and by $A^T$ at each step.

#### 7.2.2 Minimal Residual

It is often the case that the matrix $A$ is available only implicitly via a function call, and it is inconvenient to request a matrix–vector product with $A^T$. This is avoided in the minimal residual method by using $p_i = r_i$ for the search direction. The disadvantage is that convergence for an arbitrary nonsingular $A$ is no longer guaranteed. Convergence, however, can be proved for the case where $A + A^T$ is symmetric positive definite.

Review questions

1. The 2-norm of the residual for $Ax = b$ is equal to the energy norm for what system of linear equations?
2. Given a search direction $p_i$ and an approximation $x_i$ to the value $\hat{x}$ that minimizes $\|b - Ax\|_2$ with $A$ nonsingular, what is the best choice for $x_{i+1}$?
3. For the square of the residual for $Ax = b$, what is the direction of steepest descent?
4. How are search directions chosen in the minimal residual method?
5. What is the disadvantage of the residual norm steepest descent method compared to the minimal residual method?
6. For which nonsymmetric matrices $A$ can it be proved that the residual norm steepest descent method converges to the solution of $Ax = b$?
7. For which nonsymmetric matrices $A$ can it be proved that the minimal residual method converges to the solution of $Ax = b$?

### 7.3 General Petrov–Galerkin Methods

*Saad: Section 5*

Remarks: (i) The term "projection method" is unhelpful; a better term is "restriction". It is not used in the 1997 book by Anne Greenbaum nor in the 2003 book by van der Vorst. (ii) The statement bridging pages 130 and 131 (of Saad) does not seem correct except in a very contrived way.

Consider a method based on minimizing the square of the energy norm of the error, $(b - Ax)^T A^{-1} (b - Ax)$, over $x \in x_0 + \mathcal{K}$, where $A$ is symmetric positive definite and $\mathcal{K}$
is the span of one or several search directions. Let $\hat{x}$ be the minimizer, let $\hat{r} = b - A\hat{x}$, and consider another feasible value $x = \hat{x} + w$, $w \in \mathcal{K}$:

$$(b - Ax)^T A^{-1} (b - Ax) = (\hat{r} - Aw)^T A^{-1} (\hat{r} - Aw) = \hat{r}^T A^{-1} \hat{r} - 2 w^T \hat{r} + w^T A w.$$

Hence, as discussed previously, $w = 0$ minimizes the energy norm of the error if and only if $w^T \hat{r} = 0$ for all $w \in \mathcal{K}$. Therefore the minimization problem can be expressed:

find $x \in x_0 + \mathcal{K}$ such that $b - Ax \perp \mathcal{K}$.

This alternative formulation is sometimes called the *Galerkin conditions*. If $\mathcal{K}$ has a basis $U$, then $x = x_0 + Uy$, and $y$ is the solution of the system

$$U^T A U y = U^T r_0, \qquad r_0 = b - A x_0.$$

Similarly, minimizing the residual norm can be expressed:

find $x \in x_0 + \mathcal{K}$ such that $b - Ax \perp A\mathcal{K}$.

The general Petrov–Galerkin method takes the form:

find $x \in x_0 + \mathcal{K}$ such that $b - Ax \perp \mathcal{L}$,

where $\mathcal{L}$ is a subspace of the same dimension as $\mathcal{K}$. If $\mathcal{K}$ has a basis $U$ and $\mathcal{L}$ has a basis $V$, this yields the system $V^T A U y = V^T r_0$ to solve for $y$, where $r_0 = b - A x_0$.

Review questions

1. Let $A$ be symmetric positive definite, and let $\mathcal{K}$ be a linear subspace. Express the problem of minimizing the square of the energy norm $(b - Ax)^T A^{-1} (b - Ax)$, $x \in x_0 + \mathcal{K}$, as a Petrov–Galerkin (or Galerkin) method.
2. What is the general form of a Galerkin method for solving $Ax = b$?
3. Consider the Galerkin conditions (find $x \in x_0 + \mathcal{K}$ such that $b - Ax \perp \mathcal{K}$) for seeking an approximate solution to $Ax = b$. Express these as a linear system of equations, given that $\mathcal{K} = \mathcal{R}(U)$, where $U$ is of full rank.
4. Let $A$ be symmetric positive definite, and let $\mathcal{K}$ be a linear subspace. Express the problem of minimizing the square of the residual norm $(b - Ax)^T (b - Ax)$, $x \in x_0 + \mathcal{K}$, as a Petrov–Galerkin (or Galerkin) method.
5. What is the general form of a Petrov–Galerkin method for solving $Ax = b$?
6. Consider the Petrov–Galerkin conditions (find $x \in x_0 + \mathcal{K}$ such that $b - Ax \perp \mathcal{L}$) for seeking an approximate solution to $Ax = b$. Express these as a linear system of equations, given that $\mathcal{K} = \mathcal{R}(U)$ and $\mathcal{L} = \mathcal{R}(V)$, where $U$ and $V$ are of full and equal rank.
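The equivalence between cyclic coordinate descent and Gauss–Seidel described in Section 7.1.1 can be checked numerically. The sketch below (in plain Python; the 3×3 SPD test matrix is an illustrative choice, not from the notes) runs one pass of each and compares the results.

```python
def coord_descent_sweep(A, b, x):
    """One pass of cyclic coordinate descent on phi(x) = 1/2 x^T A x - x^T b:
    for each i, step alpha = (e_i^T r) / a_ii in direction e_i."""
    x = list(x)
    n = len(b)
    for i in range(n):
        r_i = b[i] - sum(A[i][j] * x[j] for j in range(n))  # i-th residual component
        x[i] += r_i / A[i][i]
    return x

def gauss_seidel_sweep(A, b, x):
    """One Gauss-Seidel sweep: relax each unknown to satisfy its own equation."""
    x = list(x)
    n = len(b)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # symmetric positive definite
b = [1.0, 2.0, 3.0]
x_cd = coord_descent_sweep(A, b, [0.0, 0.0, 0.0])
x_gs = gauss_seidel_sweep(A, b, [0.0, 0.0, 0.0])
```

Starting from the same iterate, `x_cd` and `x_gs` agree component by component: $n$ coordinate-descent steps are exactly one Gauss–Seidel sweep.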
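The steepest-descent iteration of Section 7.1.2 can be sketched in a few lines. This is a minimal illustration in plain Python, assuming a small SPD system chosen for the example (the 2×2 matrix and tolerance are not from the notes).

```python
def matvec(A, x):
    """Dense matrix-vector product for a list-of-rows matrix."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def dot(u, v):
    return sum(u_i * v_i for u_i, v_i in zip(u, v))

def steepest_descent(A, b, x0, tol=1e-12, maxit=10000):
    """Minimize phi(x) = 1/2 x^T A x - x^T b (A SPD) with p_i = r_i and
    exact line search alpha_i = (r_i^T r_i) / (r_i^T A r_i)."""
    x = list(x0)
    r = [bi - yi for bi, yi in zip(b, matvec(A, x))]  # r_0 = b - A x_0
    for _ in range(maxit):
        if dot(r, r) <= tol * tol:
            break
        Ar = matvec(A, r)
        alpha = dot(r, r) / dot(r, Ar)
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
        r = [ri - alpha * Ari for ri, Ari in zip(r, Ar)]  # cheap residual update
    return x

A = [[4.0, 1.0], [1.0, 3.0]]  # SPD example
b = [1.0, 2.0]
x = steepest_descent(A, b, [0.0, 0.0])
```

For this system the exact solution is $x = (1/11,\, 7/11)$, and the iteration reaches it quickly because $\kappa_2(A)$ is small.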
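The minimal residual method of Section 7.2.2 ($p_i = r_i$, with the residual-norm line search) needs products with $A$ only, never with $A^T$. A minimal sketch, assuming an example nonsymmetric matrix with $A + A^T$ SPD (the convergence condition quoted in the notes); the test system is an illustrative choice:

```python
def matvec(A, x):
    """Dense matrix-vector product for a list-of-rows matrix."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def dot(u, v):
    return sum(u_i * v_i for u_i, v_i in zip(u, v))

def minimal_residual(A, b, x0, tol=1e-12, maxit=10000):
    """Minimal residual iteration: p_i = r_i,
    alpha_i = ((A r_i)^T r_i) / ((A r_i)^T (A r_i)) minimizes ||r_i - alpha A r_i||_2."""
    x = list(x0)
    r = [bi - yi for bi, yi in zip(b, matvec(A, x))]
    for _ in range(maxit):
        if dot(r, r) <= tol * tol:
            break
        Ar = matvec(A, r)
        alpha = dot(Ar, r) / dot(Ar, Ar)
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
        r = [ri - alpha * Ari for ri, Ari in zip(r, Ar)]
    return x

A = [[3.0, 1.0], [-1.0, 2.0]]  # nonsymmetric, but A + A^T is SPD
b = [4.0, 1.0]
x = minimal_residual(A, b, [0.0, 0.0])
```

Here the exact solution is $x = (1, 1)$. Since $r^T A r = \tfrac12 r^T (A + A^T) r > 0$ for $r \neq 0$, each step strictly reduces the residual norm.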
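The Galerkin conditions of Section 7.3 reduce to the small system $U^T A U y = U^T r_0$. The sketch below, with hypothetical example data (a 3×3 SPD matrix and a two-dimensional $\mathcal{K}$ spanned by the first two coordinate vectors), forms and solves that reduced system by Cramer's rule and verifies $b - Ax \perp \mathcal{K}$.

```python
def matvec(A, x):
    """Dense matrix-vector product for a list-of-rows matrix."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def dot(u, v):
    return sum(u_i * v_i for u_i, v_i in zip(u, v))

def galerkin_2d(A, b, x0, u1, u2):
    """Find x in x0 + span{u1, u2} with b - Ax orthogonal to span{u1, u2},
    by solving the 2x2 reduced system U^T A U y = U^T r0."""
    r0 = [bi - yi for bi, yi in zip(b, matvec(A, x0))]
    Au1, Au2 = matvec(A, u1), matvec(A, u2)
    G = [[dot(u1, Au1), dot(u1, Au2)],
         [dot(u2, Au1), dot(u2, Au2)]]          # G = U^T A U
    g = [dot(u1, r0), dot(u2, r0)]              # g = U^T r0
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    y = [(g[0] * G[1][1] - g[1] * G[0][1]) / det,
         (g[1] * G[0][0] - g[0] * G[1][0]) / det]   # Cramer's rule on G y = g
    return [x0_i + y[0] * u1_i + y[1] * u2_i
            for x0_i, u1_i, u2_i in zip(x0, u1, u2)]

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # SPD
b = [1.0, 2.0, 3.0]
u1, u2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]  # basis of K
x = galerkin_2d(A, b, [0.0, 0.0, 0.0], u1, u2)
```

With this choice of $\mathcal{K}$, the first two components of the residual $b - Ax$ vanish exactly, which is the Galerkin condition $b - Ax \perp \mathcal{K}$.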
