Scientific Computation ECS 231
ECS231, Handout 6, April 21, 2009

The landscape of solvers for the linear system of equations $Ax = b$, where $A$ is an $n \times n$ matrix and $b$ is an $n$-vector:

                                        Direct             Iterative
   Nonsymmetric $A$                     $A = LU$           GMRES
   Symmetric positive definite $A$      Cholesky           CG

A general framework for iterative projection methods

The basic idea of projection techniques is to extract an approximate solution from subspaces of $\mathbb{R}^n$. Let $\mathcal{W}$ and $\mathcal{V}$ be two $m$-dimensional subspaces of $\mathbb{R}^n$ and let $x_0$ be a good initial guess of the solution. Then the projection technique is to

   Find $\tilde{x} \in x_0 + \mathcal{W}$ such that $b - A\tilde{x} \perp \mathcal{V}$.   (1)

Write $\tilde{x} = x_0 + z$, $z \in \mathcal{W}$, and define the initial residual $r_0 = b - Ax_0$. Notice that $b - A\tilde{x} = b - A(x_0 + z) = r_0 - Az$. Then the formulation (1) is equivalent to

   Find $z \in \mathcal{W}$ such that $r_0 - Az \perp \mathcal{V}$.   (1a)

This is a basic projection step in its most general form. Most standard techniques use a succession of such projections. Typically, a new projection step uses a new pair of subspaces $\mathcal{W}$ and $\mathcal{V}$ updated from the previous step, and an initial guess $x_0$ equal to the most recent approximation. This simple framework is common to many methods in numerical computing. If $\mathcal{W} = \mathcal{V}$, it is called an orthogonal projection method; otherwise it is called an oblique projection method. The orthogonality constraint in (1a) is known as the Petrov-Galerkin condition.

Matrix representation of iterative subspace projection methods

Let $V = [v_1, v_2, \ldots, v_m]$ be an $n \times m$ matrix whose columns form a basis of $\mathcal{V}$ and, similarly, $W = [w_1, w_2, \ldots, w_m]$ an $n \times m$ matrix whose columns form a basis of $\mathcal{W}$. Then any approximate solution in $x_0 + \mathcal{W}$ can be written as

   $\tilde{x} = x_0 + z = x_0 + Wy$, i.e., $z = Wy$,

and the orthogonality condition implies $V^T(r_0 - Az) = 0$, thus

   $V^T A W y = V^T r_0$, i.e., $y = (V^T A W)^{-1} V^T r_0$,

provided $V^T A W$ is invertible. Putting it all together, we have

   $\tilde{x} = x_0 + W (V^T A W)^{-1} V^T r_0$.

Now we have a prototype subspace projection method (a short NumPy sketch of a single projection step is given at the end of this section):

   0. Let $x_0$ be an initial approximation
   1. Iterate until convergence:
   2.    Select a pair of subspaces $\mathcal{V}$ and $\mathcal{W}$
   3.    Generate basis matrices $V$ and $W$ for $\mathcal{V}$ and $\mathcal{W}$
   4.    $r_0 \leftarrow b - Ax_0$
   5.    $y \leftarrow (V^T A W)^{-1} V^T r_0$
   6.    $x_0 \leftarrow x_0 + Wy$

Some important remarks:

1. In many practical algorithms to be discussed below, the matrix $V^T A W$ does not have to be formed explicitly. It is available as a by-product of Steps 2 and 3, e.g., the Arnoldi process.

2. The method is defined only when $V^T A W$ is nonsingular, which is not guaranteed to be true even when $A$ is nonsingular.

3. There are two important special cases where the nonsingularity of $V^T A W$ is guaranteed:
   (a) If $A$ is symmetric positive definite (SPD) and $\mathcal{W} = \mathcal{V}$, then $V^T A W = W^T A W$ is nonsingular.
   (b) If $A$ is nonsingular and $\mathcal{V} = A\mathcal{W}$, then $V^T A W = W^T A^T A W$ is nonsingular.

4. Optimality for orthogonal projection.

   Theorem 1. Assume that $A$ is SPD and that $\mathcal{V} = \mathcal{W}$. Then a vector $\tilde{x}$ is the result of (1) if and only if

      $\|\tilde{x} - x_*\|_A = \min_{x \in x_0 + \mathcal{W}} \|x - x_*\|_A$,

   where $\|x - x_*\|_A = \left((x - x_*)^T A (x - x_*)\right)^{1/2}$ and $x_*$ is the exact solution to $Ax = b$.

   Proof. Notice that $(A\cdot,\cdot)$ is an inner product on $\mathbb{R}^n$. Thus $\|x - x_*\|_A$ over all possible $x \in x_0 + \mathcal{W}$ is minimized at $\tilde{x}$ if and only if $x_* - \tilde{x} \perp_A \mathcal{W}$, i.e., $(A(x_* - \tilde{x}), w) = (b - A\tilde{x}, w) = 0$ for any $w \in \mathcal{W} = \mathcal{V}$. This is (1).

   Remark: The corresponding methods are the steepest descent method and the conjugate gradient (CG) method.

5. Optimality for oblique projection.

   Theorem 2. Let $A$ be an arbitrary square matrix and assume $\mathcal{V} = A\mathcal{W}$. Then a vector $\tilde{x}$ is the result of (1) if and only if

      $\|b - A\tilde{x}\|_2 = \min_{x \in x_0 + \mathcal{W}} \|b - Ax\|_2$.

   Proof. $\|b - Ax\|_2$ over all possible $x \in x_0 + \mathcal{W}$ is minimized at $\tilde{x}$ if and only if $b - A\tilde{x} \perp A\mathcal{W}$, i.e., $(b - A\tilde{x}, v) = 0$ for any $v \in A\mathcal{W} = \mathcal{V}$. This is (1).

   Remark: The corresponding methods are the minimal residual (MR) method and the generalized minimal residual (GMRES) method. (Both optimality properties are checked numerically in the second sketch at the end of this section.)
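The matrix form of the projection step can be illustrated with a short NumPy sketch. This is not part of the original handout: the function name projection_step, the random test matrix, and the choice $V = AW$ are illustrative assumptions. The point is only that the computed iterate satisfies the Petrov-Galerkin condition $V^T(b - A\tilde{x}) = 0$ up to roundoff.

    import numpy as np

    def projection_step(A, b, x0, V, W):
        # One prototype projection step: y = (V^T A W)^{-1} V^T r0,  x = x0 + W y.
        r0 = b - A @ x0
        y = np.linalg.solve(V.T @ A @ W, V.T @ r0)   # assumes V^T A W is nonsingular
        return x0 + W @ y

    rng = np.random.default_rng(0)
    n, m = 8, 3
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
    b = rng.standard_normal(n)
    x0 = np.zeros(n)
    W = rng.standard_normal((n, m))                  # basis matrix for the subspace W
    V = A @ W                                        # oblique choice V = A W (special case 3b)

    x_new = projection_step(A, b, x0, V, W)
    print(np.linalg.norm(V.T @ (b - A @ x_new)))     # roundoff level: b - A x_new is orthogonal to V

With the orthogonal choice $V = W$ and an SPD matrix $A$, the same step produces the Galerkin iterate used by CG-type methods.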
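The two optimality statements (Theorems 1 and 2) can also be checked numerically. The sketch below is again an illustration, not part of the handout: it compares the orthogonal projection ($\mathcal{V} = \mathcal{W}$, SPD $A$) and the oblique projection ($\mathcal{V} = A\mathcal{W}$) against random competitors drawn from the same affine space $x_0 + \mathcal{W}$; no competitor attains a smaller $A$-norm error than the former or a smaller 2-norm residual than the latter.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 10, 4
    B = rng.standard_normal((n, n))
    A = B @ B.T + n * np.eye(n)          # SPD test matrix (needed for Theorem 1)
    b = rng.standard_normal(n)
    x_star = np.linalg.solve(A, b)       # exact solution, used only for checking
    x0 = np.zeros(n)
    W = rng.standard_normal((n, m))
    r0 = b - A @ x0

    def galerkin(V):
        # Projection iterate x0 + W (V^T A W)^{-1} V^T r0
        y = np.linalg.solve(V.T @ A @ W, V.T @ r0)
        return x0 + W @ y

    x_orth = galerkin(W)        # orthogonal projection: V = W    (Theorem 1)
    x_obl = galerkin(A @ W)     # oblique projection:    V = A W  (Theorem 2)

    def err_A(x):               # A-norm of the error ||x - x_*||_A
        e = x - x_star
        return np.sqrt(e @ A @ e)

    for _ in range(1000):       # random competitors x in x0 + W
        x = x0 + W @ rng.standard_normal(m)
        assert err_A(x) >= err_A(x_orth) - 1e-10
        assert np.linalg.norm(b - A @ x) >= np.linalg.norm(b - A @ x_obl) - 1e-10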
Steepest descent (SD) method

A one-dimensional projection process is defined when $\mathcal{W} = \mathrm{span}\{w\}$ and $\mathcal{V} = \mathrm{span}\{v\}$, where $w$ and $v$ are two vectors. In this case, the new approximation takes the form $x_1 = x_0 + z = x_0 + \alpha w$, and the condition (1a) implies

   $v^T(r_0 - Az) = v^T(r_0 - \alpha A w) = 0$, and thus $\alpha = \dfrac{v^T r_0}{v^T A w}$.

When $A$ is SPD, at each step we take $v = w = r$, the residual vector. This yields the following algorithm (a NumPy version is given at the end of this section):

   STEEPEST DESCENT (SD) ALGORITHM
   1. Pick an initial guess $x_0$
   2. Until convergence, for $k = 0, 1, 2, \ldots$ do
   3.    $r_k = b - Ax_k$
   4.    $\alpha_k = \dfrac{r_k^T r_k}{r_k^T A r_k}$
   5.    $x_{k+1} = x_k + \alpha_k r_k$
   6. Enddo

We can view each step of the above iteration as minimizing

   $f(x) \stackrel{\mathrm{def}}{=} \tfrac{1}{2}\|x - x_*\|_A^2 = \tfrac{1}{2}(x - x_*)^T A (x - x_*)$

over all vectors of the form $x_k - \alpha \nabla f(x_k)$, where $\nabla f(x)$ is the gradient of $f$ at $x$. Recall that the negative of the gradient direction is locally the direction that yields the fastest rate of decrease for $f$.
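A direct NumPy transcription of the SD algorithm above follows. This is a minimal sketch under illustrative assumptions: the relative-residual stopping test, the function name steepest_descent, and the SPD test matrix are not from the handout.

    import numpy as np

    def steepest_descent(A, b, x0, tol=1e-10, maxit=10000):
        # SD iteration for SPD A: r_k = b - A x_k, alpha_k = r_k^T r_k / r_k^T A r_k.
        x = x0.copy()
        for k in range(maxit):
            r = b - A @ x
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):   # relative residual test
                break
            alpha = (r @ r) / (r @ (A @ r))
            x = x + alpha * r
        return x

    rng = np.random.default_rng(2)
    n = 20
    B = rng.standard_normal((n, n))
    A = B @ B.T + n * np.eye(n)          # SPD test matrix
    b = rng.standard_normal(n)
    x = steepest_descent(A, b, np.zeros(n))
    print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))   # small relative residual

On ill-conditioned SPD systems SD converges slowly; the conjugate gradient (CG) method mentioned above is the standard improvement.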