# Special Topics CS 4803



These class notes for CS 4803 (Special Topics) at Georgia Institute of Technology - Main Campus, taught by Haesun Park in Fall, were uploaded by Alayna Veum on Monday, November 2, 2015. Since upload, the document has received 7 views. For similar materials see /class/234052/cs-4803-georgia-institute-of-technology-main-campus in Computer Science at Georgia Institute of Technology - Main Campus.


### Scientific Computing: An Introductory Survey — Chapter 1: Scientific Computing

Prof. Michael T. Heath, Department of Computer Science, University of Illinois at Urbana-Champaign. Copyright 2002. Reproduction permitted for noncommercial, educational use only.

#### Outline

- Introduction
- Approximations
- Computer Arithmetic
- Mathematical Software

### What is scientific computing?

- Design and analysis of algorithms for numerically solving mathematical problems in science and engineering
- Traditionally called numerical analysis
- Distinguishing features of scientific computing: it deals with continuous quantities, and it considers the effects of approximations
- Why scientific computing? Simulation of natural phenomena; virtual prototyping of engineering designs

### Well-posed problems

- A problem is well-posed if its solution exists, is unique, and depends continuously on the problem data; otherwise the problem is ill-posed
- Even if a problem is well-posed, its solution may still be sensitive to the input data
- A computational algorithm should not make this sensitivity worse

### General strategy

- Replace a difficult problem by an easier one having the same or a closely related solution:
  - infinite → finite
  - differential → algebraic
  - nonlinear → linear
  - complicated → simple
- The solution obtained may only approximate that of the original problem

### Sources of approximation

- Before computation: modeling, empirical measurements, previous computations
- During computation: truncation or discretization, rounding
- The accuracy of the final result reflects all of these
- Uncertainty in the input may be amplified by the problem; perturbations during the computation may be amplified by the algorithm

### Example: approximations

Computing the surface area of the Earth using the formula $A = 4\pi r^2$ involves several approximations:

- The Earth is modeled as a sphere, idealizing its true shape
- The value for the radius is based on empirical measurements and previous computations
- The value for $\pi$ requires truncating an infinite process
- Values for the input data and the results of arithmetic operations are rounded in the computer
### Absolute and relative error

- Absolute error = approximate value − true value
- Relative error = absolute error / true value
- Equivalently, approximate value = (true value) × (1 + relative error)
- The true value is usually unknown, so we estimate or bound the error rather than compute it exactly
- Relative error is often taken relative to the approximate value, rather than the (unknown) true value

### Data error and computational error

- Typical problem: compute the value of a function $f : \mathbb{R} \to \mathbb{R}$ for a given argument
  - $x$ = true value of the input; $f(x)$ = desired result
  - $\hat{x}$ = approximate (inexact) input; $\hat{f}$ = approximate function actually computed
- Total error:
  $\hat{f}(\hat{x}) - f(x) = \big[\hat{f}(\hat{x}) - f(\hat{x})\big] + \big[f(\hat{x}) - f(x)\big]$
  = computational error + propagated data error

### Truncation error and rounding error

- Truncation error: difference between the true result (for the actual input) and the result produced by a given algorithm using exact arithmetic
  - Due to approximations such as truncating an infinite series or terminating an iterative sequence before convergence
- Rounding error: difference between the result produced by a given algorithm using exact arithmetic and the result produced by the same algorithm using limited-precision arithmetic
  - Due to inexact representation of real numbers and the arithmetic operations upon them
- Computational error is the sum of truncation error and rounding error, but one of these usually dominates

### Example: finite difference approximation

- The error in the finite difference approximation
  $f'(x) \approx \dfrac{f(x+h) - f(x)}{h}$
  exhibits a tradeoff between rounding error and truncation error
- Truncation error is bounded by $Mh/2$, where $M$ bounds $|f''(t)|$ for $t$ near $x$
- Rounding error is bounded by $2\epsilon/h$, where the error in the function values is bounded by $\epsilon$
- Total error is minimized when $h \approx 2\sqrt{\epsilon/M}$
- Error increases for smaller $h$ because of rounding error, and increases for larger $h$ because of truncation error

### Forward and backward error

- Suppose we want to compute $y = f(x)$, where $f : \mathbb{R} \to \mathbb{R}$, but obtain an approximate value $\hat{y}$
- Forward error: $\Delta y = \hat{y} - y$
- Backward error: $\Delta x = \hat{x} - x$, where $f(\hat{x}) = \hat{y}$
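The rounding/truncation tradeoff for the finite difference approximation described above can be sketched numerically. This is an illustrative experiment, not from the slides: the test function $\sin$, the point $x = 1$, and the particular step sizes are arbitrary choices.

```python
import math

def fd_error(f, df, x, h):
    """Error in the forward-difference approximation of f'(x) with step h."""
    approx = (f(x + h) - f(x)) / h
    return abs(approx - df(x))

eps = 2.0**-53                       # unit roundoff for IEEE double precision
h_opt = 2 * math.sqrt(eps)           # near-optimal step (M is about 1 for sin)
err_opt = fd_error(math.sin, math.cos, 1.0, h_opt)
err_small = fd_error(math.sin, math.cos, 1.0, 1e-13)  # rounding dominates
err_large = fd_error(math.sin, math.cos, 1.0, 1e-2)   # truncation dominates
```

Shrinking the step far below $2\sqrt{\epsilon/M}$ or growing it far above both increase the total error, matching the bound in the slide.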
### Example: forward and backward error

- As an approximation to $y = \sqrt{2}$, the value $\hat{y} = 1.4$ has absolute forward error
  $|\Delta y| = |\hat{y} - y| = |1.4 - 1.41421\ldots| \approx 0.0142$,
  or relative forward error of about 1 percent
- Since $\sqrt{1.96} = 1.4$, the absolute backward error is
  $|\Delta x| = |\hat{x} - x| = |1.96 - 2| = 0.04$,
  or relative backward error of 2 percent

### Backward error analysis

- Idea: the approximate solution is the exact solution to a modified problem
- How much must the original problem change to give the result actually obtained?
- How much data error in the input would explain all the error in the computed result?
- An approximate solution is good if it is the exact solution to a nearby problem
- Backward error is often easier to estimate than forward error

### Example: backward error analysis

- Approximating the cosine function $f(x) = \cos x$ by truncating its Taylor series after two terms gives
  $\hat{y} = \hat{f}(x) = 1 - x^2/2$
- The forward error is given by
  $\Delta y = \hat{y} - y = \hat{f}(x) - f(x) = 1 - x^2/2 - \cos x$
- To determine the backward error, we need the value $\hat{x}$ such that $f(\hat{x}) = \hat{f}(x)$
- For the cosine function, $\hat{x} = \arccos(\hat{f}(x)) = \arccos(\hat{y})$

For $x = 1$:

- $y = f(1) = \cos 1 \approx 0.5403$
- $\hat{y} = \hat{f}(1) = 1 - 1^2/2 = 0.5$
- $\hat{x} = \arccos(\hat{y}) = \arccos(0.5) \approx 1.0472$
- Forward error: $\Delta y = \hat{y} - y \approx 0.5 - 0.5403 = -0.0403$
- Backward error: $\Delta x = \hat{x} - x \approx 1.0472 - 1 = 0.0472$

### Sensitivity and conditioning

- A problem is insensitive, or well-conditioned, if a given relative change in the input data causes a similar relative change in the solution
- A problem is sensitive, or ill-conditioned, if the relative change in the solution can be much larger than that in the input data
- Condition number:
  $\mathrm{cond} = \dfrac{|\text{relative change in solution}|}{|\text{relative change in input data}|} = \dfrac{|\Delta y / y|}{|\Delta x / x|}$
- The condition number is an amplification factor relating relative forward error to relative backward error:
  relative forward error = cond × relative backward error
- The condition number is usually not known exactly and may vary with the input, so a rough estimate or upper bound is used for cond, yielding
  relative forward error ⪅ cond × relative backward error

### Condition number

- Evaluating the function $f$ for approximate input $\hat{x} = x + \Delta x$ instead of true input $x$ gives
  - Absolute forward error: $f(x + \Delta x) - f(x) \approx f'(x)\,\Delta x$
  - Relative forward error: $\dfrac{f(x + \Delta x) - f(x)}{f(x)} \approx \dfrac{f'(x)\,\Delta x}{f(x)}$
  - Condition number: $\mathrm{cond} \approx \left|\dfrac{x\,f'(x)}{f(x)}\right|$
- The relative error in the function value can be much larger or smaller than that in the input, depending on the particular $f$ and $x$
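The cosine backward-error example above can be reproduced directly; a minimal sketch:

```python
import math

# Approximate cos(x) by the truncated Taylor series 1 - x^2/2, then
# measure forward error (in the output) and backward error (in the input).
x = 1.0
y = math.cos(x)              # true result, about 0.5403
y_hat = 1 - x**2 / 2         # approximate result, exactly 0.5
x_hat = math.acos(y_hat)     # input for which cos is exact, about 1.0472

forward_error = y_hat - y    # about -0.0403
backward_error = x_hat - x   # about  0.0472
```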
### Example: sensitivity

- The tangent function is sensitive for arguments near $\pi/2$:
  - $\tan(1.57079) \approx 1.58058 \times 10^5$
  - $\tan(1.57078) \approx 6.12490 \times 10^4$
- The relative change in the output is a quarter million times greater than the relative change in the input
- For $x = 1.57079$, $\mathrm{cond} \approx 2.48275 \times 10^5$

### Stability

- An algorithm is stable if the result it produces is relatively insensitive to perturbations during the computation
- Stability of algorithms is analogous to conditioning of problems
- From the point of view of backward error analysis, an algorithm is stable if the result it produces is the exact solution to a nearby problem
- For a stable algorithm, the effect of computational error is no worse than the effect of a small data error in the input

### Accuracy

- Accuracy: closeness of the computed solution to the true solution of the problem
- Stability alone does not guarantee accurate results
- Accuracy depends on the conditioning of the problem as well as the stability of the algorithm
- Inaccuracy can result from applying a stable algorithm to an ill-conditioned problem, or an unstable algorithm to a well-conditioned problem
- Applying a stable algorithm to a well-conditioned problem yields an accurate solution

### Floating-point numbers

- A floating-point number system is characterized by four integers:
  - $\beta$: base or radix
  - $p$: precision
  - $[L, U]$: exponent range
- A number $x$ is represented as
  $x = \pm\left(d_0 + \dfrac{d_1}{\beta} + \dfrac{d_2}{\beta^2} + \cdots + \dfrac{d_{p-1}}{\beta^{p-1}}\right)\beta^E,$
  where $0 \le d_i \le \beta - 1$ for $i = 0, \ldots, p-1$, and $L \le E \le U$

### Floating-point numbers, continued

- Portions of a floating-point number are designated as follows:
  - exponent: $E$
  - mantissa: $d_0 d_1 \cdots d_{p-1}$
  - fraction: $d_1 d_2 \cdots d_{p-1}$
- The sign, exponent, and mantissa are stored in separate fixed-width fields of each floating-point word

### Typical floating-point systems

Parameters for typical floating-point systems:

| System | $\beta$ | $p$ | $L$ | $U$ |
|---|---|---|---|---|
| IEEE SP | 2 | 24 | −126 | 127 |
| IEEE DP | 2 | 53 | −1022 | 1023 |
| Cray | 2 | 48 | −16383 | 16384 |
| HP calculator | 10 | 12 | −499 | 499 |
| IBM mainframe | 16 | 6 | −64 | 63 |

- Most modern computers use binary ($\beta = 2$) arithmetic
- IEEE floating-point systems are now almost universal in digital computers

### Normalization

- A floating-point system is normalized if the leading digit $d_0$ is always nonzero, unless the number represented is zero
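The mantissa/exponent decomposition above can be inspected for IEEE doubles ($\beta = 2$, $p = 53$) with the standard library. Note that `math.frexp` uses the convention $m \in [0.5, 1)$, so Heath's normalized mantissa ($1 \le m < \beta$) is twice the returned value; the sample value 6.5 is an arbitrary choice.

```python
import math

x = 6.5
m, e = math.frexp(x)       # m = 0.8125, e = 3, since 6.5 = 0.8125 * 2**3
assert x == m * 2**e

mantissa = 2 * m           # normalized mantissa in [1, 2): 1.625
exponent = e - 1           # matching exponent: 2, since 6.5 = 1.625 * 2**2
```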
- In a normalized system, the mantissa $m$ of a nonzero floating-point number always satisfies $1 \le m < \beta$
- Reasons for normalization:
  - the representation of each number is unique
  - no digits are wasted on leading zeros
  - the leading bit need not be stored (in a binary system)

### Properties of floating-point systems

- A floating-point number system is finite and discrete
- The total number of normalized floating-point numbers is
  $2(\beta - 1)\beta^{p-1}(U - L + 1) + 1$
- Smallest positive normalized number: underflow level $\mathrm{UFL} = \beta^L$
- Largest floating-point number: overflow level $\mathrm{OFL} = \beta^{U+1}(1 - \beta^{-p})$
- Floating-point numbers are equally spaced only between successive powers of $\beta$
- Not all real numbers are exactly representable; those that are are called machine numbers

### Example: floating-point system

- Tick marks on the slide indicate all 25 numbers in the floating-point system having $\beta = 2$, $p = 3$, $L = -1$, and $U = 1$
- $\mathrm{OFL} = (1.11)_2 \times 2^1 = (3.5)_{10}$
- $\mathrm{UFL} = (1.00)_2 \times 2^{-1} = (0.5)_{10}$
- At sufficiently high magnification, all normalized floating-point systems look grainy and unequally spaced like this

### Rounding rules

- If a real number $x$ is not exactly representable, then it is approximated by a nearby floating-point number $\mathrm{fl}(x)$
- This process is called rounding, and the error introduced is called rounding error
- Two commonly used rounding rules:
  - chop: truncate the base-$\beta$ expansion of $x$ after the $(p-1)$st digit; also called round toward zero
  - round to nearest: $\mathrm{fl}(x)$ is the nearest floating-point number to $x$, using the floating-point number whose last stored digit is even in case of a tie; also called round to even
- Round to nearest is the most accurate rule, and is the default rounding rule in IEEE systems

### Machine precision

- The accuracy of a floating-point system is characterized by the unit roundoff, machine precision, or machine epsilon, denoted by $\epsilon_{\mathrm{mach}}$
- With rounding by chopping, $\epsilon_{\mathrm{mach}} = \beta^{1-p}$
- With rounding to nearest, $\epsilon_{\mathrm{mach}} = \frac{1}{2}\beta^{1-p}$
- An alternative definition is the smallest number $\epsilon$ such that $\mathrm{fl}(1 + \epsilon) > 1$
- The maximum relative error in representing a real number $x$ within the range of a floating-point system is given by
  $\left|\dfrac{\mathrm{fl}(x) - x}{x}\right| \le \epsilon_{\mathrm{mach}}$
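The alternative definition of machine precision above can be probed with a halving loop, a common textbook experiment. One caveat, labeled here as an assumption about IEEE round-to-even behavior: the loop stops at $2^{-52}$, the spacing of doubles just above 1, whereas Heath's $\epsilon_{\mathrm{mach}}$ with round to nearest is half of that, $2^{-53}$.

```python
# Find the smallest power of two eps such that fl(1 + eps) > 1 still
# holds for eps but fails for eps/2.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2

# For IEEE double precision the loop stops at eps = 2**-52, the gap
# between 1.0 and the next representable double.
```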
### Machine precision, continued

- For the toy system illustrated earlier:
  - $\epsilon_{\mathrm{mach}} = 0.25$ with rounding by chopping
  - $\epsilon_{\mathrm{mach}} = 0.125$ with rounding to nearest
- For IEEE floating-point systems:
  - $\epsilon_{\mathrm{mach}} = 2^{-24} \approx 10^{-7}$ in single precision
  - $\epsilon_{\mathrm{mach}} = 2^{-53} \approx 10^{-16}$ in double precision
- IEEE single and double precision systems have about 7 and 16 decimal digits of precision, respectively
- Though both are small, the unit roundoff $\epsilon_{\mathrm{mach}}$ should not be confused with the underflow level UFL
- The unit roundoff $\epsilon_{\mathrm{mach}}$ is determined by the number of digits in the mantissa of the floating-point system, whereas the underflow level UFL is determined by the number of digits in the exponent field
- In all practical floating-point systems,
  $0 < \mathrm{UFL} < \epsilon_{\mathrm{mach}} < \mathrm{OFL}$

### Subnormals and gradual underflow

- Normalization causes a gap around zero in a floating-point system
- If leading digits are allowed to be zero, but only when the exponent is at its minimum value, then the gap is "filled in" by additional subnormal, or denormalized, floating-point numbers
- Subnormals extend the range of magnitudes representable, but have less precision than normalized numbers, and the unit roundoff is no smaller
- The augmented system exhibits gradual underflow

### Exceptional values

- The IEEE floating-point standard provides special values to indicate two exceptional situations:
  - Inf, which stands for infinity, results from dividing a finite number by zero, such as 1/0
  - NaN, which stands for "not a number," results from undefined or indeterminate operations such as 0/0, 0 × Inf, or Inf/Inf
- Inf and NaN are implemented in IEEE arithmetic through special reserved values of the exponent field

### Floating-point arithmetic

- Addition or subtraction: shifting of the mantissa to make the exponents match may cause loss of some digits of the smaller number, possibly all of them
- Multiplication: the product of two $p$-digit mantissas contains up to $2p$ digits, so the result may not be representable
- Division: the quotient of two $p$-digit mantissas may contain more than $p$ digits, such as the nonterminating binary expansion of $1/10$
- The result of a floating-point arithmetic operation may differ from the result of the corresponding real arithmetic operation on the same operands
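The exceptional values above can be produced from ordinary float operations. One Python-specific caveat: literal division by zero (`1.0 / 0.0`) raises an exception rather than returning Inf, so the sketch below reaches Inf through overflow instead.

```python
import math

inf = 1e308 * 10          # overflow past OFL (about 1.8e308) yields inf
nan = inf - inf           # inf - inf is indeterminate, yields nan

# NaN is the only float value that is not equal to itself.
nan_is_unequal_to_itself = (nan != nan)
```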
### Example: floating-point arithmetic

- Assume $\beta = 10$, $p = 6$
- Let $x = 1.92403 \times 10^2$ and $y = 6.35782 \times 10^{-1}$
- Floating-point addition gives $x + y = 1.93039 \times 10^2$, assuming rounding to nearest
- The last two digits of $y$ do not affect the result, and with an even smaller exponent, $y$ could have had no effect on the result at all
- Floating-point multiplication gives $x * y = 1.22326 \times 10^2$, which discards half of the digits of the true product

### Overflow and underflow

- A real result may also fail to be representable because its exponent is beyond the available range
- Overflow is usually more serious than underflow, because there is no good approximation to arbitrarily large magnitudes in a floating-point system, whereas zero is often a reasonable approximation for arbitrarily small magnitudes
- On many computer systems overflow is fatal, but an underflow may be silently set to zero

### Example: summing a series

- The infinite series
  $\sum_{n=1}^{\infty} \dfrac{1}{n}$
  has a finite sum in floating-point arithmetic, even though the real series is divergent
- Possible explanations:
  - the partial sum eventually overflows
  - $1/n$ eventually underflows
  - the partial sum ceases to change once $1/n$ becomes negligible relative to the partial sum:
    $\dfrac{1}{n} < \epsilon_{\mathrm{mach}} \sum_{k=1}^{n-1} \dfrac{1}{k}$

### Floating-point arithmetic, continued

- Ideally, $x\ \mathrm{flop}\ y = \mathrm{fl}(x\ \mathrm{op}\ y)$, i.e., floating-point arithmetic operations produce correctly rounded results
- Computers satisfying the IEEE floating-point standard achieve this ideal as long as $x\ \mathrm{op}\ y$ is within the range of the floating-point system
- But some familiar laws of real arithmetic are not necessarily valid in a floating-point system
- Floating-point addition and multiplication are commutative, but not associative
- Example: if $\epsilon$ is a positive floating-point number slightly smaller than $\epsilon_{\mathrm{mach}}$, then $(1 + \epsilon) + \epsilon = 1$, but $1 + (\epsilon + \epsilon) > 1$

### Cancellation

- Subtraction between two $p$-digit numbers having the same sign and similar magnitudes yields a result with fewer than $p$ digits, so it is usually exactly representable
- The reason is that the leading digits of the two numbers cancel, i.e., their difference is zero
- For example,
  $1.92403 \times 10^2 - 1.92275 \times 10^2 = 1.28000 \times 10^{-1},$
  which is correct and exactly representable, but has only three significant digits
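The associativity failure above can be demonstrated in IEEE double precision. The particular value of $\epsilon$ is an illustrative choice: $0.75 \times 2^{-53}$ is slightly smaller than the unit roundoff $2^{-53}$, so adding it to 1.0 once is lost to rounding, while its double survives.

```python
eps = 0.75 * 2.0**-53

left = (1.0 + eps) + eps   # each addition rounds back to 1.0
right = 1.0 + (eps + eps)  # eps + eps = 1.5 * 2**-53 survives rounding
```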
### Cancellation, continued

- Despite the exactness of the result, cancellation often implies a serious loss of information
- The operands are often uncertain due to rounding or other previous errors, so the relative uncertainty in their difference may be large
- Example: if $\epsilon$ is a positive floating-point number slightly smaller than $\epsilon_{\mathrm{mach}}$, then
  $(1 + \epsilon) - (1 - \epsilon) = 1 - 1 = 0$
  in floating-point arithmetic, which is correct for the actual operands of the final subtraction, but the true result of the overall computation, $2\epsilon$, has been completely lost
- Subtraction itself is not at fault: it merely signals the loss of information that had already occurred

### Cancellation, continued

- The digits lost to cancellation are the most significant, leading digits, whereas the digits lost in rounding are the least significant, trailing digits
- Because of this effect, it is generally a bad idea to compute any small quantity as a difference of large quantities, since rounding error is likely to dominate the result
- For example, summing an alternating series, such as
  $e^x = 1 + x + \dfrac{x^2}{2!} + \dfrac{x^3}{3!} + \cdots$
  for $x < 0$, may give disastrous results due to catastrophic cancellation

### Example: cancellation

The total energy of the helium atom is the sum of its kinetic and potential energies, which are computed separately and have opposite signs, so they suffer cancellation (the Total column here is simply the sum of the two):

| Year | Kinetic | Potential | Total |
|---|---|---|---|
| 1971 | 13.0 | −14.0 | −1.0 |
| 1977 | 12.76 | −14.02 | −1.26 |
| 1980 | 12.22 | −14.35 | −2.13 |
| 1985 | 12.28 | −14.65 | −2.37 |
| 1988 | 12.40 | −14.84 | −2.44 |

Although the computed values for the kinetic and potential energies changed by only 6% or less, the resulting estimate for the total energy changed by 144%.

### Example: quadratic formula

- The two solutions of the quadratic equation $ax^2 + bx + c = 0$ are given by
  $x = \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
- Naive use of this formula can suffer overflow, underflow, or severe cancellation
- Rescaling the coefficients can help avoid overflow and harmful underflow
- Cancellation between $-b$ and the square root can be avoided by computing one of the roots using the alternative formula
  $x = \dfrac{2c}{-b \mp \sqrt{b^2 - 4ac}}$
- Cancellation inside the square root cannot be easily avoided without using higher precision
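The cancellation-avoiding root computation above can be sketched as follows. This is an illustrative implementation under a simplifying assumption stated in the docstring (real, distinct roots); the alternative formula is applied through the identity that the product of the roots is $c/a$.

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0, avoiding cancellation between -b
    and the square root. Sketch: assumes real, distinct roots (b^2 > 4ac)
    and a != 0."""
    d = math.sqrt(b * b - 4 * a * c)
    # Choose the sign that adds magnitudes instead of cancelling them.
    if b >= 0:
        x1 = (-b - d) / (2 * a)
    else:
        x1 = (-b + d) / (2 * a)
    x2 = c / (a * x1)  # second root from: product of roots = c/a
    return x1, x2
```

For $x^2 - 10^8 x + 1 = 0$, the naive formula computes the tiny root as a difference of two nearly equal large numbers and loses most of its digits; the version above recovers it accurately.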
### Example: standard deviation

- The mean and standard deviation of a sequence $x_i$, $i = 1, \ldots, n$, are given by
  $\bar{x} = \dfrac{1}{n}\sum_{i=1}^{n} x_i$
  and
  $\sigma = \left[\dfrac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2\right]^{1/2}$
- The mathematically equivalent formula
  $\sigma = \left[\dfrac{1}{n-1}\left(\sum_{i=1}^{n} x_i^2 - n\bar{x}^2\right)\right]^{1/2}$
  avoids making two passes through the data
- Unfortunately, the single cancellation at the end of the one-pass formula is more damaging numerically than all of the cancellations in the two-pass formula combined

### Mathematical software

- High-quality mathematical software is available for solving most commonly occurring problems in scientific computing
- The use of sophisticated, professionally written software has many advantages
- We will seek to understand the basic ideas of the methods on which such software is based, so that we can use the software intelligently
- We will gain hands-on experience in using such software to solve a wide variety of computational problems

### Desirable qualities of mathematical software

- Reliability
- Robustness
- Accuracy
- Efficiency
- Maintainability
- Portability
- Usability
- Applicability

### Sources of mathematical software

- FMM: from the book by Forsythe, Malcolm, and Moler
- GSL: GNU Scientific Library
- HSL: Harwell Subroutine Library
- IMSL: International Mathematical & Statistical Libraries
- KMN: from the book by Kahaner, Moler, and Nash
- NAG: Numerical Algorithms Group
- Netlib: free software available via the Internet
- NR: from the book Numerical Recipes
- NUMAL: from Mathematisch Centrum, Amsterdam
- SLATEC: from U.S. government labs
- SOL: Systems Optimization Laboratory, Stanford University
- TOMS: ACM Transactions on Mathematical Software

### Scientific Computing: An Introductory Survey — Chapter 2: Systems of Linear Equations

Prof. Michael T. Heath, Department of Computer Science, University of Illinois at Urbana-Champaign. Copyright 2002. Reproduction permitted for noncommercial, educational use only.

#### Outline

- Existence, Uniqueness, and Conditioning
- Solving Linear Systems
- Special Types of Linear Systems
- Software for Linear Systems

### Systems of linear equations

- Given an $m \times n$ matrix $A$ and an $m$-vector $b$, find the unknown $n$-vector $x$ satisfying $Ax = b$
- The system of equations asks: "Can $b$ be expressed as a linear combination of the columns of $A$?"
- If so, the coefficients of the linear combination are given by the components of the solution vector $x$
- A solution may or may not exist, and may or may not be unique
- For now, we consider only the square case, $m = n$
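The one-pass versus two-pass contrast above can be demonstrated numerically. The data set (a large mean with a tiny spread, true standard deviation 1.0) is an illustrative choice designed to trigger the cancellation, not from the slides.

```python
import math

def std_two_pass(xs):
    """Two-pass formula: compute the mean first, then the deviations."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

def std_one_pass(xs):
    """One-pass formula: accumulates sum and sum of squares, then forms
    sum(x^2) - n*mean^2, a single catastrophic cancellation at the end."""
    n = len(xs)
    s = sq = 0.0
    for x in xs:
        s += x
        sq += x * x
    mean = s / n
    return math.sqrt(max(sq - n * mean * mean, 0.0) / (n - 1))

data = [1e8 + 1.0, 1e8 + 2.0, 1e8 + 3.0]   # true standard deviation is 1.0
```

The two-pass result is accurate; the one-pass result is ruined because `sq` and `n * mean * mean` agree in nearly all of their digits.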
### Singularity and nonsingularity

An $n \times n$ matrix $A$ is nonsingular if it has any of the following equivalent properties:

1. The inverse of $A$, denoted by $A^{-1}$, exists
2. $\det(A) \ne 0$
3. $\mathrm{rank}(A) = n$
4. For any vector $z \ne 0$, $Az \ne 0$

### Existence and uniqueness

- Existence and uniqueness of a solution to $Ax = b$ depend on whether $A$ is singular or nonsingular
- They can also depend on $b$, but only in the singular case
- If $b \in \mathrm{span}(A)$, the system is consistent

| $A$ | $b$ | # solutions |
|---|---|---|
| nonsingular | arbitrary | one (unique) |
| singular | $b \in \mathrm{span}(A)$ | infinitely many |
| singular | $b \notin \mathrm{span}(A)$ | none |

### Geometric interpretation

- In two dimensions, each equation determines a straight line in the plane
- The solution is the intersection point of the two lines
- If the two straight lines are not parallel (nonsingular), then the intersection point is unique
- If the two straight lines are parallel (singular), then the lines either do not intersect (no solution) or else coincide (any point along the line is a solution)
- In higher dimensions, each equation determines a hyperplane; if the matrix is nonsingular, the intersection of the hyperplanes is the unique solution

### Example: nonsingularity

- The $2 \times 2$ system
  $Ax = \begin{bmatrix} 2 & 3 \\ 5 & 4 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = b$
  is nonsingular regardless of the value of $b$
- For example, if $b = [8\ \ 13]^T$, then $x = [1\ \ 2]^T$ is the unique solution

### Example: singularity

- The $2 \times 2$ system
  $Ax = \begin{bmatrix} 2 & 3 \\ 4 & 6 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = b$
  is singular regardless of the value of $b$
- With $b = [4\ \ 7]^T$, there is no solution
- With $b = [4\ \ 8]^T$, $x = [\gamma\ \ (4 - 2\gamma)/3]^T$ is a solution for any real number $\gamma$, so there are infinitely many solutions

### Vector norms

- Magnitude, modulus, or absolute value for scalars generalizes to norm for vectors
- We will use only $p$-norms, defined by
  $\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}$
  for integer $p > 0$ and $n$-vector $x$
- Important special cases:
  - 1-norm: $\|x\|_1 = \sum_{i=1}^{n} |x_i|$
  - 2-norm: $\|x\|_2 = \left(\sum_{i=1}^{n} x_i^2\right)^{1/2}$
  - $\infty$-norm: $\|x\|_\infty = \max_i |x_i|$

### Example: vector norms

- The drawing on the slide shows the unit sphere in two dimensions for each norm
- The norms have the following values for the vector shown, $x = (-1.6,\ 1.2)$:
  $\|x\|_1 = 2.8$, $\quad \|x\|_2 = 2.0$, $\quad \|x\|_\infty = 1.6$
- In general, for any vector $x$ in $\mathbb{R}^n$, $\|x\|_1 \ge \|x\|_2 \ge \|x\|_\infty$
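The three norms of the example vector can be checked directly from their definitions; a minimal sketch:

```python
x = [-1.6, 1.2]

norm_1 = sum(abs(v) for v in x)          # sum of absolute values: 2.8
norm_2 = sum(v * v for v in x) ** 0.5    # Euclidean length: 2.0
norm_inf = max(abs(v) for v in x)        # largest component: 1.6
```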
### Properties of vector norms

For any vector norm:

- $\|x\| > 0$ if $x \ne 0$
- $\|\gamma x\| = |\gamma| \cdot \|x\|$ for any scalar $\gamma$
- $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality)

In a more general treatment, these properties are taken as the definition of a vector norm. A useful variation on the triangle inequality:

- $\big|\, \|x\| - \|y\| \,\big| \le \|x - y\|$

### Matrix norms

- The matrix norm corresponding to a given vector norm is defined by
  $\|A\| = \max_{x \ne 0} \dfrac{\|Ax\|}{\|x\|}$
- The norm of a matrix measures the maximum stretching the matrix does to any vector in the given vector norm
- The matrix norm corresponding to the vector 1-norm is the maximum absolute column sum:
  $\|A\|_1 = \max_j \sum_{i=1}^{n} |a_{ij}|$
- The matrix norm corresponding to the vector $\infty$-norm is the maximum absolute row sum:
  $\|A\|_\infty = \max_i \sum_{j=1}^{n} |a_{ij}|$
- A handy way to remember these is that the matrix norms agree with the corresponding vector norms for an $n \times 1$ matrix

### Properties of matrix norms

- Any matrix norm satisfies:
  - $\|A\| > 0$ if $A \ne 0$
  - $\|\gamma A\| = |\gamma| \cdot \|A\|$ for any scalar $\gamma$
  - $\|A + B\| \le \|A\| + \|B\|$
- The matrix norms we have defined also satisfy:
  - $\|AB\| \le \|A\| \cdot \|B\|$
  - $\|Ax\| \le \|A\| \cdot \|x\|$ for any vector $x$

### Condition number

- The condition number of a square nonsingular matrix $A$ is defined by
  $\mathrm{cond}(A) = \|A\| \cdot \|A^{-1}\|$
- By convention, $\mathrm{cond}(A) = \infty$ if $A$ is singular
- Since
  $\mathrm{cond}(A) = \left(\max_{x \ne 0} \dfrac{\|Ax\|}{\|x\|}\right) \cdot \left(\min_{x \ne 0} \dfrac{\|Ax\|}{\|x\|}\right)^{-1},$
  the condition number measures the ratio of the maximum stretching to the maximum shrinking the matrix does to any nonzero vectors
- Large $\mathrm{cond}(A)$ means $A$ is nearly singular

### Properties of condition number

- For any matrix $A$, $\mathrm{cond}(A) \ge 1$
- For the identity matrix, $\mathrm{cond}(I) = 1$
- For any matrix $A$ and scalar $\gamma$, $\mathrm{cond}(\gamma A) = \mathrm{cond}(A)$
- For any diagonal matrix $D = \mathrm{diag}(d_i)$, $\mathrm{cond}(D) = \dfrac{\max_i |d_i|}{\min_i |d_i|}$

### Computing the condition number

- The definition of the condition number involves the matrix inverse, so it is nontrivial to compute
- Computing the condition number from its definition would require much more work than computing the solution whose accuracy is to be assessed
- In practice, the condition number is estimated inexpensively as a byproduct of the solution process
- The matrix norm $\|A\|$ is easily computed as the maximum absolute column sum (or row sum, depending on the norm used)
- Estimating $\|A^{-1}\|$ at low cost is more challenging
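The easily computed matrix norms mentioned above reduce to simple sums; a sketch for a square matrix stored as a list of rows (the sample matrix is the nonsingular $2 \times 2$ example from earlier):

```python
def norm_1(A):
    """Maximum absolute column sum of a square matrix."""
    n = len(A)
    return max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))

def norm_inf(A):
    """Maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in A)

A = [[2.0, 3.0],
     [5.0, 4.0]]
# column sums are |2|+|5| = 7 and |3|+|4| = 7; row sums are 5 and 9
```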
### Computing the condition number, continued

- From the properties of norms, if $Az = y$, then
  $\dfrac{\|z\|}{\|y\|} \le \|A^{-1}\|,$
  and the bound is achieved for an optimally chosen $y$
- Efficient condition estimators heuristically pick $y$ with a large ratio $\|z\|/\|y\|$, yielding a good estimate for $\|A^{-1}\|$
- Good software packages for linear systems provide an efficient and reliable condition estimator

### Error bounds

- The condition number yields an error bound for the computed solution to a linear system
- Let $x$ be the solution to $Ax = b$, and let $\hat{x}$ be the solution to $A\hat{x} = b + \Delta b$
- With $\Delta x = \hat{x} - x$,
  $b + \Delta b = A(x + \Delta x) = Ax + A\,\Delta x,$
  which leads to the bound
  $\dfrac{\|\Delta x\|}{\|x\|} \le \mathrm{cond}(A)\,\dfrac{\|\Delta b\|}{\|b\|}$
  for the possible relative change in the solution $x$ due to a relative change in the right-hand side $b$

### Error bounds, continued

- A similar result holds for a relative change in the matrix: if $(A + E)\hat{x} = b$, then
  $\dfrac{\|\Delta x\|}{\|\hat{x}\|} \le \mathrm{cond}(A)\,\dfrac{\|E\|}{\|A\|}$
- If the input data are accurate to machine precision, then the bound for the relative error in the solution $x$ becomes
  $\dfrac{\|\Delta x\|}{\|x\|} \lesssim \mathrm{cond}(A)\,\epsilon_{\mathrm{mach}}$
- The computed solution loses about $\log_{10}(\mathrm{cond}(A))$ decimal digits of accuracy relative to the accuracy of the input

### Error bounds — illustration

- In two dimensions, the uncertainty in the intersection point of two lines depends on whether the lines are nearly parallel (the slide contrasts a well-conditioned and an ill-conditioned pair of lines)

### Error bounds — caveats

- The normwise analysis bounds the relative error in the largest components of the solution; the relative error in smaller components can be much larger
- Componentwise error bounds can be obtained, but are somewhat more complicated
- The conditioning of a system is affected by scaling
- Ill-conditioning can result from poor scaling as well as near singularity
- Rescaling can help the former, but not the latter

### Residual

- The residual vector of an approximate solution $\hat{x}$ to the linear system $Ax = b$ is defined by
  $r = b - A\hat{x}$
- In theory, if $A$ is nonsingular, then $\|\hat{x} - x\| = 0$ if and only if $\|r\| = 0$, but they are not necessarily small simultaneously
- Since
  $\dfrac{\|\Delta x\|}{\|\hat{x}\|} \le \mathrm{cond}(A)\,\dfrac{\|r\|}{\|A\| \cdot \|\hat{x}\|},$
  a small relative residual implies a small relative error in the approximate solution only if $A$ is well-conditioned
- If the computed solution $\hat{x}$ exactly satisfies $(A + E)\hat{x} = b$, then
  $\dfrac{\|r\|}{\|A\| \cdot \|\hat{x}\|} \le \dfrac{\|E\|}{\|A\|},$
  so a large relative residual implies a large backward error in the matrix
### Residual, continued

- A large relative residual thus indicates a large backward error, and the algorithm used to compute the solution is unstable
- A stable algorithm yields a small relative residual regardless of the conditioning of the nonsingular system
- A small residual is easy to obtain, but does not necessarily imply that the computed solution is accurate

### Solving linear systems

- To solve a linear system, transform it into one whose solution is the same but easier to compute
- What type of transformation of a linear system leaves the solution unchanged?
- We can premultiply (from the left) both sides of the linear system $Ax = b$ by any nonsingular matrix $M$ without affecting the solution
- The solution to $MAx = Mb$ is given by
  $x = (MA)^{-1}Mb = A^{-1}M^{-1}Mb = A^{-1}b$

### Permutations

- A permutation matrix $P$ has one 1 in each row and column and zeros elsewhere; i.e., it is an identity matrix with its rows or columns permuted
- Note that $P^{-1} = P^T$
- Premultiplying both sides of the system by a permutation matrix, $PAx = Pb$, reorders the rows, but the solution $x$ is unchanged
- Postmultiplying $A$ by a permutation matrix, $APx = b$, reorders the columns, which permutes the components of the original solution:
  $x = (AP)^{-1}b = P^{-1}A^{-1}b = P^T A^{-1} b$

### Diagonal scaling

- Row scaling: premultiplying both sides of the system by a nonsingular diagonal matrix $D$, $DAx = Db$, multiplies each row of the matrix and right-hand side by the corresponding diagonal entry of $D$, but the solution $x$ is unchanged
- Column scaling: postmultiplying $A$ by $D$, $ADx = b$, multiplies each column of the matrix by the corresponding diagonal entry of $D$, which rescales the original solution:
  $x = (AD)^{-1}b = D^{-1}A^{-1}b$

### Triangular linear systems

- What type of linear system is easy to solve?
- If one equation in the system involves only one component of the solution (i.e., only one entry in that row of the matrix is nonzero), then that component can be computed by division
- If another equation in the system involves only one additional solution component, then by substituting the one known component into it, we can solve for the other component
- If this pattern continues, with only one new solution component per equation, then all components of the solution can be computed in succession
- A system with this property is called triangular
### Triangular matrices

- Two specific triangular forms are of particular interest:
  - lower triangular: all entries above the main diagonal are zero, $a_{ij} = 0$ for $i < j$
  - upper triangular: all entries below the main diagonal are zero, $a_{ij} = 0$ for $i > j$
- The successive substitution process described earlier is especially easy to formulate for lower or upper triangular systems
- Any triangular matrix can be permuted into upper or lower triangular form by a suitable row permutation

### Forward substitution

Forward substitution for the lower triangular system $Lx = b$:

$x_1 = b_1 / \ell_{11}, \qquad x_i = \left(b_i - \sum_{j=1}^{i-1} \ell_{ij} x_j\right) \Big/ \ell_{ii}, \quad i = 2, \ldots, n$

```
for j = 1 to n             { loop over columns }
    if l(j,j) = 0 then stop    { stop if matrix is singular }
    x(j) = b(j) / l(j,j)       { compute solution component }
    for i = j+1 to n
        b(i) = b(i) - l(i,j) * x(j)   { update right-hand side }
    end
end
```

### Back substitution

Back substitution for the upper triangular system $Ux = b$:

$x_n = b_n / u_{nn}, \qquad x_i = \left(b_i - \sum_{j=i+1}^{n} u_{ij} x_j\right) \Big/ u_{ii}, \quad i = n-1, \ldots, 1$

```
for j = n to 1, step -1    { loop backwards over columns }
    if u(j,j) = 0 then stop    { stop if matrix is singular }
    x(j) = b(j) / u(j,j)       { compute solution component }
    for i = 1 to j-1
        b(i) = b(i) - u(i,j) * x(j)   { update right-hand side }
    end
end
```

### Example: triangular linear system

Consider the upper triangular system (the same system that arises in the Gaussian elimination example below):

$\begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 8 \end{bmatrix}$

- Using back-substitution, the last equation, $4x_3 = 8$, is solved directly to obtain $x_3 = 2$
- Next, $x_3$ is substituted into the second equation to obtain $x_2 = 2$
- Finally, both $x_3$ and $x_2$ are substituted into the first equation to obtain $x_1 = -1$

### Elimination

- To transform a general linear system into triangular form, we need to replace selected nonzero entries of the matrix by zeros
- This can be accomplished by taking linear combinations of rows
- Consider the 2-vector $a = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$
- If $a_1 \ne 0$, then
  $\begin{bmatrix} 1 & 0 \\ -a_2/a_1 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} a_1 \\ 0 \end{bmatrix}$

### Elementary elimination matrices

- More generally, we can annihilate all entries below the $k$th position in an $n$-vector $a$ by a transformation $M_k a$, where $M_k$ is the identity matrix with the entries $-m_{k+1}, \ldots, -m_n$ inserted below the diagonal in column $k$, and
  $m_i = a_i / a_k, \quad i = k+1, \ldots, n$
- The result is a vector whose entries below position $k$ are all zero
- The divisor $a_k$, called the pivot, must be nonzero
- The matrix $M_k$, called an elementary elimination matrix, adds a multiple of row $k$ to each subsequent row, with the multipliers $m_i$ chosen so that the result is zero
- $M_k$ is unit lower triangular and nonsingular
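The back-substitution pseudocode above translates directly into Python; a sketch, using the worked triangular system as a check:

```python
def back_substitution(U, b):
    """Solve the upper triangular system U x = b by back substitution,
    following the column-oriented loop above (works on a copy of b)."""
    n = len(b)
    b = list(b)
    x = [0.0] * n
    for j in range(n - 1, -1, -1):   # loop backwards over columns
        if U[j][j] == 0.0:
            raise ZeroDivisionError("matrix is singular")
        x[j] = b[j] / U[j][j]        # compute solution component
        for i in range(j):
            b[i] -= U[i][j] * x[j]   # update right-hand side
    return x

U = [[2.0, 4.0, -2.0],
     [0.0, 1.0,  1.0],
     [0.0, 0.0,  4.0]]
x = back_substitution(U, [2.0, 4.0, 8.0])   # expect [-1, 2, 2]
```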
### Elementary elimination matrices, continued

- $M_k = I - m_k e_k^T$, where $m_k = [0, \ldots, 0, m_{k+1}, \ldots, m_n]^T$ and $e_k$ is the $k$th column of the identity matrix
- $M_k^{-1} = I + m_k e_k^T$, which means that $M_k^{-1} = L_k$ is the same as $M_k$ except that the signs of the multipliers are reversed
- If $M_j$, $j > k$, is another elementary elimination matrix, with vector of multipliers $m_j$, then
  $M_k M_j = I - m_k e_k^T - m_j e_j^T + m_k (e_k^T m_j) e_j^T = I - m_k e_k^T - m_j e_j^T,$
  since $e_k^T m_j = 0$ (the first $j$ entries of $m_j$ are zero), which means the product is essentially their "union"; the same holds for the product of the inverses, $L_k L_j$
- In the worked example below, note that $L_1 = M_1^{-1}$, $L_2 = M_2^{-1}$, and the product $L_1 L_2$ simply collects the sign-reversed multipliers from both elimination steps

### Gaussian elimination

- To reduce a general linear system $Ax = b$ to upper triangular form, first choose $M_1$, with $a_{11}$ as pivot, to annihilate the first column of $A$ below the first row
- The system becomes $M_1 A x = M_1 b$, but the solution is unchanged
- Next choose $M_2$, using $a_{22}$ as pivot, to annihilate the second column of $M_1 A$ below the second row
- The system becomes $M_2 M_1 A x = M_2 M_1 b$, but the solution is still unchanged
- The process continues for each successive column until all subdiagonal entries have been zeroed

### Gaussian elimination, continued

- The resulting upper triangular linear system
  $M_{n-1} \cdots M_1 A x = M_{n-1} \cdots M_1 b, \quad \text{i.e.,} \quad MAx = Mb,$
  can be solved by back-substitution to obtain the solution to the original linear system $Ax = b$
- The process just described is called Gaussian elimination

### LU factorization

- The product $L_k L_j$ is unit lower triangular if $k < j$, so
  $L = M^{-1} = M_1^{-1} \cdots M_{n-1}^{-1} = L_1 \cdots L_{n-1}$
  is unit lower triangular
- By design, $U = MA$ is upper triangular
- So we have
  $A = LU,$
  with $L$ unit lower triangular and $U$ upper triangular
- Thus, Gaussian elimination produces an LU factorization of the matrix into triangular factors

### LU factorization, continued

- Having obtained the LU factorization, $Ax = b$ becomes $LUx = b$, which can be solved by forward-substitution in the lower triangular system $Ly = b$, followed by back-substitution in the upper triangular system $Ux = y$
- Note that $y = Mb$ is the same as the transformed right-hand side in Gaussian elimination
- Gaussian elimination and LU factorization are two ways of expressing the same solution process
### Example: Gaussian elimination

Use Gaussian elimination to solve the linear system

$Ax = \begin{bmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 8 \\ 10 \end{bmatrix} = b$

To annihilate the subdiagonal entries of the first column of $A$:

$M_1 A = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}\begin{bmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{bmatrix} = \begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 1 & 5 \end{bmatrix}, \qquad M_1 b = \begin{bmatrix} 2 \\ 4 \\ 12 \end{bmatrix}$

To annihilate the subdiagonal entry of the second column of $M_1 A$:

$M_2 M_1 A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix}\begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 1 & 5 \end{bmatrix} = \begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{bmatrix}, \qquad M_2 M_1 b = \begin{bmatrix} 2 \\ 4 \\ 8 \end{bmatrix}$

We have reduced the original system to the equivalent upper triangular system

$Ux = \begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 8 \end{bmatrix} = Mb,$

which can now be solved by back-substitution to obtain $x = [-1\ \ 2\ \ 2]^T$.

To write out the LU factorization explicitly:

$L_1 L_2 = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 1 & 1 \end{bmatrix} = L,$

so that

$A = \begin{bmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 1 & 1 \end{bmatrix}\begin{bmatrix} 2 & 4 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{bmatrix} = LU$

### Row interchanges

- Gaussian elimination breaks down if the leading diagonal entry of the remaining unreduced matrix is zero at any stage
- Easy fix: if the diagonal entry in column $k$ is zero, then interchange row $k$ with some subsequent row having a nonzero entry in column $k$, and then proceed as usual
- If there is no nonzero entry on or below the diagonal in column $k$, then there is nothing to do at this stage, so skip to the next column
- A zero on the diagonal causes the resulting upper triangular matrix $U$ to be singular, but the LU factorization can still be completed
- Subsequent back-substitution will fail, however, as it should for a singular matrix

### Partial pivoting

- In principle, any nonzero value will do as a pivot, but in practice the pivot should be chosen to minimize error propagation
- To avoid amplifying previous rounding errors when multiplying the remaining portion of the matrix by an elementary elimination matrix, the multipliers should not exceed 1 in magnitude
- This can be accomplished by choosing the entry of largest magnitude on or below the diagonal as the pivot at each stage
- Such partial pivoting is essential in practice for a numerically stable implementation of Gaussian elimination for general linear systems
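The worked LU factorization above can be reproduced by a direct implementation of Gaussian elimination. This sketch omits pivoting, which is discussed next, and assumes all pivots are nonzero.

```python
def lu_factor(A):
    """Gaussian elimination without pivoting: return unit lower
    triangular L and upper triangular U with A = L U."""
    n = len(A)
    U = [row[:] for row in A]   # work on a copy of A
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]        # multiplier m_ik = a_ik / a_kk
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]   # subtract m_ik times row k
    return L, U

A = [[ 2.0,  4.0, -2.0],
     [ 4.0,  9.0, -3.0],
     [-2.0, -3.0,  7.0]]
L, U = lu_factor(A)
```

For this matrix the factors match the example: the multipliers 2, −1, and 1 appear below the diagonal of `L`, and `U` is the reduced upper triangular matrix.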
#### LU Factorization with Partial Pivoting

- With partial pivoting, each $M_k$ is preceded by a permutation $P_k$ that interchanges rows to bring the entry of largest magnitude into the diagonal pivot position.
- We still obtain $M A = U$ with $U$ upper triangular, but now $M = M_{n-1} P_{n-1} \cdots M_1 P_1$, and $L = M^{-1}$ is still triangular in a general sense, but not necessarily lower triangular.
- Alternatively, we can write $P A = L U$, where $P = P_{n-1} \cdots P_1$ permutes the rows of $A$ into the order determined by partial pivoting, and now $L$ is lower triangular.

#### Complete Pivoting

- Complete pivoting is a more exhaustive strategy, in which the largest entry in the entire remaining unreduced submatrix is permuted into the diagonal pivot position. It requires interchanging columns as well as rows, leading to the factorization $P A Q = L U$, with $L$ unit lower triangular, $U$ upper triangular, and $P$ and $Q$ permutations.
- Numerical stability of complete pivoting is theoretically superior, but the pivot search is more expensive than for partial pivoting. Numerical stability of partial pivoting is more than adequate in practice, so it is almost always used in solving linear systems by Gaussian elimination.

#### Need for Pivoting

- The need for pivoting has nothing to do with whether the matrix is singular or nearly singular. For example,

  $$A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

  is nonsingular yet has no LU factorization unless rows are interchanged, whereas

  $$B = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$

  is singular yet has an LU factorization.

#### Example: Small Pivots

- To illustrate the effect of small pivots, consider

  $$A = \begin{bmatrix} \epsilon & 1 \\ 1 & 1 \end{bmatrix},$$

  where $\epsilon$ is a positive number smaller than $\epsilon_{\text{mach}}$.
- If rows are not interchanged, then the pivot is $\epsilon$ and the multiplier is $-1/\epsilon$, so

  $$M = \begin{bmatrix} 1 & 0 \\ -1/\epsilon & 1 \end{bmatrix}, \qquad L = \begin{bmatrix} 1 & 0 \\ 1/\epsilon & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} \epsilon & 1 \\ 0 & 1 - 1/\epsilon \end{bmatrix} = \begin{bmatrix} \epsilon & 1 \\ 0 & -1/\epsilon \end{bmatrix}$$

  in floating-point arithmetic, but then

  $$L U = \begin{bmatrix} 1 & 0 \\ 1/\epsilon & 1 \end{bmatrix} \begin{bmatrix} \epsilon & 1 \\ 0 & -1/\epsilon \end{bmatrix} = \begin{bmatrix} \epsilon & 1 \\ 1 & 0 \end{bmatrix} \neq A$$

- Using a small pivot, and a correspondingly large multiplier, has caused loss of information in the transformed matrix.
- If rows are interchanged, then the pivot is 1 and the multiplier is $-\epsilon$, so

  $$M = \begin{bmatrix} 1 & 0 \\ -\epsilon & 1 \end{bmatrix}, \qquad L = \begin{bmatrix} 1 & 0 \\ \epsilon & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} 1 & 1 \\ 0 & 1 - \epsilon \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$$

  in floating-point arithmetic. Thus

  $$L U = \begin{bmatrix} 1 & 0 \\ \epsilon & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ \epsilon & 1 \end{bmatrix},$$

  which is correct after permutation.

#### Pivoting Not Always Required

- Although pivoting is generally required for stability of Gaussian elimination, pivoting is not required for some important classes of matrices:
  - Diagonally dominant: $\displaystyle\sum_{i=1,\, i \neq j}^{n} |a_{ij}| < |a_{jj}|, \quad j = 1, \dots, n$
  - Symmetric positive definite: $A = A^T$ and $x^T A x > 0$ for all $x \neq 0$
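The small-pivot effect above can be seen directly in double precision. This sketch (names mine) solves the $2 \times 2$ system $\epsilon x_1 + x_2 = 1$, $x_1 + x_2 = 2$, whose exact solution is very close to $(1, 1)$, both with and without a row interchange:

```python
def solve2x2(a11, a12, a21, a22, b1, b2, pivot):
    """Solve a 2x2 linear system by elimination; optionally swap rows first."""
    if pivot and abs(a21) > abs(a11):
        # partial pivoting: bring the larger-magnitude entry to the pivot spot
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    m = a21 / a11                  # multiplier
    u22 = a22 - m * a12            # transformed (2,2) entry
    y2 = b2 - m * b1               # transformed right-hand side
    x2 = y2 / u22
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

eps = 1e-20                        # tiny pivot, far below machine epsilon
print(solve2x2(eps, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=False))  # (0.0, 1.0): x1 lost
print(solve2x2(eps, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=True))   # (1.0, 1.0)
```

Without the interchange, the huge multiplier $1/\epsilon$ swamps the $a_{22}$ and $b_2$ entries, and $x_1$ comes out as 0 instead of (nearly) 1; with the interchange, both components are computed accurately.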
#### Error Analysis of Gaussian Elimination

- The residual $r = b - A\hat{x}$ for a solution $\hat{x}$ computed using Gaussian elimination satisfies

  $$\frac{\|r\|}{\|A\|\,\|\hat{x}\|} \leq \frac{\|E\|}{\|A\|} \lesssim \rho \, n \, \epsilon_{\text{mach}},$$

  where $E$ is the backward error in the matrix $A$, and the growth factor $\rho$ is the ratio of the largest entry of $U$ to the largest entry of $A$.
- Without pivoting, $\rho$ can be arbitrarily large, so Gaussian elimination without pivoting is unstable. With partial pivoting, $\rho$ can still be as large as $2^{n-1}$, but such behavior is extremely rare.
- There is little or no growth in practice, so

  $$\frac{\|r\|}{\|A\|\,\|\hat{x}\|} \lesssim n \, \epsilon_{\text{mach}},$$

  which means Gaussian elimination with partial pivoting yields a small relative residual regardless of the conditioning of the system.
- Thus a small relative residual does not necessarily imply that the computed solution is close to the true solution, unless the system is well-conditioned.
- Complete pivoting yields an even smaller growth factor, but the additional margin of stability usually is not worth the extra cost.

#### Example: Small Residual, Poor Solution

- Use 3-digit decimal arithmetic to solve

  $$\begin{bmatrix} 0.641 & 0.242 \\ 0.321 & 0.121 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0.883 \\ 0.442 \end{bmatrix}$$

- Gaussian elimination with partial pivoting yields the triangular system

  $$\begin{bmatrix} 0.641 & 0.242 \\ 0 & 0.000242 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0.883 \\ 0.000383 \end{bmatrix}$$

- Back-substitution then gives the solution $\hat{x} = (0.782,\ 1.58)^T$. The exact residual for this solution is

  $$r = b - A\hat{x} = \begin{bmatrix} -0.000622 \\ -0.000202 \end{bmatrix}$$

- The residual is as small as we can expect using 3-digit arithmetic, but the exact solution is $x = (1.00,\ 1.00)^T$, so the error is almost as large as the solution itself.
- The cause of this phenomenon is that the matrix is nearly singular ($\text{cond}(A) > 4000$). The division that determines $x_2$ is between two quantities that are both on the order of rounding error, so the result is essentially arbitrary.
- When that arbitrary value for $x_2$ is substituted into the first equation, the value for $x_1$ is computed so that the first equation is satisfied, yielding a small residual but a poor solution.

#### Implementation of Gaussian Elimination

- Gaussian elimination has the general form of a triple-nested loop over indices $i$, $j$, and $k$, whose body is the single assignment

  $$a_{ij} = a_{ij} - (a_{ik} / a_{kk}) \, a_{kj}$$

- The indices of the loops can be taken in any order, for a total of $3! = 6$ different arrangements. These variations have different memory access patterns, which may cause their performance to vary widely on different computers.
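As a sketch of one of the six loop orderings (the $kij$ form), here is an in-place version in which the multipliers overwrite the strict lower triangle of the matrix, as a practical implementation would do (variable names are mine; no pivoting is performed):

```python
def lu_inplace(a):
    """Gaussian elimination, kij ordering, no pivoting.
    On return, the upper triangle of a holds U, and the strict lower
    triangle holds the multipliers (L without its unit diagonal)."""
    n = len(a)
    for k in range(n - 1):
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]                 # store multiplier m_ik in place
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j]   # update remaining submatrix
    return a

a = [[2.0, 4.0, -2.0], [4.0, 9.0, -3.0], [-2.0, -3.0, 7.0]]
lu_inplace(a)
# a -> [[2, 4, -2], [2, 1, 1], [-1, 1, 4]]:
# U on and above the diagonal, multipliers below it
```

On the matrix from the earlier example this reproduces $U$ in the upper triangle and the multipliers $2$, $-1$, $1$ of $L$ below the diagonal, with no extra array needed.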
#### Uniqueness of LU Factorization

- Despite the variations in computing it, the LU factorization is unique up to diagonal scaling of the factors. Provided the row pivot sequence is the same, if we have two LU factorizations $P A = L U = \hat{L} \hat{U}$, then $\hat{L}^{-1} L = \hat{U} U^{-1} = D$ is both lower and upper triangular, hence diagonal.
- If both $L$ and $\hat{L}$ are unit lower triangular, then $D$ must be the identity matrix, so $L = \hat{L}$ and $U = \hat{U}$.
- Uniqueness is made explicit in the LDU factorization $P A = L D U$, with $L$ unit lower triangular, $U$ unit upper triangular, and $D$ diagonal.

#### Storage Management

- The elementary elimination matrices $M_k$, their inverses $L_k$, and the permutation matrices $P_k$ used in the formal description of the LU factorization process are not formed explicitly in an actual implementation.
- $U$ overwrites the upper triangle of $A$, the multipliers in $L$ overwrite the strict lower triangle of $A$, and the unit diagonal of $L$ need not be stored.
- Row interchanges usually are not done explicitly: an auxiliary integer vector keeps track of the row order, while the rows remain in their original locations.

#### Complexity of Solving Linear Systems

- LU factorization requires about $n^3/3$ floating-point multiplications and a similar number of additions.
- Forward- and back-substitution for a single right-hand-side vector together require about $n^2$ multiplications and a similar number of additions.
- One can also solve a linear system by matrix inversion: $x = A^{-1} b$. Computing $A^{-1}$ is tantamount to solving $n$ linear systems, requiring an LU factorization of $A$ followed by $n$ forward- and back-substitutions, one for each column of the identity matrix.
- The operation count for inversion is therefore about $n^3$, three times as expensive as LU factorization.
#### Inversion vs. Factorization

- Even with many right-hand sides $b$, inversion never overcomes its higher initial cost, since each matrix-vector multiplication $A^{-1} b$ requires $n^2$ operations, similar to the cost of forward- and back-substitution.
- Inversion also gives a less accurate answer. For example, solving $3x = 18$ by division gives $x = 18/3 = 6$, but inversion gives $x = 3^{-1} \times 18 = 0.333 \times 18 = 5.99$ using 3-digit arithmetic.
- Matrix inverses often occur as convenient notation in formulas, but an explicit inverse is rarely required to implement such formulas. For example, the product $A^{-1} B$ should be computed by LU factorization of $A$, followed by forward- and back-substitutions using each column of $B$.

#### Gauss-Jordan Elimination

- In Gauss-Jordan elimination, the matrix is reduced to diagonal rather than triangular form: row combinations are used to annihilate entries above as well as below the diagonal.
- The elimination matrix used for a given column vector $a$ is of the form

  $$\begin{bmatrix}
  1 & \cdots & 0 & -m_1 & 0 & \cdots & 0 \\
  \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots \\
  0 & \cdots & 1 & -m_{k-1} & 0 & \cdots & 0 \\
  0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
  0 & \cdots & 0 & -m_{k+1} & 1 & \cdots & 0 \\
  \vdots & & \vdots & \vdots & \vdots & \ddots & \vdots \\
  0 & \cdots & 0 & -m_n & 0 & \cdots & 1
  \end{bmatrix}
  \begin{bmatrix} a_1 \\ \vdots \\ a_{k-1} \\ a_k \\ a_{k+1} \\ \vdots \\ a_n \end{bmatrix}
  =
  \begin{bmatrix} 0 \\ \vdots \\ 0 \\ a_k \\ 0 \\ \vdots \\ 0 \end{bmatrix},$$

  where $m_i = a_i / a_k$.
- Gauss-Jordan elimination requires about $n^3/2$ multiplications and a similar number of additions, 50 percent more expensive than LU factorization.
- During the elimination phase, the same row operations are also applied to the right-hand-side vector (or vectors) of the system. Once the matrix is in diagonal form, the components of the solution are computed by dividing each entry of the transformed right-hand side by the corresponding diagonal entry of the matrix.
- The latter requires only $n$ divisions, but this is not enough cheaper to offset the more costly elimination phase.

#### Solving Modified Problems

- If the right-hand side of a linear system changes but the matrix does not, then the LU factorization need not be repeated to solve the new system; only the forward- and back-substitution need be repeated for the new right-hand side.
- This is a substantial savings in work, since additional triangular solutions cost only $\Theta(n^2)$ work, in contrast to the $\Theta(n^3)$ cost of factorization.

#### Sherman-Morrison Formula

- Sometimes refactorization can be avoided even when the matrix does change. The Sherman-Morrison formula gives the inverse of a matrix resulting from a rank-one change to a matrix whose inverse is already known:

  $$(A - u v^T)^{-1} = A^{-1} + A^{-1} u \, (1 - v^T A^{-1} u)^{-1} v^T A^{-1},$$

  where $u$ and $v$ are $n$-vectors. Evaluating this formula requires only $\Theta(n^2)$ work (for matrix-vector multiplications) rather than the $\Theta(n^3)$ work required for inversion.
#### Rank-One Updating of Solution

- To solve the linear system $(A - u v^T) x = b$ with the new matrix, use the Sherman-Morrison formula to obtain

  $$x = (A - u v^T)^{-1} b = A^{-1} b + A^{-1} u \, (1 - v^T A^{-1} u)^{-1} v^T A^{-1} b,$$

  which can be implemented by the following steps:
  - Solve $A z = u$ for $z$, so that $z = A^{-1} u$.
  - Solve $A y = b$ for $y$, so that $y = A^{-1} b$.
  - Compute $x = y + \dfrac{v^T y}{1 - v^T z}\, z$.
- If $A$ is already factored, this procedure requires only triangular solutions and inner products, so only $\Theta(n^2)$ work and no explicit inverses.

#### Example: Rank-One Updating of Solution

- Consider the rank-one modification

  $$\begin{bmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -1 & 7 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 8 \\ 10 \end{bmatrix}$$

  (with the $(3,2)$ entry changed) of the system whose LU factorization was computed in the earlier example.
- One way to choose the update vectors is $u = (0,\ 0,\ -2)^T$ and $v = (0,\ 1,\ 0)^T$, so that the matrix of the modified system is $A - u v^T$.
- Using the LU factorization of $A$ to solve $A z = u$ and $A y = b$ gives $z = (-3/2,\ 1/2,\ -1/2)^T$ and $y = (-1,\ 2,\ 2)^T$. The final step computes the updated solution

  $$x = y + \frac{v^T y}{1 - v^T z}\, z = \begin{bmatrix} -1 \\ 2 \\ 2 \end{bmatrix} + \frac{2}{1 - 1/2} \begin{bmatrix} -3/2 \\ 1/2 \\ -1/2 \end{bmatrix} = \begin{bmatrix} -7 \\ 4 \\ 0 \end{bmatrix}$$

- We have thus computed the solution to the modified system without factoring the modified matrix.

#### Scaling Linear Systems

- In principle, the solution to a linear system is unaffected by diagonal scaling of the matrix and right-hand-side vector. In practice, scaling affects both the conditioning of the matrix and the selection of pivots in Gaussian elimination, which in turn affect numerical accuracy in finite-precision arithmetic.
- It is usually best if all the entries (or uncertainties in the entries) of the matrix have about the same size. Sometimes it may be obvious how to accomplish this by choice of measurement units for the variables, but there is no foolproof method for doing so in general. Scaling can also introduce rounding errors if not done carefully.

#### Example: Scaling

- The linear system

  $$\begin{bmatrix} 1 & 0 \\ 0 & \epsilon \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ \epsilon \end{bmatrix}$$

  has condition number $1/\epsilon$, so it is ill-conditioned if $\epsilon$ is small.
- If the second row is multiplied by $1/\epsilon$, then the system becomes perfectly well-conditioned: the apparent ill-conditioning was due purely to poor scaling. In general, it is usually much less obvious how to correct poor scaling.
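The rank-one update example can be reproduced by reusing the triangular factors of the original matrix $A$, never touching the modified matrix itself. This is a sketch in pure Python; the helper names are my own:

```python
def lower_solve(L, b):
    """Forward-substitution for unit lower triangular L."""
    y = []
    for i in range(len(b)):
        y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))
    return y

def upper_solve(U, y):
    """Back-substitution for upper triangular U."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

def lu_solve(L, U, b):
    return upper_solve(U, lower_solve(L, b))

# LU factors of the ORIGINAL matrix A, as computed earlier in the notes
L = [[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [-1.0, 1.0, 1.0]]
U = [[2.0, 4.0, -2.0], [0.0, 1.0, 1.0], [0.0, 0.0, 4.0]]
b = [2.0, 8.0, 10.0]

# rank-one change: the (3,2) entry goes from -3 to -1, i.e. A - u v^T
u = [0.0, 0.0, -2.0]
v = [0.0, 1.0, 0.0]

z = lu_solve(L, U, u)                            # z = A^{-1} u
y = lu_solve(L, U, b)                            # y = A^{-1} b
vy = sum(vi * yi for vi, yi in zip(v, y))        # v^T y
vz = sum(vi * zi for vi, zi in zip(v, z))        # v^T z
x = [yi + (vy / (1.0 - vz)) * zi for yi, zi in zip(y, z)]
print(x)   # -> [-7.0, 4.0, 0.0]
```

Only two triangular solves and two inner products are needed, which is the promised $\Theta(n^2)$ work.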
#### Iterative Refinement

- Given an approximate solution $x_0$ to the linear system $Ax = b$, compute the residual $r_0 = b - A x_0$. Now solve the linear system $A z_0 = r_0$ and take $x_1 = x_0 + z_0$ as a new and better approximate solution, since

  $$A x_1 = A (x_0 + z_0) = A x_0 + A z_0 = (b - r_0) + r_0 = b$$

- The process can be repeated to refine the solution successively until convergence, potentially producing a solution accurate to full machine precision.
- Iterative refinement requires double the storage, since both the original matrix and its LU factorization are needed, and, due to cancellation, the residual usually must be computed with higher precision for the refinement to produce meaningful improvement.
- For these reasons, iterative refinement is often impractical to use routinely, but it can still be useful in some circumstances; for example, it can sometimes stabilize an otherwise unstable algorithm.

#### Special Types of Linear Systems

- Work and storage can often be saved in solving a linear system if the matrix has special properties. Examples include:
  - Symmetric: $A = A^T$, i.e., $a_{ij} = a_{ji}$ for all $i, j$.
  - Positive definite: $x^T A x > 0$ for all $x \neq 0$.
  - Band: $a_{ij} = 0$ for all $|i - j| > \beta$, where $\beta$ is the bandwidth of $A$.
  - Sparse: most entries of $A$ are zero.

#### Symmetric Positive Definite Matrices

- If $A$ is symmetric and positive definite, then its LU factorization can be arranged so that $U = L^T$, which gives the Cholesky factorization $A = L L^T$, where $L$ is lower triangular with positive diagonal entries.
- An algorithm for computing it can be derived by equating corresponding entries of $A$ and $L L^T$. In the $2 \times 2$ case, for example,

  $$\begin{bmatrix} a_{11} & a_{21} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} l_{11} & 0 \\ l_{21} & l_{22} \end{bmatrix} \begin{bmatrix} l_{11} & l_{21} \\ 0 & l_{22} \end{bmatrix}$$

  implies $l_{11} = \sqrt{a_{11}}$, $l_{21} = a_{21} / l_{11}$, and $l_{22} = \sqrt{a_{22} - l_{21}^2}$.
- One way to write the resulting general algorithm, in which the Cholesky factor $L$ overwrites the original matrix $A$, is:

      for j = 1 to n
          for k = 1 to j-1
              for i = j to n
                  a(i,j) = a(i,j) - a(i,k) * a(j,k)
          a(j,j) = sqrt(a(j,j))
          for k = j+1 to n
              a(k,j) = a(k,j) / a(j,j)
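A minimal Cholesky sketch in Python — a variant that builds $L$ in a separate array rather than overwriting $A$ as in the pseudocode above, but computing the same factor (names mine; assumes $A$ is symmetric positive definite):

```python
from math import sqrt

def cholesky(A):
    """Cholesky factorization A = L * L^T for symmetric positive definite A.
    Builds L column by column; only the lower triangle of A is read."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = sqrt(s)          # argument is positive when A is SPD
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

A = [[4.0, 2.0, 2.0], [2.0, 5.0, 3.0], [2.0, 3.0, 6.0]]
L = cholesky(A)
# L -> [[2, 0, 0], [1, 2, 0], [1, 1, 2]], and L * L^T recovers A
```

The upper triangle of `A` is never touched, matching the note above that only the lower triangular portion need be stored.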
#### Cholesky Factorization (continued)

- Features of the Cholesky algorithm for symmetric positive definite matrices:
  - All $n$ square roots are of positive numbers, so the algorithm is well defined.
  - No pivoting is required to maintain numerical stability.
  - Only the lower triangle of $A$ is accessed, so the upper triangular portion need not be stored.
  - Only about $n^3/6$ multiplications and a similar number of additions are required.
- Thus Cholesky factorization requires only about half the work and half the storage compared with LU factorization of a general matrix by Gaussian elimination, and it also avoids the need for pivoting.

#### Symmetric Indefinite Systems

- For symmetric indefinite $A$, Cholesky factorization is not applicable, and some form of pivoting is generally required for numerical stability.
- A factorization of the form $P A P^T = L D L^T$, with $L$ unit lower triangular and $D$ either tridiagonal or block diagonal with $1 \times 1$ and $2 \times 2$ diagonal blocks, can be computed stably using a symmetric pivoting strategy. In either case, the cost is comparable to that of Cholesky factorization.

#### Band Matrices

- Gaussian elimination for band matrices differs little from the general case; only the ranges of the loops change. Typically the matrix is stored in an array by diagonals, to avoid storing the zero entries.
- If pivoting is required for numerical stability, the bandwidth can grow, but by no more than double.
- A general-purpose solver for arbitrary bandwidth is similar to code for Gaussian elimination for general matrices; for fixed small bandwidth, a band solver can be extremely simple, especially if pivoting is not required for stability.

#### Tridiagonal Matrices

- For a tridiagonal matrix with diagonal entries $b_i$, subdiagonal entries $a_i$, and superdiagonal entries $c_i$, Gaussian elimination without pivoting reduces to the recurrence

  $$d_1 = b_1; \qquad m_i = a_i / d_{i-1}, \quad d_i = b_i - m_i \, c_{i-1}, \quad i = 2, \dots, n,$$

  giving the factors

  $$L = \begin{bmatrix} 1 & & & \\ m_2 & 1 & & \\ & \ddots & \ddots & \\ & & m_n & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} d_1 & c_1 & & \\ & d_2 & c_2 & \\ & & \ddots & c_{n-1} \\ & & & d_n \end{bmatrix}$$

#### General Band Matrices

- In general, a band system of bandwidth $\beta$ requires $O(\beta n)$ storage, and its factorization requires $O(\beta^2 n)$ work. Compared with a full system, the savings are substantial if $\beta \ll n$.
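The tridiagonal recurrence above extends to a complete solver (often called the Thomas algorithm) by carrying the right-hand side through the same elimination. A sketch, with names mine, valid only when no pivoting is needed:

```python
def tridiag_solve(a, b, c, r):
    """Solve a tridiagonal system without pivoting.
    b: diagonal (length n), a: subdiagonal (a[0] unused),
    c: superdiagonal (c[-1] unused), r: right-hand side.
    Uses d_1 = b_1, m_i = a_i/d_{i-1}, d_i = b_i - m_i*c_{i-1}."""
    n = len(b)
    d = [0.0] * n
    y = [0.0] * n
    d[0], y[0] = b[0], r[0]
    for i in range(1, n):
        m = a[i] / d[i - 1]              # multiplier m_i
        d[i] = b[i] - m * c[i - 1]       # eliminated diagonal
        y[i] = r[i] - m * y[i - 1]       # same operation on the rhs
    x = [0.0] * n                        # back-substitution on the bidiagonal U
    x[-1] = y[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (y[i] - c[i] * x[i + 1]) / d[i]
    return x

# the system 2x1 - x2 = 1, -x1 + 2x2 - x3 = 0, -x2 + 2x3 = 1,
# whose exact solution is (1, 1, 1)
x = tridiag_solve(a=[0.0, -1.0, -1.0], b=[2.0, 2.0, 2.0],
                  c=[-1.0, -1.0, 0.0], r=[1.0, 0.0, 1.0])
```

This uses $O(n)$ storage and $O(n)$ work, the $\beta = 1$ case of the band-matrix counts above.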
#### Iterative Methods for Linear Systems

- Gaussian elimination is a direct method for solving a linear system, producing the exact solution in a finite number of steps (in exact arithmetic).
- Iterative methods begin with an initial guess for the solution and successively improve it until the desired accuracy is attained. In theory it might take an infinite number of iterations to converge to the exact solution, but in practice the iterations are terminated when the residual is as small as desired.
- For some types of problems, iterative methods have significant advantages over direct methods. We will study specific iterative methods later, when we consider the solution of partial differential equations.

#### Software for Linear Systems

- LINPACK is a software package for solving a wide variety of systems of linear equations, both general dense systems and special systems such as symmetric or banded ones. Solving linear systems is of such fundamental importance in scientific computing that LINPACK has become a standard benchmark for comparing the performance of computers.
- LAPACK is a more recent replacement for LINPACK, featuring higher performance on modern computer architectures, including some parallel computers. Both LINPACK and LAPACK are available from Netlib.

#### Basic Linear Algebra Subprograms (BLAS)

- The high-level routines in LINPACK and LAPACK are based on lower-level Basic Linear Algebra Subprograms (BLAS). The BLAS encapsulate basic operations on vectors and matrices so that they can be optimized for a given computer architecture, while the high-level routines that call them remain portable.
- The higher-level BLAS encapsulate matrix-vector and matrix-matrix operations, for better utilization of memory hierarchies such as cache and virtual memory with paging.
- Generic Fortran versions of the BLAS are available from Netlib, and many computer vendors provide custom versions optimized for their particular systems.
