# LINR ALG & NUM ANLY AMATH 352

This 158 page Class Notes was uploaded by Mossie Pfannerstill V on Wednesday September 9, 2015. The Class Notes belongs to AMATH 352 at University of Washington taught by Staff in Fall. Since its upload, it has received 13 views. For similar materials see /class/192279/amath-352-university-of-washington in Applied Mathematics at University of Washington.


Date Created: 09/09/15

## AMATH 352, Lecture 15 (July 30, 2008)

### 1. Square systems

For a while now, we'll focus on linear systems of equations that are *square*. This refers to the left side of Ax = b, and it means that the system has just as many equations as variables. For example, here's a 4 × 4 system:

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + a_{14}x_4 &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + a_{24}x_4 &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + a_{34}x_4 &= b_3 \\
a_{41}x_1 + a_{42}x_2 + a_{43}x_3 + a_{44}x_4 &= b_4
\end{aligned}$$

If I write this as Ax = b, then the matrix A is square. In particular, we'll be looking at systems that have exactly one solution. These types tend to come up in mathematical models, and they're often quite large, so it's a pretty big deal to describe them in general terms.

### 2. Why square systems?

Square systems are a big deal because they're the only type that can be invertible. For a system Ax = b, invertibility is actually a property of the matrix A that specifies the system.

**Definition.** The matrix A is *invertible* if, for every b of appropriate dimension, there is a unique solution to Ax = b.

So for every potential output b, there's exactly one input x that can be used to get this output. This makes solving Ax = b a well-defined problem: when I specify b, there definitely is a solution, and there's only one, so I don't need to impose any additional conditions.

A matrix can only be invertible under two conditions:

1. It is square.
2. The only solution to Ax = 0 is x = 0. (The only thing that maps to the zero vector is the zero vector itself.)

The first condition essentially states that there are the same number of inputs as outputs. If A has $r$ rows and $d$ columns, then $x \in \mathbb{R}^d$ and $b \in \mathbb{R}^r$. If there are more rows than columns, then the space of possible outputs is larger; if there are more columns than rows, then the space of possible inputs is larger. So the number of rows has to be the same as the number of columns.

The second condition is a spiffy consequence of linearity. It says that if I want to check whether the inputs and outputs correspond one to one (one output per input), then I only have to check this for the zero vector.
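As an aside, the two conditions can be checked numerically. This is a minimal Python/NumPy sketch, not part of the course's Matlab materials; `is_invertible` is a hypothetical helper name, and "trivial null space" is tested via the matrix rank.

```python
import numpy as np

def is_invertible(A, tol=1e-12):
    # Condition 1: A must be square.
    m, n = A.shape
    if m != n:
        return False
    # Condition 2: the only solution to Ax = 0 is x = 0,
    # i.e. A has a trivial null space, which means full rank.
    return bool(np.linalg.matrix_rank(A, tol=tol) == n)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # invertible
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # row 2 = 2 * row 1, so Bx = 0 has nonzero solutions
print(is_invertible(A))  # True
print(is_invertible(B))  # False
```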
### 3. Inverses

If A is invertible, then its inverse, A⁻¹, is defined as follows.

**Definition.** If an n × n matrix A is invertible, then its *inverse*, denoted A⁻¹, is the matrix such that for any $b \in \mathbb{R}^n$, the solution to Ax = b is x = A⁻¹b.

If I know the inverse A⁻¹, then solving any system Ax = b is a piece of cake: I just multiply the right side by A⁻¹, and I'm done.

#### 3.1 How to find matrix inverses

If matrix inverses are so awesome, then naturally I want to know how to find them. Conveniently, it's not too hard to do this using Gaussian elimination. The key thing is a basic fact for all matrices:

$$A^{-1}e_1 = \text{first column of } A^{-1}, \quad A^{-1}e_2 = \text{second column of } A^{-1}, \quad A^{-1}e_3 = \text{third column of } A^{-1}, \; \text{etc.}$$

All I have to do now is see where finding A⁻¹e₁ and so forth fits into the definition of an inverse. Say I'm looking to solve the system Ax₁ = e₁. Then, even if I don't know A⁻¹ yet, the solution satisfies x₁ = A⁻¹e₁. So if I solve for x₁ using some other method, like Gaussian elimination, that'll give me a column of A⁻¹. Then solving the system Ax₂ = e₂ gives me the second column, solving Ax₃ = e₃ gives me the third, and so on.

Let's try this out for a sample system. First I'll randomly generate a 3 × 3 matrix (conveniently, a random square matrix is almost always invertible) and set up e₁:

```matlab
>> A = rand(3,3)
A =
    0.5603    0.4226    0.2338
    0.7646    0.0481    0.4259
    0.7871    0.4875    0.8349
>> e1 = [1 0 0]';
```

Now I'll apply Gaussian elimination to the system Ax₁ = e₁, followed by Jordan elimination:

```matlab
>> [Aech, bech] = gausselim2(A, e1);
>> [Arech, brech] = easyjordan(Aech, bech)
Arech =
     1     0     0
     0     1     0
     0     0     1
brech =
    1.1658
    2.1104
   -2.3312
```

At this step, brech itself is the solution vector. So my x₁, and the first column of A⁻¹, is

```matlab
>> x1 = brech
x1 =
    1.1658
    2.1104
   -2.3312
```

Now I'll do the same thing for Ax₂ = e₂ and Ax₃ = e₃:

```matlab
>> e2 = [0 1 0]';
>> [Aech, bech] = gausselim2(A, e2);
>> [Arech, brech] = easyjordan(Aech, bech);
>> x2 = brech
x2 =
    1.6631
   -1.9754
   -0.4146
>> e3 = [0 0 1]';
>> [Aech, bech] = gausselim2(A, e3);
>> [Arech, brech] = easyjordan(Aech, bech);
>> x3 = brech
x3 =
   -1.1748
    0.4166
    2.0622
```

Assuming all has gone well, now I can stick x₁, x₂, and x₃ together to get A⁻¹. I'll assemble this matrix and call it Ainv:
```matlab
>> Ainv = [x1 x2 x3]
Ainv =
    1.1658    1.6631   -1.1748
    2.1104   -1.9754    0.4166
   -2.3312   -0.4146    2.0622
```

To check this, I'll take a random b:

```matlab
>> b = rand(3,1)
b =
    0.3270
    0.4405
    0.4343
```

If everything has gone as planned, I can solve Ax = b by using the inverse matrix I just calculated:

```matlab
>> x = Ainv*b
x =
    0.6036
    0.0007
   -0.0493
>> A*x
ans =
    0.3270
    0.4405
    0.4343
```

And that's the same thing as b. It worked! Setting x = A⁻¹b solves the system Ax = b.

### 4. Inverting a matrix with less typing

The process in the last section is a little bit tedious. Fortunately, there's a way to streamline it. While calculating A⁻¹, we used three augmented matrices:

$$[A \,|\, e_1], \quad [A \,|\, e_2], \quad [A \,|\, e_3].$$

When I apply Gauss-Jordan elimination to any of these three systems, the operations that are used only depend on A. This means I use the exact same operations all three times. So instead of making three individual augmented matrices, why not just make one big one:

$$[A \,|\, e_1 \; e_2 \; e_3].$$

All I need to do is make sure that my code for doing Gauss-Jordan elimination can handle multiple columns in the augmented portion of the matrix; it just needs to apply any operations used to all columns independently. And conveniently, gausselim2 and easyjordan already do this.

As one final note before moving along, the augmented portion of my matrix is especially simple:

$$[e_1 \; e_2 \; e_3] = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = I_3.$$

This is the 3 × 3 identity matrix, denoted I₃, and in Matlab I can make this matrix by entering eye(3). For larger systems, the augmented portion will still be the identity matrix, just scaled up to the appropriate size; in Matlab, I can generate the identity matrix for any positive integer n by typing eye(n).

For the time being, I'll try this out for the matrix A I've been using. All three of my augmented columns are stored in a single matrix:

```matlab
>> B = eye(3)
B =
     1     0     0
     0     1     0
     0     0     1
```

From here, I just do Gauss-Jordan elimination the same way I've been doing:

```matlab
>> [Aech, Bech] = gausselim2(A, B)
Aech =
    1.0000    0.6193    1.0607
         0    1.0000    0.9053
         0         0    1.0000
Bech =
         0         0    1.2704
         0   -2.3507    2.2833
   -2.3312   -0.4146    2.0622
>> [Arech, Brech] = easyjordan(Aech, Bech)
Arech =
     1     0     0
     0     1     0
     0     0     1
```
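The same streamlined idea, eliminating on the big augmented matrix [A | I], can be sketched outside Matlab. This Python/NumPy version is a rough stand-in for what gausselim2/easyjordan do together (those course files aren't reproduced here); it is naive Gauss-Jordan with no pivoting, so it assumes no zero pivots turn up.

```python
import numpy as np

def invert_by_gauss_jordan(A):
    """Row-reduce [A | I] until the left half is I; the right half is then A^{-1}."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # the big augmented matrix
    for k in range(n):
        M[k] = M[k] / M[k, k]                     # scale the pivot row so the pivot is 1
        for i in range(n):
            if i != k:
                M[i] = M[i] - M[i, k] * M[k]      # clear column k in every other row
    return M[:, n:]

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Ainv = invert_by_gauss_jordan(A)
print(np.allclose(A @ Ainv, np.eye(2)))  # True
```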
```matlab
Brech =
    1.1658    1.6631   -1.1748
    2.1104   -1.9754    0.4166
   -2.3312   -0.4146    2.0622
```

At this point, Brech should contain my solution, which is actually three solution vectors in a row. And comparing this to the Ainv that I found earlier, this is the exact same thing.

### 5. Why isn't this the end of the story?

The method I've outlined for finding inverses is pretty reliable. Gauss-Jordan elimination is nice and stable, meaning it's not going to crash and burn as a result of small errors. Since that's all I'm using, this method of finding inverses is stable too.

However, for large systems, this is a lot of work. If A is 1000 × 1000 (which is actually fairly modest), then finding A⁻¹ involves solving 1000 different systems, each with 1000 equations. So unless I need to solve thousands of systems involving A, this is pretty inefficient.

As it turns out, there are better ways. The most common is the LU decomposition, in which A is written as a matrix product

$$A = LU,$$

where the matrices L and U have particular forms. Essentially, this amounts to bookkeeping the steps that go into Gauss-Jordan elimination in an efficient manner that makes a general system Ax = b easy to solve. So from here, first we'll talk about what matrix products are and how they fit into Gaussian elimination. Then we'll see what LU decompositions are all about.

## AMATH 352, Lecture 21 (August 13, 2008)

### 1. Basic polynomial interpolation

The goal of polynomial interpolation is to approximate a continuous function f on an interval [a, b] with a polynomial p(x). Interpolation does this approximation by matching the values of p(x) to the values of f(x) at certain points.

Say we'd like to create an interpolating polynomial of degree m. Then the first step is to choose m + 1 interpolation points on the interval [a, b]. Call these $x_1, x_2, \ldots, x_{m+1}$. While the ordering doesn't actually matter in the end, we might as well say that these are listed in increasing order. Also, typically $x_1 = a$ and $x_{m+1} = b$; that is, the endpoints of the interval are interpolation points.
After choosing the interpolation points, we create a degree-m polynomial p with undetermined coefficients:

$$p(x) = a_1 + x\,a_2 + x^2 a_3 + \cdots + x^m a_{m+1}. \tag{1}$$

I'm unconventionally sticking the powers of x on the left of each term, since the a's are actually the unknowns; it also makes the numbering more comfortable. Then I'll match p to f at each of the m + 1 interpolation points. This produces the following system of equations:

$$\begin{aligned}
a_1 + x_1 a_2 + x_1^2 a_3 + \cdots + x_1^m a_{m+1} &= f(x_1) \\
a_1 + x_2 a_2 + x_2^2 a_3 + \cdots + x_2^m a_{m+1} &= f(x_2) \\
&\;\;\vdots \\
a_1 + x_{m+1} a_2 + x_{m+1}^2 a_3 + \cdots + x_{m+1}^m a_{m+1} &= f(x_{m+1})
\end{aligned} \tag{2}$$

The setup of this system is the same thing we did for polynomial fitting in the last lecture; only this time, the system has an exact solution. The code below will do this type of interpolation. The first three lines of code contain all the content you'd want to modify: the first sets the degree of the interpolating polynomial, the second sets the function f(x), and the third chooses the interpolation points.

```matlab
% Degree of interpolating polynomial
m = 3;
% Pick a function
f = @(x) sin(x);
% Interpolation points
x = linspace(-pi, pi, m+1)';

y = f(x);
plot(x, y, 'o')

% Plot f(x) on a fine grid
xfine = linspace(x(1), x(end), 1000);
hold on
plot(xfine, f(xfine))
hold off

% Set up the coefficient matrix M
M = zeros(m+1, m+1);
M(:,1) = ones(m+1, 1);
for j = 2:m+1
    M(:,j) = x.^(j-1);
end

% Solve M*a = y.  Backslash is kosher this one time only!
a = M\y;

% Plot the interpolating polynomial
p = polyval(flipud(a), xfine);
hold on
plot(xfine, p, 'r')
hold off
```

With a degree-3 polynomial, the shape is pretty close, but the fit is a bit off. Bump it up to degree 5 and the fit becomes really nice, at least if you're just eyeballing it. If you want, you can use the fine grid to estimate how good the fit is more precisely; including this line will calculate the farthest p gets from f on the fine grid:

```matlab
err = max(abs(f(xfine) - p))
```

Practically speaking, though, measuring error is where theory comes in. For one thing, doing calculations like this on a fine grid isn't particularly efficient. For another, no matter how many points are in your fine grid, you'll never catch all of them, so theoretical bounds on the error are a bit more thorough. This is a topic we won't be going into.
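For comparison, the same basic interpolation can be set up in Python/NumPy. This is an aside (the Matlab above is the course version); it builds the same coefficient matrix column by column and solves M a = y.

```python
import numpy as np

m = 3                                   # degree of the interpolating polynomial
f = np.sin                              # function to interpolate
x = np.linspace(-np.pi, np.pi, m + 1)   # m + 1 interpolation points
y = f(x)

# Coefficient matrix: column j holds x^j, so row i encodes
# a1 + a2*x_i + a3*x_i^2 + ... + a_{m+1}*x_i^m = f(x_i).
M = np.ones((m + 1, m + 1))
for j in range(1, m + 1):
    M[:, j] = x ** j
a = np.linalg.solve(M, y)

# The interpolant matches f exactly at the interpolation points.
print(np.allclose(M @ a, y))  # True
```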
#### 1.1 A problem with basic polynomial fitting

Change the degree to m = 20 and set the interpolation points to x = linspace(0, 10, m+1)'. When you run the code, Matlab will produce a warning:

```matlab
Warning: Matrix is close to singular or badly scaled.
         Results may be inaccurate. RCOND = 1.015378e-27.
```

This has to do with the different sizes of elements that show up in M. Just to get the extremes: everything in the first column of M is equal to 1, while the lower-right entry is 10²⁰. Although the code still seems to work alright, these kinds of scaling problems are a bit dangerous and can end up killing your results. This suggests that we look for a smarter way to do interpolation; we might be able to make the calculations a bit more palatable by setting up the problem differently.

### 2. Interpolating in Newton form

For basic interpolation, we wrote the polynomial p as

$$p(x) = a_1 + x\,a_2 + x^2 a_3 + \cdots + x^m a_{m+1}. \tag{3}$$

In doing so, we're building it out of the m + 1 functions $1, x, x^2, \ldots, x^m$. In fact, these functions form a basis for the vector space $\mathbb{P}_m$ of polynomials of degree at most m.

The idea behind Newton form is to build p out of pieces that are tailored to our specific problem. With interpolation points $x_1, x_2, \ldots, x_m, x_{m+1}$, let's set up p as

$$p(x) = a_1 + (x - x_1)a_2 + (x - x_1)(x - x_2)a_3 + \cdots + (x - x_1)(x - x_2)\cdots(x - x_m)a_{m+1}. \tag{4}$$

Speaking theoretically, this says we're choosing a different basis for the vector space $\mathbb{P}_m$, namely

$$1, \;\; (x - x_1), \;\; (x - x_1)(x - x_2), \;\; \ldots, \;\; (x - x_1)(x - x_2)\cdots(x - x_m).$$

Writing out the general problem on paper, this looks a lot more complicated. The payoff is when we actually try to find the interpolating polynomial. Hopefully an example will help.

#### 2.1 An example

Say that I'm looking for a degree-3 polynomial, and I pick the four interpolation points $x_1 = 1$, $x_2 = 2$, $x_3 = 3$, and $x_4 = 4$. Then I'll look for p in the form

$$p(x) = a_1 + (x - 1)a_2 + (x - 1)(x - 2)a_3 + (x - 1)(x - 2)(x - 3)a_4. \tag{5}$$

I won't bother to specify the function f(x), as that doesn't really lend anything to the example.

First, let's see what happens when we set $p(x_1) = f(x_1)$. When I write out p(1), all three terms containing a factor of (x − 1) disappear. This leaves me with the extremely simple equation

$$a_1 = f(1). \tag{6}$$

Next, I'll set p(2) = f(2). This time, the two terms with a
factor of (x − 2) disappear, and I'm left with

$$a_1 + (2 - 1)a_2 = f(2). \tag{7}$$

When I set p(3) = f(3), the last term vanishes, and I end up with

$$a_1 + (3 - 1)a_2 + (3 - 1)(3 - 2)a_3 = f(3). \tag{8}$$

Finally, setting p(4) = f(4) leaves me with a full equation:

$$a_1 + (4 - 1)a_2 + (4 - 1)(4 - 2)a_3 + (4 - 1)(4 - 2)(4 - 3)a_4 = f(4). \tag{9}$$

After I simplify the numbers, I'm left with the system of equations

$$\begin{aligned}
a_1 &= f(1) \\
a_1 + a_2 &= f(2) \\
a_1 + 2a_2 + 2a_3 &= f(3) \\
a_1 + 3a_2 + 6a_3 + 6a_4 &= f(4)
\end{aligned} \tag{10}$$

Given the function f(x), this system is really easy to solve using forward substitution.

#### 2.2 Computational issues

When you write out the matrix for the system Ma = y resulting from Newton form, it's lower triangular, but otherwise it looks a bit unpleasant:

$$M = \begin{pmatrix}
1 & & & \\
1 & x_2 - x_1 & & \\
1 & x_3 - x_1 & (x_3 - x_1)(x_3 - x_2) & \\
1 & x_4 - x_1 & (x_4 - x_1)(x_4 - x_2) & (x_4 - x_1)(x_4 - x_2)(x_4 - x_3)
\end{pmatrix}$$

Trying to write code that produces this matrix is rather painful if you don't have a good battle plan. What makes this manageable is that, when I move along an individual row, each term is equal to the previous term multiplied by one new factor. So, for example, I can produce the 4th row using this code:

```matlab
M(4,1) = 1;
for j = 2:4
    M(4,j) = M(4,j-1)*(x(4) - x(j-1));
end
```

Note that I can stop the loop at j = 4 because all terms after this are zero. This makes it easy to create the entire matrix using two for loops:

```matlab
M = zeros(m+1, m+1);
M(1,1) = 1;
for i = 2:m+1
    M(i,1) = 1;
    for j = 2:i
        M(i,j) = M(i,j-1)*(x(i) - x(j-1));
    end
end
```

The other big issue is evaluating the polynomial in Newton form. The coefficients I solve for feed into this formula:

$$p(x) = a_1 + (x - x_1)a_2 + (x - x_1)(x - x_2)a_3 + \cdots + (x - x_1)(x - x_2)\cdots(x - x_m)a_{m+1}.$$

If I want to evaluate this for a particular value of x, this looks a bit unpleasant. Again, though, there's a trick. Instead of doing a grunt calculation, I can build up p from right to left, so that each factor $(x - x_j)$ only shows up once in my calculation. Say, for example, that there are four interpolation points, so

$$p(x) = a_1 + (x - x_1)a_2 + (x - x_1)(x - x_2)a_3 + (x - x_1)(x - x_2)(x - x_3)a_4.$$

Visually, my computation scheme looks like this:

$$p(x) = a_1 + (x - x_1)\Big(a_2 + (x - x_2)\big(a_3 + (x - x_3)a_4\big)\Big).$$

For an input value t (since x will be taken up by the interpolation points), here's how I'd calculate this in code:

```matlab
p = a(4);
p = a(3) + (t - x(3))*p;
p = a(2) + (t - x(2))*p;
p = a(1) + (t - x(1))*p;
```
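The right-to-left scheme is essentially Horner's rule adapted to Newton form. Here is a small Python/NumPy sketch of the whole pipeline as an aside; `newton_matrix`, `forward_substitute`, and `newton_eval` are hypothetical names standing in for the course's forwardsubstitute.m and newtonpolyval.m, which aren't reproduced here.

```python
import numpy as np

def newton_matrix(x):
    """Lower-triangular matrix of the Newton-form system M a = y.
    Row i: 1, (x_i - x_1), (x_i - x_1)(x_i - x_2), ...; each entry is the
    previous entry in the row times one new factor, as in the notes."""
    n = len(x)
    M = np.zeros((n, n))
    M[:, 0] = 1.0
    for i in range(n):
        for j in range(1, i + 1):
            M[i, j] = M[i, j - 1] * (x[i] - x[j - 1])
    return M

def forward_substitute(L, y):
    """Solve L a = y for lower-triangular L."""
    n = len(y)
    a = np.zeros(n)
    for i in range(n):
        a[i] = (y[i] - L[i, :i] @ a[:i]) / L[i, i]
    return a

def newton_eval(a, x, t):
    """Right-to-left (Horner-style) evaluation of the Newton form at t."""
    p = a[-1]
    for j in range(len(a) - 2, -1, -1):
        p = a[j] + (t - x[j]) * p
    return p

# Interpolate f(x) = x^3 at 4 points; a degree-3 interpolant reproduces it exactly.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = x ** 3
a = forward_substitute(newton_matrix(x), y)
print(newton_eval(a, x, 2.5))  # 15.625, which is 2.5^3
```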
It should be pretty obvious how to turn this into a loop that can handle larger polynomials. I've done this in the program newtonpolyval.m, and you can check the code for that if you're curious.

#### 2.3 Matlab code

The final code for Newton interpolation ends up being pretty simple. Actually, up to the definition of M, it's identical to the code I wrote down previously. And this time around, solving for a is based on forward substitution, rather than the mystery of the backslash command.

```matlab
% Degree of the interpolating polynomial
m = 20;
% Pick a function
f = @(x) sin(x);
% Interpolation points
x = linspace(0, 10, m+1)';

y = f(x);
plot(x, y, 'o')

% Plot f(x) on a fine grid
xfine = linspace(x(1), x(end), 1000);
hold on
plot(xfine, f(xfine))
hold off

% Define the coefficient matrix M
M = zeros(m+1, m+1);
M(1,1) = 1;
for i = 2:m+1
    M(i,1) = 1;
    for j = 2:i
        M(i,j) = M(i,j-1)*(x(i) - x(j-1));
    end
end

% Solve M*a = y using forward substitution
a = forwardsubstitute(M, y);

% Check out the interpolating polynomial
p = newtonpolyval(x, a, xfine);
hold on
plot(xfine, p, 'r')
hold off
```

#### 2.4 Improved scaling

If you run the code above, it'll solve the same problem that I used to complain about the scaling for basic interpolation. And this time around, the lower-right entry of M is

```matlab
M(21,21) = 2.3202e+12
```

This is an improvement by a factor of nearly 10⁸. On top of that, solving the linear system to find the interpolating polynomial is much more efficient. If you're cool with this terminology (and it's fine if you aren't): solving the full system for basic interpolation is O(m³) work, while doing forward substitution to solve the system for Newton interpolation is only O(m²) work. Not too shabby.

### 3. Problems with interpolation

With code this simple, there's a catch. (There's always a catch.) Try doing interpolation with these specifications:

```matlab
m = 10;
f = @(x) 1./(1 + x.^2);
x = linspace(-4, 4, m+1)';
```

The function here is f(x) = 1/(1 + x²), which looks nice and smooth. However, if you plot the result, you'll see the interpolating polynomial blowing up at the edges of the interval. That's bad. And if you use more interpolation points, it only gets worse.
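This blow-up is the classic Runge phenomenon. A quick Python/NumPy check, as an aside (using NumPy's polyfit/polyval rather than the course code), shows the worst-case error on a fine grid growing as the degree increases with equally spaced points:

```python
import numpy as np

def f(x):
    return 1.0 / (1.0 + x ** 2)      # the Runge-type function from above

xfine = np.linspace(-4.0, 4.0, 1001)

def max_error(m):
    """Interpolate f at m+1 equally spaced points on [-4, 4] with a degree-m
    polynomial and measure the worst error on the fine grid."""
    x = np.linspace(-4.0, 4.0, m + 1)
    coeffs = np.polyfit(x, f(x), m)   # m+1 points, degree m: exact interpolation
    return np.max(np.abs(np.polyval(coeffs, xfine) - f(xfine)))

# The edge oscillations grow as the degree goes up.
print(max_error(5), max_error(10), max_error(15))
```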
As it turns out, this is a fundamental limitation of interpolating with equally spaced points. There are essentially two ways to resolve this problem:

- Choose the interpolation points more sneakily. This leads to so-called spectral methods, which are really cool, but take a fair bit of work to develop.
- Do piecewise interpolation using polynomials of small degree (e.g. quadratics). This is more approachable.

In the next lecture, we'll look at the idea of piecewise interpolation and how this can be used for numerical differentiation.

## AMATH 352, Lecture 17 (August 4, 2008)

### 1. Another approach to LU

Last time, we looked at a strictly written approach to LU factorization. Today we'll start out by looking at a more interactive approach in Matlab.

We'll start off by setting L = A and U = I, so that obviously LU = A. However, L isn't unit lower triangular like it needs to be. We'll modify the columns of L one step at a time to make it lower triangular, and each step of the way we'll record what we need to in U to maintain LU = A. Since Matlab's output will be constantly updated, I'm just going to enter the commands as I go along. This will make a lot more sense if you're following along in Matlab at the same time.

To set up the example, download the script randomLU.m, which just makes a matrix with a pretty LU factorization. Then enter

```matlab
>> rand('twister', 1000)
>> randomLU
```

This will set up the matrix A and show its LU factorization:

```matlab
A =
     2     4     3     1
     8    19    15     1
    10    35    33   -12
     0     3     9    -3
L =
     1     0     0     0
     4     1     0     0
     5     5     1     0
     0     1     2     1
U =
     2     4     3     1
     0     3     3    -3
     0     0     3    -2
     0     0     0     4
```

Our goal will be to start with A and recover the factors L and U. First, make yourself an m-file that'll go through all the steps from scratch; that way, if you make a mistake, you'll still have all the commands in one place. At the beginning of your m-file, enter these commands:

```matlab
L = A;
U = eye(4);
```

Then at the end, enter these commands:

```matlab
clc
L
U
LU = L*U
A
```

This will display all the working pieces of our factorization, meaning that you can run the m-file at any time to see how things stand.
Every future command here should come after the first two lines you entered and before the last five lines.

We'll start the modifications with Column 1. All we're doing here is scaling this column so its first element is 1:

```matlab
% Column 1
L(:,1) = L(:,1)/2;
```

However, if you run the script now, you'll see that the first column of LU has changed. To keep this the same as the first column of A, you have to enter

```matlab
U(1,1) = 2;
```

Essentially, this compensates for the factor of 2 that was divided out of L(:,1). If you run the script again, you'll see that the first column of LU is identical to the first column of A once again.

This takes care of Column 1, so we're on to Column 2. This time we have to start by using Column 1 to delete the first component. In a manner of speaking, Column 1 "controls" the first element, e₁.

```matlab
% Column 2
L(:,2) = L(:,2) - L(:,1)*4;
```

Now you'll see that the first element in L(:,2) is zero. However, the second column of LU has changed. To fix this, I need to record that factor of 4 in the appropriate spot in U:

```matlab
U(1,2) = 4;
```

When you take the product LU, this 4 makes sure that L(:,1)*4 is added into the second column of the product, to compensate for what was just subtracted from L(:,2). You can think of the index of this entry, (1,2), as an indicator of how Column 1 of L affects Column 2 of the product LU.

Now there should be a 3 in position L(2,2). To make L unit lower triangular, this needs to become a 1. So I'll divide this column by 3 and compensate in the appropriate position of U:

```matlab
L(:,2) = L(:,2)/3;
U(2,2) = 3;
```

This finishes Column 2. Now we're on to Column 3. First we need to use Column 1 to delete the first element, and Column 2 to delete the second. It's important to do these steps in order; if you reverse them, it'll screw things up. (Try it if you like.)

```matlab
% Column 3
L(:,3) = L(:,3) - L(:,1)*3;
U(1,3) = 3;
L(:,3) = L(:,3) - L(:,2)*3;
U(2,3) = 3;
```

Next I need to scale away the 3 sitting on the diagonal:

```matlab
L(:,3) = L(:,3)/3;
U(3,3) = 3;
```

And this finishes Column 3. Column 4 is more of the same. First, its first three components are deleted by using Columns 1 through 3, in order. Then it's scaled so that its diagonal entry is 1:

```matlab
% Column 4
L(:,4) = L(:,4) - L(:,1)*1;
U(1,4) = 1;
L(:,4) = L(:,4) - L(:,2)*(-3);
U(2,4) = -3;
L(:,4) = L(:,4) - L(:,3)*(-2);
U(3,4) = -2;
L(:,4) = L(:,4)/4;
U(4,4) = 4;
```

The end result is

```matlab
L =
     1     0     0     0
     4     1     0     0
     5     5     1     0
     0     1     2     1
U =
     2     4     3     1
     0     3     3    -3
     0     0     3    -2
     0     0     0     4
```

This is the same as the factorization we were given back at the beginning.
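The column-at-a-time procedure above can be written as one loop. This is a minimal Python/NumPy sketch of the same idea, as an aside; like the walkthrough, it uses no pivoting, so it assumes a nonzero entry always appears on the diagonal.

```python
import numpy as np

def lu_by_columns(A):
    """Start with L = A and U = I, then fix up one column of L at a time,
    recording each operation in U so that L @ U stays equal to A throughout."""
    n = A.shape[0]
    L = A.astype(float).copy()
    U = np.eye(n)
    for k in range(n):
        # Use columns 0..k-1 to delete the components they control.
        for j in range(k):
            U[j, k] = L[j, k]               # record the multiplier in U
            L[:, k] -= U[j, k] * L[:, j]    # subtract it out of column k
        # Scale so the diagonal entry of L becomes 1, compensating in U.
        U[k, k] = L[k, k]
        L[:, k] /= U[k, k]
    return L, U

# The matrix from the walkthrough above (with the minus signs restored).
A = np.array([[2.0,  4.0,  3.0,   1.0],
              [8.0, 19.0, 15.0,   1.0],
              [10.0, 35.0, 33.0, -12.0],
              [0.0,  3.0,  9.0,  -3.0]])
L, U = lu_by_columns(A)
print(np.allclose(L @ U, A))  # True
```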
The key thing to get from this is that each column controls one element. Column 1 controls the first element, which is the position corresponding to e₁. Once Column 1 has done its thing, Column 2 controls e₂. Then Column 3 controls e₃. And finally, Column 4 controls e₄, even though there aren't any later columns left for it to act on.

### 2. LUP factorization

LU factorization has one fundamental problem. Start a new m-file, and enter these commands:

```matlab
rand('twister', 1000)
A = rand(4,4);
for i = 1:4
    A(i,i) = 0;
end
```

The for loop places zeros along the diagonal of A, which now looks like this:

```matlab
A =
         0    0.8725    0.2331    0.3922
    0.1150         0    0.8417    0.1823
    0.9503    0.0407         0    0.7435
    0.4822    0.3972    0.7425         0
```

If we try to use the procedure from the previous example, we run into a problem right off the bat: Column 1 can't be in control of e₁, because its first component is zero.

To handle this situation, the LU factorization has to be modified to the LUP factorization. The additional ingredient is a permutation matrix P, which permutes the rows of A so that PA has a basic LU decomposition. The factorization looks like this:

$$PA = LU. \tag{1}$$

You can get this from Matlab by using the lu command. With the matrix A from above, enter this:

```matlab
>> [L, U, P] = lu(A);
```

This will store the matrices L, U, and P that are used in this factorization, which you can peruse at your leisure.

In fact, the LUP factorization is typically used even if the rows of A don't have to be permuted. The technical reason is numerical stability, but the intuitive idea is that a column should control the element corresponding to the largest-magnitude value it holds.

#### 2.1 Detailed example

This makes things a little more difficult, but the underlying idea isn't too different. Instead of having Column 1 control e₁, we'll have it control a different element. Technically we could get away with anything that's nonzero, but it probably makes the most sense to go with the largest component.
In this case, that's the third one. Just like last time, we'll do all of our modifications sandwiched between two sets of commands. First, following the commands to set up A from above:

```matlab
L = A;
U = eye(4);
```

And last:

```matlab
clc
L
U
LU = L*U
A
```

We'll start with Column 1. Like I mentioned a moment ago, its third component is largest, so we'll give this column control over e₃. This means we need to scale it down by its third component to place a 1 in this position:

```matlab
% Column 1 controls e3
U(1,1) = L(3,1);
L(:,1) = L(:,1)/L(3,1);
```

Note that the value L(3,1) has to be stored in U before modifying the column of L.

Next comes Column 2. Since Column 1 controls e₃, the first thing I have to do is use Column 1 to delete the third component of Column 2:

```matlab
% Column 2
U(1,2) = L(3,2);
L(:,2) = L(:,2) - L(:,1)*L(3,2);
```

Then, looking at what's left, the first component of L(:,2) is the largest. This means that Column 2 will control e₁, and I need to scale the first entry to 1. I'll also update my comment line:

```matlab
% Column 2 controls e1
U(2,2) = L(1,2);
L(:,2) = L(:,2)/L(1,2);
```

Now for Column 3. First I use Columns 1 and 2 to cancel the third and first components, in that order:

```matlab
% Column 3
U(1,3) = L(3,3);
L(:,3) = L(:,3) - L(:,1)*L(3,3);
U(2,3) = L(1,3);
L(:,3) = L(:,3) - L(:,2)*L(1,3);
```

With what's left, the second component is the largest. So Column 3 will control e₂. I'll append "controls e2" to my comment line, and then scale this entry to 1:

```matlab
% Column 3 controls e2
U(3,3) = L(2,3);
L(:,3) = L(:,3)/L(2,3);
```

Last comes Column 4. There's only one element that's uncontrolled: e₄. So this one is pretty straightforward. I use Columns 1 through 3 to cancel their respective elements, and then scale the 4th entry to 1:

```matlab
% Column 4 controls e4
U(1,4) = L(3,4);
L(:,4) = L(:,4) - L(:,1)*L(3,4);
U(2,4) = L(1,4);
L(:,4) = L(:,4) - L(:,2)*L(1,4);
U(3,4) = L(2,4);
L(:,4) = L(:,4) - L(:,3)*L(2,4);
U(4,4) = L(4,4);
L(:,4) = L(:,4)/L(4,4);
```

After going through all these steps, here's what my output looks like:

```matlab
L =
         0    1.0000         0         0
    0.1210   -0.0056    1.0000         0
    1.0000         0         0         0
    0.5074    0.4316    0.7613    1.0000
U =
    0.9503    0.0407         0    0.7435
         0    0.8725    0.2331    0.3922
         0         0    0.8431    0.0945
         0         0         0   -0.6185
LU =
         0    0.8725    0.2331    0.3922
    0.1150         0    0.8417    0.1823
    0.9503    0.0407         0    0.7435
    0.4822    0.3972    0.7425         0
A =
         0    0.8725    0.2331    0.3922
    0.1150         0    0.8417    0.1823
    0.9503    0.0407         0    0.7435
    0.4822    0.3972    0.7425         0
```

I'm in good shape in that LU = A, but in
bad shape in the sense that L isn't lower triangular.

The trick now is to switch the order of the elements, so that Column 1 controls the first element, Column 2 controls the second, and so on. This can be done with a permutation matrix P. This is the same as any old linear transformation: I say what I want it to do to the standard basis vectors, and this defines the matrix. I'll lay out what P needs to do, and then write the code to create it.

1. Column 1 is currently in control of e₃. However, it needs to be in control of e₁. So Pe₃ = e₁.
2. Column 2 is in control of e₁, but it should be in control of e₂. So Pe₁ = e₂.
3. Column 3 is in control of e₂, but it should be in control of e₃. So Pe₂ = e₃.
4. Column 4 is in control of e₄, and this shouldn't change. So Pe₄ = e₄.

And here's the code that uses these four conditions to generate P:

```matlab
% Define the permutation matrix P
P = zeros(4,4);
% Pe3 = e1
P(:,3) = [1 0 0 0]';
% Pe1 = e2
P(:,1) = [0 1 0 0]';
% Pe2 = e3
P(:,2) = [0 0 1 0]';
% Pe4 = e4
P(:,4) = [0 0 0 1]';
```

Before setting up the permutation matrix, I had LU = A, but L wasn't lower triangular. If I multiply both sides by P, then I'll have (PL)U = PA, and I expect PL to be lower triangular, since its columns will control elements 1 through 4 in order. So I'll just replace L with PL, and display PA instead of A. I'll do this by modifying the last four lines of my code:

```matlab
L = P*L
U
LU = L*U
PA = P*A
```

Now my output is this:

```matlab
L =
    1.0000         0         0         0
         0    1.0000         0         0
    0.1210   -0.0056    1.0000         0
    0.5074    0.4316    0.7613    1.0000
U =
    0.9503    0.0407         0    0.7435
         0    0.8725    0.2331    0.3922
         0         0    0.8431    0.0945
         0         0         0   -0.6185
LU =
    0.9503    0.0407         0    0.7435
         0    0.8725    0.2331    0.3922
    0.1150         0    0.8417    0.1823
    0.4822    0.3972    0.7425         0
PA =
    0.9503    0.0407         0    0.7435
         0    0.8725    0.2331    0.3922
    0.1150         0    0.8417    0.1823
    0.4822    0.3972    0.7425         0
```

Sure enough, LU = PA. Note that L was replaced with the product PL, so that the new L is lower triangular; A itself isn't overwritten.

#### 2.2 Summary

The LUP factorization of a matrix A satisfies

$$PA = LU. \tag{2}$$

The three ingredients are:

1. A unit lower triangular matrix L.
2. An upper triangular matrix U.
3. A permutation matrix P, which permutes (re-orders) the rows of A.
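The same three ingredients can also be produced with ordinary row-based Gaussian elimination plus partial pivoting. This Python/NumPy sketch is an aside (not the course's randomLU.m session, and `lup` is a hypothetical name): it picks the largest-magnitude pivot in each column and swaps rows of U, P, and the recorded multipliers in L.

```python
import numpy as np

def lup(A):
    """Compute P, L, U with P @ A = L @ U, choosing the largest-magnitude
    pivot at or below the diagonal in each column (partial pivoting)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        # Row with the largest entry in column k, at or below the diagonal.
        p = k + np.argmax(np.abs(U[k:, k]))
        U[[k, p]] = U[[p, k]]             # swap rows of the working matrix
        P[[k, p]] = P[[p, k]]             # record the swap in P
        L[[k, p], :k] = L[[p, k], :k]     # swap the multipliers found so far
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # multiplier
            U[i] -= L[i, k] * U[k]        # eliminate below the pivot
    return P, L, U

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])   # zeros on the diagonal, so pivoting is required
P, L, U = lup(A)
print(np.allclose(P @ A, L @ U))  # True
```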
## Partial Pivoting

**Idea:** Swap rows at each step of Gaussian elimination to place the largest element on the diagonal.

Swap rows $E_l \leftrightarrow E_k$, where $a_{kl}$ is the largest element below or on the diagonal in the $l$-th column, i.e.

$$|a_{kl}| = \max_{j \ge l} |a_{jl}|.$$

Then proceed with Gaussian elimination.

**Example.** The largest entry in the first column is 10, in row 4, so swap it into the pivot position:

$$\left[\begin{array}{cccc|c} 1 & 6 & 2 & 0 & 3 \\ 2 & 1 & 1 & 0 & 4 \\ 1 & 6 & 1 & 1 & 3 \\ 10 & 2 & 10 & 1 & 1 \end{array}\right]
\;\xrightarrow{E_1 \leftrightarrow E_4}\;
\left[\begin{array}{cccc|c} 10 & 2 & 10 & 1 & 1 \\ 2 & 1 & 1 & 0 & 4 \\ 1 & 6 & 1 & 1 & 3 \\ 1 & 6 & 2 & 0 & 3 \end{array}\right]$$

**Why?** Swapping rows reduces the possibility of rounding errors or poor scaling contaminating the solution.

## Scaled Partial Pivoting

**Idea:** Swap rows at each step of Gaussian elimination to place the element with the largest value *relative to the rest of its row* on the diagonal.

In column $l$, choose a new pivot $a_{kl}$ such that the ratio

$$\frac{|a_{jl}|}{\max\left(|a_{jl}|, |a_{j,l+1}|, \ldots, |a_{jN}|\right)}$$

is largest for $j = k$. Then swap rows $E_l \leftrightarrow E_k$ and proceed with Gaussian elimination.

**Example.** Row 2's leading entry is the largest relative to its own row, so it becomes the pivot row:

$$\left[\begin{array}{cccc|c} 1 & 6 & 2 & 0 & 3 \\ 2 & 1 & 1 & 0 & 4 \\ 1 & 6 & 1 & 1 & 3 \\ 5 & 2 & 10 & 1 & 1 \end{array}\right]
\;\xrightarrow{E_1 \leftrightarrow E_2}\;
\left[\begin{array}{cccc|c} 2 & 1 & 1 & 0 & 4 \\ 1 & 6 & 2 & 0 & 3 \\ 1 & 6 & 1 & 1 & 3 \\ 5 & 2 & 10 & 1 & 1 \end{array}\right]$$

Not widely used; it is very expensive in terms of computational cost.

## Triangular Factorization (LU Decomposition)

Gaussian elimination has another interpretation as a factorization of the nonsingular coefficient matrix A into two triangular matrices: L, a lower triangular matrix, and U, an upper triangular matrix.

$$A = LU:\qquad
\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix}
=
\begin{pmatrix} 1 & 0 & 0 & 0 \\ m_{21} & 1 & 0 & 0 \\ m_{31} & m_{32} & 1 & 0 \\ m_{41} & m_{42} & m_{43} & 1 \end{pmatrix}
\begin{pmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & 0 & u_{44} \end{pmatrix}$$

If A is nonsingular and Gaussian elimination on it requires no row interchanges, then $u_{kk} \neq 0$ for all k.

$$Ax = b \;\Longrightarrow\; LUx = b.$$

Let $y = Ux$. Then $Ly = b$ and $Ux = y$, so the solution is divided into two steps.

Forward substitution (solve $Ly = b$):

$$\begin{aligned}
y_1 &= b_1 \\
m_{21}y_1 + y_2 &= b_2 \\
m_{31}y_1 + m_{32}y_2 + y_3 &= b_3 \\
m_{41}y_1 + m_{42}y_2 + m_{43}y_3 + y_4 &= b_4
\end{aligned}$$

Back substitution (solve $Ux = y$):

$$\begin{aligned}
u_{11}x_1 + u_{12}x_2 + u_{13}x_3 + u_{14}x_4 &= y_1 \\
u_{22}x_2 + u_{23}x_3 + u_{24}x_4 &= y_2 \\
u_{33}x_3 + u_{34}x_4 &= y_3 \\
u_{44}x_4 &= y_4
\end{aligned}$$
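The two-step solve can be written out directly. This Python/NumPy sketch is an aside implementing the forward and back substitution formulas above; `forward_sub` and `back_sub` are hypothetical names.

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for unit lower-triangular L (the m_ij's, with 1's on the diagonal)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]    # subtract the already-known terms
    return y

def back_sub(U, y):
    """Solve U x = y for upper-triangular U with nonzero diagonal."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L = np.array([[1.0, 0.0],
              [3.0, 1.0]])
U = np.array([[2.0, 1.0],
              [0.0, 4.0]])
b = np.array([4.0, 16.0])
x = back_sub(U, forward_sub(L, b))        # solve LUx = b in two triangular solves
print(np.allclose(L @ U @ x, b))  # True
```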
## More LU Decomposition

You can interpret Gaussian elimination as a series of matrix operations on the coefficient matrix A. Combine the rows to eliminate $a_{21}$ and $a_{31}$:

$$M^{(1)}A = \begin{pmatrix} 1 & 0 & 0 \\ -\frac{a_{21}}{a_{11}} & 1 & 0 \\ -\frac{a_{31}}{a_{11}} & 0 & 1 \end{pmatrix}
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
= \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a'_{22} & a'_{23} \\ 0 & a'_{32} & a'_{33} \end{pmatrix}$$

Combine the rows to eliminate $a'_{32}$:

$$M^{(2)}M^{(1)}A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -\frac{a'_{32}}{a'_{22}} & 1 \end{pmatrix}
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a'_{22} & a'_{23} \\ 0 & a'_{32} & a'_{33} \end{pmatrix}
= \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a'_{22} & a'_{23} \\ 0 & 0 & a''_{33} \end{pmatrix} = U$$

Therefore

$$A = (M^{(1)})^{-1}(M^{(2)})^{-1}U = LU, \qquad
L = (M^{(1)})^{-1}(M^{(2)})^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ \frac{a_{21}}{a_{11}} & 1 & 0 \\ \frac{a_{31}}{a_{11}} & \frac{a'_{32}}{a'_{22}} & 1 \end{pmatrix}$$

**Example.** Find the LU decomposition of a given 3 × 3 matrix A: form $M^{(1)}$ and $M^{(2)}$ so that $M^{(2)}M^{(1)}A = U$; then $L = (M^{(1)})^{-1}(M^{(2)})^{-1}$.

## The Permutation Matrix P

**Example.** Perform an LU decomposition on

$$A = \begin{pmatrix} 1 & 2 & 6 \\ 4 & 8 & 1 \\ 2 & 3 & 5 \end{pmatrix}$$

First compute the first column of L:

$$\begin{pmatrix} 1 & 2 & 6 \\ 4 & 8 & 1 \\ 2 & 3 & 5 \end{pmatrix}
\longrightarrow
\begin{pmatrix} 1 & 2 & 6 \\ 0 & 0 & -23 \\ 0 & -1 & -7 \end{pmatrix}
\quad \begin{matrix} \; \\ E_2 - 4E_1 \to E_2 \\ E_3 - 2E_1 \to E_3 \end{matrix}$$

We have a zero pivot, so we can't proceed with straight LU decomposition. In order to swap rows, we need a permutation matrix P. Take

$$P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad
PA = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} 1 & 2 & 6 \\ 4 & 8 & 1 \\ 2 & 3 & 5 \end{pmatrix}
= \begin{pmatrix} 1 & 2 & 6 \\ 2 & 3 & 5 \\ 4 & 8 & 1 \end{pmatrix}$$

Elimination now goes through:

$$PA = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 4 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 2 & 6 \\ 0 & -1 & -7 \\ 0 & 0 & -23 \end{pmatrix} = LU$$

Therefore

$$L = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 4 & 0 & 1 \end{pmatrix}, \qquad
U = \begin{pmatrix} 1 & 2 & 6 \\ 0 & -1 & -7 \\ 0 & 0 & -23 \end{pmatrix}$$

Remember that since $PA = LU$, then $PAx = LUx$, so $Ly = Pb$: don't forget to apply the permutation matrix to the right-hand side as well.

**Theorem.** If A is nonsingular, there exists a permutation matrix P such that $PA = LU$. Note that P is nonsingular and that $\det(P) = \pm 1$.

A permutation matrix interchanges rows or columns of a matrix or vector, e.g.

$$P = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$

**Example.**

$$Pb = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix}
= \begin{pmatrix} b_2 \\ b_1 \\ b_4 \\ b_3 \end{pmatrix}$$

## A MATLAB Introduction for AMath 352

*by Jonathan Claridge, April 3, 2008*

### 1. Introduction

In my completely unbiased opinion, MATLAB is the most awesome program in the universe. Oh, lucky ye who are about to learn it!

My aim with this introduction is just to get you moving with the basics, and hopefully enough to get you through the first homework for AMath 352. MATLAB is enormous, and the documentation isn't always easy to read for new users, so please ask questions as much as you need to. My email is onathuwashingtonedu, and I check it really often. I'm happy to look over pieces of your code, help you figure out how to use commands, etc.

Throughout this intro, I
have written MATLAB variables, MATLAB commands, and so forth in typewriter font. Anything on a separate line that begins with >> indicates something to enter at the command line. Anything on a separate line without >> is either output from a command I just entered at the command line, or part of an m-file I'm telling you to write.

I feel guilty not including if statements here, but this thing is already way longer than I wanted it to be. If they give you trouble, let me know.

### 2. Basics

#### 2.1 Command line

The simplest way of entering commands into MATLAB is by using the command line. Here are a few commands that will make a plot of sin(x) for 0 ≤ x ≤ 2π. First,

```matlab
>> x = linspace(0, 2*pi, 50);
```

defines x to be an array of 50 points from 0 up to 2π. Next,

```matlab
>> y = sin(x);
```

defines y as an array with the value of sine evaluated at each point in x. Finally,

```matlab
>> plot(x, y)
```

plots x on the horizontal axis and y on the vertical axis. The end result is a pretty nice sine plot.

#### 2.2 m-files

An m-file is a list of commands that MATLAB executes all in a row. You can do the same thing I did to plot sin(x) in the last section with an m-file by following these steps:

1. Open a new m-file. There should be a button right underneath the File menu for this. This pops open an editor window.

2. Enter these lines in the editor:

```matlab
% Define the x-coordinates of points to plot
x = linspace(0, 2*pi, 50);
% Define the y-coordinates of points to plot
y = sin(x);
% Plot y versus x
plot(x, y)
```

Everything that follows a % sign is a comment, and won't affect your output. But write it anyway. Why? Because I'm in charge here.

3. Save the m-file as plotsin.m.

4. Run the m-file by typing plotsin at the command line. Or, if you'd rather, there's a Run button near the top of the m-file editor. Or, pressing F5 should do the same thing as the Run button.

#### 2.3 A quick exercise

Here's a quick exercise to play around with the m-file you just made:

1. Replace linspace(0, 2*pi, 50) with linspace(0, 2*pi, 10). Now run the m-file. How does the plot look different than before? Why do you think that is?
plot(x, y) to plot(x, y, 'o'). Does this show you anything about what's going on in the last step?

2.4 Comments

Just like in the m-file you wrote above, you make a comment by beginning a line with the % sign. Even though they don't affect what MATLAB does, comments are really, really important. The better you can put what you're doing into plain English, the better you'll be able to figure out what does and doesn't work. It also makes it tons easier to read your homework, and to give you partial credit if you make a mistake.

2.5 Documentation

MATLAB has loads of built-in documentation. You can use help to see the documentation for a command in the command window. Alternatively, you can use doc to see the documentation in a separate window. Personally, I like doc way better. Try these out for quad, which you'll be using in the homework:

>> help quad
>> doc quad

The documentation usually has a bunch of stuff for more experienced MATLAB users, so don't worry when you don't understand all of it. If you have trouble doing what you need to even after checking the documentation, feel free to ask questions.

3 Arrays

3.1 Array basics

Arrays are how MATLAB stores numbers. In this class we're pretty much going to stick with 2-dimensional arrays (aka matrices), which you'd picture as organized into rows and columns. A vector is an array with only one row or only one column, so it's just a single string of numbers.

Just to check out some basics of arrays, make a randomly generated array by typing

>> A = rand(3, 5)

at the command line. This gives an output something like

A =
    0.6557    0.9340    0.7431    0.1712    0.2769
    0.0357    0.6787    0.3922    0.7060    0.0462
    0.8491    0.7577    0.6555    0.0318    0.0971

The numbers are randomly generated, so you won't get the same numbers. (Ha! Suckers!) But you'll get the same shape. The dimensions of this array are 3 × 5, which is to say that it has 3 rows and 5 columns. If an array gets really big, so you can't just count its dimensions, you can have MATLAB show you by typing

>> size(A)

You can also access individual elements of the array.
Typing

>> A(2,4)

shows me the element in row 2 and column 4. For the array I wrote above, I get

ans =
    0.7060

3.2 Manually entering arrays

You can enter an array manually, like this:

>> B = [1 2 3 4; 5 6 7 8]

Each space separates two elements in the same row, and a semicolon moves down to the next row. So the output is

B =
     1     2     3     4
     5     6     7     8

3.3 Shortcuts for entering arrays

There are a lot of commands for making common types of arrays without entering them manually. Try out the following and see what you get:

>> 1:10
>> 5:-1:2
>> 1:2:8
>> ones(5,2)
>> zeros(3,3)
>> linspace(0,2,5)

Which of these are vectors?

4 Componentwise operations

The basic arithmetic operations are + (addition), - (subtraction), * (multiplication), / (division), and ^ (exponentiation). The last three of these have special versions (.*, ./, and .^) that you need to use whenever you want componentwise operations on arrays. For instance, if x and y are vectors of the same size, x.*y is a new vector with first element x(1)*y(1), second element x(2)*y(2), and so on. Conveniently, you never have to worry about this for + and -, so .+ and .- aren't even defined.

Who cares about componentwise operations, you might ask? Mostly it matters for plotting things. Create a new m-file and enter the following code:

% Define the x-coordinates of points to plot
x = linspace(0, 4, 50);
% Define the y-coordinates of points to plot
y = x^2;
% Plot y versus x
plot(x, y)

Save it as plotsquare.m and try running it. Doesn't work, does it? Now replace x^2 with x.^2 and it should run just fine. The reason is that the y-coordinates we want to plot come from squaring the individual elements of the vector x. When MATLAB sees x^2, it tries to square x using matrix multiplication, which isn't what we wanted to do. The common built-in functions like sin, cos, and exp automatically work componentwise.

5 for loops

If you've programmed before, for loops aren't going to be much different than what you've seen before. If you haven't programmed before, well, for loops are still pretty easy to get the hang of. Create a new m-file called looptest.m.
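The componentwise idea is worth seeing in one other language. Here's a quick sketch in Python (my own illustration, not part of the course materials) of what "componentwise" means: squaring each element of a vector individually, which is what MATLAB's x.^2 does.

```python
# Componentwise squaring: what MATLAB's x.^2 does to a vector.
x = [1.0, 2.0, 3.0, 4.0]
y = [xi**2 for xi in x]  # square each element individually
print(y)                 # [1.0, 4.0, 9.0, 16.0]
```

In Python you have to ask for the elementwise operation explicitly (here with a list comprehension); in MATLAB, the dot in .^ plays the same role.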
Enter these lines:

for i = 1:10
    % Display the value of i in the command window
    disp(i)
end

Run it and see what you get in the command window. Hopefully this is pretty self-explanatory. You can also play around with the range of values that i ranges over. In the script you just wrote, 1:10 is MATLAB's shorthand for the list of numbers 1, 2, 3, ..., 10. You can replace that with any other vector, and i will range over the values in that vector instead. Try out these modifications:

1. Replace i = 1:10 with i = 10:-1:1.

2. Now try i = 1:1.5:10.

3. Now try i = [pi 1 exp(1)].

6 Functions

MATLAB has two different kinds of functions. Anonymous functions (or inline functions, but I think "anonymous" is what the official documentation uses) are defined with the @ symbol, and you can make them directly at the command line. These are great for simple, one-line definitions. Meanwhile, m-file functions are defined in their own separate m-file.

6.1 Anonymous functions

Enter

>> f = @(x) x.^2

at the command line to define the function f(x) = x². Then try entering these commands:

>> f(2)
>> x = linspace(0,4);
>> plot(x, f(x))

You can also define functions of several variables this way. For example, try

>> g = @(x,y) x.*y.^2
>> g(5,3)

6.2 m-file functions

An m-file function is defined in its own m-file. These are perfect when your function definitions get really long, and fitting them onto a single line is an enormous pain. They're also the way to go if you're using functions in a more traditional programming sense, rather than a mathematical sense. Just to illustrate, though, let's rewrite f(x) = x² as an m-file function. Open a new m-file and enter the lines

function y = fmfile(x)
y = x.^2;

Save this as fmfile.m. Now go to the command line and try these commands:

>> fmfile(2)
>> x = linspace(0,4);
>> plot(x, fmfile(x))

You should get the exact same output you did using the function f defined in the last section.

6.3 A difference between the two types of functions

There's one big difference between anonymous functions and m-file functions, which kicks in when you're passing these to other functions.
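If it helps to see the same split outside MATLAB: Python has a close analogue of these two kinds of functions. This sketch is my own illustration, not course code; the names mirror the MATLAB examples above.

```python
# Python analogue of MATLAB's two kinds of functions (illustration only):
# a lambda is like an anonymous function, a def is like an m-file function.
f = lambda x: x**2          # like f = @(x) x.^2

def fmfile(x):              # like the fmfile.m function
    return x**2

print(f(2), fmfile(2))      # both give 4

g = lambda x, y: x * y**2   # like g = @(x,y) x.*y.^2
print(g(5, 3))              # 5 * 3^2 = 45
```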
Say I want to have MATLAB compute

∫₀³ f(x) dx,

where f(x) = x², like I've been using. MATLAB has the function quad to do this (short for "quadrature"). Using the anonymous function f that I defined earlier, I can calculate this integral with the simple command

>> quad(f, 0, 3)

On the other hand, if I want to use the m-file function fmfile, I have to enclose its name in single quotes (the one next to the Enter key, not the funky one in the upper left of the keyboard):

>> quad('fmfile', 0, 3)

Try these out and see that you get the same answer. Compare the value MATLAB gives you to what you get if you work out the integral by hand.

7 Plotting

You've already seen some examples of plots, and the plot function is pretty easy to use. Now let's write an m-file that does something a little more complicated. Say I want to plot f(x) = x² for −3 ≤ x ≤ 3, along with g(x) = −x², along with the horizontal line y = 0 that separates the two of them. Open a new m-file and enter this code:

% Define the function f(x) = x^2
f = @(x) x.^2;
% Make a vector of x values from -3 to 3
x = linspace(-3, 3);
% Plot f(x) versus x
plot(x, f(x))
% Hold the current plot, so we can add more stuff
hold on
% Plot -f(x) versus x
plot(x, -f(x))
% Plot the horizontal line y=0, from x=-3 to x=3
plot([-3 3], [0 0])
% Undo the hold on the plot
hold off

Save this as plottest.m and run it. Neat, huh? Now add these lines to your m-file:

% Label the x-axis
xlabel('x')
% Label the y-axis
ylabel('y')
% Make a title
title('My spiffy plot')

Run it again and admire the results. Ooh, pretty! To save your figure, go to the File menu in the figure window and select "Save As...". Select your favorite file format (I suggest either Portable Network Graphics (PNG) or JPEG), save, and you're good to go.

AMATH 352, LECTURE 4, JUNE 30, 2008

1 A glance at orthogonal projections

There's one last kind of linear transformation I want to look at. The orthogonal projection onto v, which I'll call P_v, takes a vector u and "projects" it onto v, as in this picture. The idea is to split u into two parts: one part that's parallel to v, which is P_v u, and one part that's perpendicular to v,
which I've labeled v⊥. You can look at P_v as a scaling that scales by λ1 = 1 in the direction parallel to v, and by λ2 = 0 in the direction perpendicular to v. So if I wanted, I could figure out what the matrix for P_v ought to be, based on the way I computed scalings in the past. However, orthogonal projection is special enough to deserve its own treatment. The key part of this is the inner product, which is really important in and of itself. Enough so to deserve its own section in these notes.

2 Inner products

2.1 Definition and discussion

Definition. Let u = [u1; u2] and v = [v1; v2]. The inner product (or dot product) of u and v is

u · v = u1 v1 + u2 v2.  (1)

At the moment I'm only looking at this in R². But the same definition extends easily to vectors in higher dimensions. For instance, in R³ the inner product is

[u1; u2; u3] · [v1; v2; v3] = u1 v1 + u2 v2 + u3 v3.  (2)

The inner product is really important for getting a handle on geometry. It does two things above all else: it defines length, and it defines orthogonality. ("Orthogonal" means the same thing as "perpendicular," for all practical purposes.) In R², we have ideas like the Pythagorean theorem and angles to define length and orthogonality. In R³ we can use the same ideas, but it gets a lot harder, so the inner product is already useful. As soon as you move on to R⁴ (vectors consisting of 4 real numbers) and higher dimensions, who knows how we should even think about geometry? At this point, the inner product is all we have left to go on.

2.2 Length

Definition. The length of a vector v (also called its magnitude or norm) is denoted ‖v‖. It is defined by

‖v‖ = √(v · v).  (3)

This definition of length is a bit more abstract than you're probably familiar with. But in R², it works out to the exact same thing you're used to, based on the Pythagorean theorem. Say that v = [v1; v2]. If you were going to calculate the length of v with the Pythagorean theorem, you'd draw a picture like this: a right triangle with legs v1 and v2, and v as the hypotenuse. The length of v is the length of the hypotenuse of this right triangle, so

‖v‖² = v1² + v2².  (4)

Taking the square root of both sides,

‖v‖ = √(v1² + v2²).  (5)
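The definitions above translate directly into code. Here's a small Python sketch (my own illustration, not course code) that computes a length from the inner product and checks it against the Pythagorean theorem for the familiar 3-4-5 triangle.

```python
import math

# Length from the inner product: ||v|| = sqrt(v . v),
# checked against the Pythagorean theorem.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

v = [3.0, 4.0]
print(norm(v))                       # 5.0, via the inner product
print(math.sqrt(v[0]**2 + v[1]**2))  # 5.0, via Pythagoras
```

Notice that dot and norm don't care about the dimension of v, which is the whole point: the same formula keeps working in R³, R⁴, and beyond, where there's no picture to fall back on.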
Now let's see how I compute the length using the inner product. The first thing I do is calculate v · v, which is

v · v = [v1; v2] · [v1; v2] = v1² + v2².  (6)

Then, using the way I defined length at the beginning of this section,

‖v‖ = √(v · v) = √(v1² + v2²).  (7)

It's the same thing. With a bit more work, you can do the same kind of reasoning in R³. And in higher dimensions, the inner product gives us a way to think of length even when there's no picture to fall back on.

2.3 Orthogonality

Definition. The vectors u and v are orthogonal if u · v = 0.

When u and v are vectors in R², this works out to u1 v1 + u2 v2 = 0. Now, if I'm going to throw out an equation like this and say that it defines orthogonality, I'd better make sure it agrees with the way I'm used to thinking about when things are perpendicular. As a starting point, let's check out a couple of examples.

Example. Consider the standard basis vectors e1 and e2. These are definitely perpendicular to each other. And if I take their inner product,

e1 · e2 = (1)(0) + (0)(1) = 0.

It's zero. Sweet! My definition of orthogonality makes sense here. □

Example. Consider the vectors u = [1; 1] and v = [−1; 1]. Draw yourself a picture; they're perpendicular. And if I look at their inner product,

u · v = (1)(−1) + (1)(1) = 0.  (10)

Again, my definition of orthogonality comes through. □

You can actually look at this more generally without too much work. Take two vectors u = [u1; u2] and v = [v1; v2]. If you think in terms of lines, each of these has a slope determined by rise/run. So the slope of u is u2/u1, and the slope of v is v2/v1. If u and v are perpendicular, then the slope of v must be the negative reciprocal of the slope of u. So

v2/v1 = −u1/u2.

Cross-multiplying the denominators,

u2 v2 = −u1 v1.

And this rearranges neatly to

u1 v1 + u2 v2 = 0.

3 Orthogonal projections (for real this time)

With inner products under my belt, orthogonal projections are a piece of cake. Here's a quick recap: the orthogonal projection onto v, which I'll call P_v, takes a vector u and "projects" it onto v, as in this picture. I want to figure out what P_v does. Since P_v u is parallel to v, I know I'll wind up with

P_v u = αv  (14)

for some scalar α.
The trick is finding α. The first thing I'll do is split u up like this:

u = αv + v⊥.  (15)

If I take the dot product of both sides with v, I get

u · v = αv · v + v⊥ · v  (16)
      = αv · v,  (17)

since v⊥ is perpendicular to v. This gives me

α = (u · v)/(v · v).  (18)

And this tells me that

P_v u = ((u · v)/(v · v)) v.  (19)

This perspective on projections is really, really important.

4 When is a transformation invertible?

The other thing I want to do in this lecture is figure out how to reverse-engineer a transformation. That is, say you're given a transformation specified by a matrix, like

T = [4 3; 3 5].  (20)

If I see what this does to a bunch of vectors, I'd strongly suspect that it's scaling in two different directions. So it's like one of these scaling transformations we've been building. The trick is figuring out how to get at the scaling factors and scaling directions. The most important mathematical issue behind this is, somewhat curiously, knowing when a transformation is invertible. Or, perhaps better, knowing when a transformation is not invertible. There are two different ways of looking at this, both of which are essential.

4.1 Invertibility and the zero vector

The zero vector, denoted 0, is a vector with all 0's as its components. So in R²,

0 = [0; 0].  (21)

If I multiply the zero vector by a matrix, I get the zero vector back:

[a b; c d] [0; 0] = [0; 0].  (22)

This provides a nice way to look at invertibility.

Fact. If T is invertible, then 0 is the only vector that it maps to 0. That is, Tv = 0 if and only if v = 0.

Proof. Suppose that

Tv = 0.  (23)

Then, multiplying both sides on the left by T⁻¹,

v = T⁻¹0 = 0.  (24)

The best way to understand this is to look at something that isn't invertible. For example, look at the orthogonal projection onto a vector v. I used v⊥ to refer to the direction perpendicular to v. And

P_v v⊥ = ((v⊥ · v)/(v · v)) v = 0v = 0.  (25)

So P_v maps v⊥ to the zero vector. Consequently, P_v is not invertible.

4.2 Determinants

Let M = [a b; c d]. Then the determinant of M is

det(M) = ad − bc.  (26)

In general, determinants are only defined for square matrices. The definition for larger matrices is based on a recursive formula.
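Before moving on, the projection formula is easy to sanity-check numerically. This Python sketch (my own illustration, not course code) computes P_v u = ((u · v)/(v · v)) v and verifies that the leftover part u − P_v u really is orthogonal to v.

```python
# Numerical check of the projection formula and of the split u = alpha*v + v_perp.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project(u, v):
    a = dot(u, v) / dot(v, v)    # the scalar alpha from the derivation
    return [a * vi for vi in v]

u = [3.0, 4.0]
v = [1.0, 0.0]
p = project(u, v)
print(p)                          # [3.0, 0.0]: the part of u parallel to v
r = [ui - pi for ui, pi in zip(u, p)]
print(dot(r, v))                  # 0.0: the leftover part is orthogonal to v
```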
For now, though, I want to look at some specifics of the 2 × 2 case.

4.3 Determinants and inverses

The main importance of determinants is the following fact: a square matrix M is invertible if and only if det(M) ≠ 0. This is true for any size matrix. For 2 × 2 matrices, it's easy to see how this is related to the inversion formula. If M = [a b; c d], then

M⁻¹ = (1/(ad − bc)) [d −b; −c a].  (27)

If det(M) = 0, then this formula doesn't even make sense, corresponding to the fact that M isn't invertible. For an n × n matrix M, there's a more complicated inversion formula. This also depends on 1/det(M).

5 Finding scaling factors

5.1 The general setup

With these two perspectives on invertibility figured out, I'm in a good place to reverse-engineer a scaling. Say I'm looking at a linear transformation T: R² → R² that someone else has given me, and I want to find out if I can think of it as scaling in two different directions. Determinants, and their relation to inverses, give me a way to do this. Suppose that λ is a scaling factor for T. This means that there's a vector v such that

Tv = λv.  (28)

That is, v provides the direction in which T scales by λ. I can rearrange (28) to say that

Tv − λv = 0,  (29)

and I can rewrite this equation as

(T − λI)v = 0.  (30)

Since v is a nonzero vector that T − λI maps to 0, this tells me that T − λI can't be invertible. So for λ to be a scaling factor, I must have

det(T − λI) = 0.  (31)

The nice thing about this condition is that it doesn't depend on the scaling direction at all. If I have a matrix T, then this gives me a purely algebraic condition to find the scaling factors. After I do that, it's a lot easier to find the scaling directions: for each scaling factor λ, I can just manually look for the vector v such that Tv = λv.

5.2 An example

Suppose that

T = [4 3; 3 5].  (32)

The first thing I do is calculate

T − λI = [4 3; 3 5] − λ[1 0; 0 1]  (33)
       = [4−λ 3; 3 5−λ].  (34)

Then, to find which λ actually give me scaling factors, I set

det(T − λI) = 0  (35)
(4 − λ)(5 − λ) − 3·3 = 0  (36)
20 − 9λ + λ² − 9 = 0  (37)
λ² − 9λ + 11 = 0.  (38)

From the quadratic formula, the values of λ that solve this equation are

λ = (9 ± √(81 − 44))/2 = 9/2 ± √37/2.  (39)
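The condition det(T − λI) = 0 for a 2 × 2 matrix is just a quadratic in λ, so it's easy to automate. Here's a Python sketch (my own illustration, not course code) that applies the quadratic formula to a general 2 × 2 matrix and reproduces the scaling factors of the example.

```python
import math

# Scaling factors of T = [a b; c d] from det(T - lambda*I) = 0, which
# expands to lambda^2 - (a+d)*lambda + (ad - bc) = 0.
def scaling_factors(a, b, c, d):
    tr = a + d                            # trace of T
    det = a * d - b * c                   # determinant of T
    disc = math.sqrt(tr**2 - 4 * det)     # assumes the roots are real
    return (tr - disc) / 2, (tr + disc) / 2

lam1, lam2 = scaling_factors(4, 3, 3, 5)
print(round(lam1, 4), round(lam2, 4))     # 1.4586 7.5414
```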
This is a bit messy, so I'll approximate these values:

λ ≈ 4.5 ± 3.0414.  (40)

So my two approximate scaling factors are

λ1 = 1.4586,   λ2 = 7.5414.  (41)

Now I'll look for the scaling directions. Say that u = [u1; u2] gives me the first scaling direction, so

Tu = λ1 u.  (42)

In terms of the algebra, it's more convenient (and conventional) to rewrite this as

(T − λ1 I)u = 0.  (43)

Writing this out in components,

[4 − 1.4586, 3; 3, 5 − 1.4586] [u1; u2] = [0; 0]  (44)
[2.5414, 3; 3, 3.5414] [u1; u2] = [0; 0]  (45)
2.5414 u1 + 3 u2 = 0,   3 u1 + 3.5414 u2 = 0.  (46)

If I look at just the first component of this, I get

2.5414 u1 + 3 u2 = 0  ⟹  u1 = −1.1805 u2.  (47)

On the other hand, if I look at just the second component, I get

3 u1 + 3.5414 u2 = 0  ⟹  u1 = −1.1805 u2.  (48)

It's the exact same thing. If we're just thinking about equations, this may seem a little weird: it means that I can't solve for exact values of u1 and u2. But if I think about what I'm doing geometrically, this totally makes sense. u just gives me a direction; it doesn't matter what values I take for the components, as long as u is pointing the right way. So, to make life easy, I'll just set u2 = 1. This gives me

u = [−1.1805; 1].  (49)

The transformation T scales by λ1 = 1.4586 in this direction. Finding the other scaling direction is just a repeat of the same process. I end up with

v = [0.8471; 1].  (50)

T scales by λ2 = 7.5414 in this direction.

AMATH 352, LECTURE 20, AUGUST 11, 2008

1 General polynomial fitting

1.1 Method

In the last lecture, we talked about fitting points with a line by solving a least squares problem. This generalizes pretty easily to fitting points with polynomials of higher degree, which allows for a bit more flexibility. Like last time, suppose I start off with data points

(x1, y1), (x2, y2), (x3, y3), ..., (xN, yN).  (1)

This time, instead of assuming a linear relationship, I'll assume that y is related to x by a polynomial of degree m:

y ≈ a1 + a2 x + a3 x² + ⋯ + a(m+1) x^m.  (2)

Just like last time, my data points give me N different equations, only now there are m + 1 unknowns:

a1 + a2 x1 + a3 x1² + ⋯ + a(m+1) x1^m = y1
a1 + a2 x2 + a3 x2² + ⋯ + a(m+1) x2^m = y2
a1 + a2 x3 + a3 x3² + ⋯ + a(m+1) x3^m = y3
⋮
a1 + a2 xN + a3 xN² + ⋯ + a(m+1) xN^m = yN.  (3)

I need at least as many equations as variables to get a fitting polynomial, so I'd better have m + 1 ≤ N.
When I write this system in matrix-vector form, it's

[1 x1 x1² ⋯ x1^m; 1 x2 x2² ⋯ x2^m; 1 x3 x3² ⋯ x3^m; ⋮; 1 xN xN² ⋯ xN^m] [a1; a2; ⋮; a(m+1)] = [y1; y2; y3; ⋮; yN].  (4)

Like I did for line fitting, I'll label this as

Ma = y.  (5)

Then, same as last time, I solve for a in two steps:

1. Factorize M = QR.
2. Solve Ra = Q'y with back substitution.

1.2 An example

The code below goes through this fitting process, fitting a degree-4 polynomial to a sine curve with some noise thrown in.

x = linspace(0, 10, 21)';
rand('twister', 4016)
y = sin(x) + .2*(rand(size(x)) - .5);
plot(x, y, '.')

% Make the matrix M
% Pick a degree
m = 4;
M = [ones(size(x)) x x.^2 x.^3 x.^4];

% QR factorization
[Q, R] = qr(M, 0);

% Solve for a
a = backsubstitute(R, Q'*y);

p = a(1) + a(2)*x + a(3)*x.^2 + a(4)*x.^3 + a(5)*x.^4;
hold on
plot(x, p, 'r')
hold off

Note that I have to use x.^2 for x², x.^3 for x³, and so on. This is because I want to square/cube/etc. the elements of the vector x individually. Without the extra dot, MATLAB would try to use matrix multiplication for the powers, and it would get really confused. The plot this code generates looks like this. [Figure: the noisy sine data with the degree-4 fitting polynomial overlaid, for 0 ≤ x ≤ 10.] The fit isn't stellar, but it's not so bad given that the polynomial only has 5 degrees of freedom.

Before using a larger polynomial, there's one change that makes this code a lot more flexible. Replace the line defining p with

p = polyval(flipud(a), x);

The first argument of the built-in function polyval is a list of coefficients that define a polynomial. The only catch is that MATLAB's convention is to take the first element of the list to be the coefficient of the highest power of x. (This is a little silly, if you ask me.) Saying flipud(a) does an up-and-down flip of a, converting our order into MATLAB's preference. The second argument, x, is a vector of values where we want the polynomial evaluated.

With this in place, the only thing you have to change to use higher-degree polynomials is the line that defines M. For example, to use a degree-6 polynomial, you'd say

M = [ones(size(x)) x x.^2 x.^3 x.^4 x.^5 x.^6];

Give it a try. It's pretty awesome how well it works.
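What polyval does under the hood is easy to write yourself. Here's a Python sketch (my own illustration, not course code) of polynomial evaluation by Horner's rule, using the same lowest-power-first coefficient order as the vector a above (the reverse of MATLAB's convention).

```python
# Evaluate a1 + a2*x + a3*x^2 + ... given coefficients in lowest-power-first
# order, using Horner's rule: read coefficients from the highest power down.
def polyeval(a, x):
    result = 0.0
    for coeff in reversed(a):
        result = result * x + coeff
    return result

a = [1.0, 0.0, 2.0]          # represents 1 + 0*x + 2*x^2
print(polyeval(a, 3.0))      # 1 + 0*3 + 2*9 = 19.0
```

Horner's rule avoids computing each power of x from scratch, which is why it's the standard way to evaluate a polynomial from its coefficient list.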
lectures7 we7ll take a look at polynomial approximations of functions of a real variable This topic is really important in numerical analysis for continuous functions7 which is a pretty huge topic Continuous function show up all over the place in mathematical models usually in the form of differential equations7 and guring how to approxi mate these using discrete math on a computer is a bit tricky Polynomials are one of the only types of continuous functions that computers handle really well7 so they7re essential to this type of analysis There are a couple of major reasons for looking at this in a linear algebra class7 as well Namely o This topic produces a lot of linear systems of equations 0 Functions are vectors in their own right The second reason is the most interesting7 and if you7re not used to it7 perhaps a little odd First off7 consider a long vector say7 something in R7 Trying to picture this as an arrow in 7 dirnensional space isn7t terribly pro ductive However thinking of it as a function well we7ve kind of tricked you into doing that already If you wanted to get at the individual components ofthis vector in Matlab you7d say v1 v2 and so on As a function V 1 2 7 7 R with V1 3 V2 72 V3 1 V4 5 etc 7 Each component of V just provides one of its possible outputs There7s also a decent option for plotting V PM just make an s axis for its possible inputs and plot each output as a point 9 4 is 2 as 4 Vs 2 as 74 4 72 o 2 4 5 8 10 A full blown function of a real variable f R 7 R just gets points plotted for all possible inputs lt7s kind of like a column vector but instead of a discrete number of entries it has a full lines worth 7 one for every real number Also the main component of linear algebra 7 linear combinations 7 makes just as much sense for functions of a real variable as they do for standard vectors lf 1 take two functions fg R 7 R then for any real numbers 04 and B the linear combination of 69 is a function as well For example if f cosx and g em then 
For example, if f(x) = cos(x) and g(x) = eˣ, then

(2f + 3g)(x) = 2 cos(x) + 3eˣ  (8)

and

(fg)(x) = eˣ cos(x).  (9)

The set of all functions from R → R, as it turns out, is a proper vector space. There are some technical details that go into this, but the fact that this set is closed under linear combinations is the main component.

In practice, the set of all functions f: R → R is way too general to work with. These functions can do all kinds of weird things. For instance, you can define a function that's 1 at the rational numbers and zero everywhere else. There are also functions that are continuous everywhere but not differentiable anywhere. For modeling purposes, we're usually interested in various levels of smoothness of a function, where smoothness relates to continuity and differentiability. Conveniently, these sets form subspaces of the set of all functions from R → R. The most common ones are:

C⁰: all continuous functions
C¹: all continuous functions with 1 continuous derivative
C²: all continuous functions with 2 continuous derivatives
⋮
C^∞: all infinitely differentiable functions

For computational purposes, the most interesting subspaces are of the form

P_m: all polynomials of degree ≤ m.

That's because all the interesting things you can do with polynomials are straightforward to compute: calculating values, finding derivatives, finding integrals, and probably anything else you can think of. So instead of trying to work with continuous functions directly, we'll try to approximate them with polynomials, and do our computations with those polynomials. In the last couple lectures I'll build up this idea, and maybe see if we can numerically solve a differential equation in the last lecture.

AMATH 352, LECTURE 19, AUGUST 8, 2008

1 Least squares problems

1.1 An overdetermined system

The topic for today is overdetermined systems of equations, that is, systems with more equations than variables. For example, here's a system of three equations I obtained at home:

0.0812 g = 2.48
H − 0.1162 g = 0
H − 0.1270 g = 0

I got the first equation from my dining room table, after measuring its height and finding
the amount of time it took a coin to fall to the floor, using h = (g/2)t². The second and third equations are from my kitchen counter, whose height H I didn't measure. Presumably, these three equations should let me estimate H, along with the gravitational acceleration g.

The problem is, this system has no exact solution. It's pretty easy to check that I can solve any two of the equations together, but as soon as I include a third, everything breaks down. Also, if I try to solve the second and third equations, I get H = 0 and g = 0, which is worrisome in its own right. Instead, I'd like to find values of H and g that approximately solve this system, as best as possible. From the matrix-vector form of the system, Ax = b, this is done in two steps:

1. Factorize A = QR.
2. Solve Rx = Q'b.

This is a little odd, since that's the same way I'd solve an invertible system using the QR factorization. The reason why this works is the main topic for today's lecture.

1.2 Why the QR factorization works

I'll go through the details of why the QR factorization gives a good approximate solution for my example problem. The details for larger systems are similar, just with more variables, and the procedure for getting the end result doesn't change at all.

Consider, abstractly, the QR factorization of the matrix A. The columns of Q are obtained by orthonormalizing the columns of A from left to right. Since the columns of A form a basis for Range(A) (assuming that they're linearly independent, which they definitely are in this case), the columns of Q are an orthonormal basis for Range(A). This means that I can write b as

b = Q:,1(Q:,1' b) + Q:,2(Q:,2' b) + E,  (1)

where E is orthogonal to both Q:,1 and Q:,2. This is the exact same idea that I wrote earlier in the course as

b = q1(q1 · b) + q2(q2 · b) + E.  (2)

The only difference here is that my q's are the columns of Q, and I'm using Q:,1' b to denote the dot product Q:,1 · b.

I'd like to solve Ax = b. But the vector E above is out of my league: it's perpendicular to Range(A), and there's just nothing I can do about it. The best I can possibly hope for
is to match the first two parts in the expansion of b. Namely,

Ax = Q:,1(Q:,1' b) + Q:,2(Q:,2' b).  (3)

This part, throwing away E so that I can solve what's left over exactly, is the substance of solving the overdetermined system. The main point is that I forced the part I threw out to be orthogonal to Range(A), and the columns of Q provided a convenient way to do this.

From here, I'm solving an exact system. The right side of (3) is just a linear combination of the columns of Q. And if I replace Ax on the left side with QRx, then the left side is a linear combination of the columns of Q as well. The coefficients in this linear combination are provided by the vector y = Rx. (This is similar to substituting y = Ux when using the LU decomposition to solve a system.) When I plug this into (3), I get

Qy = Q:,1(Q:,1' b) + Q:,2(Q:,2' b).  (4)

Writing out the left side in a bit more detail,

[Q:,1 Q:,2] [y1; y2] = Q:,1(Q:,1' b) + Q:,2(Q:,2' b).  (5)

And breaking up the left side in terms of the columns of Q, I get

y1 Q:,1 + y2 Q:,2 = Q:,1(Q:,1' b) + Q:,2(Q:,2' b).  (6)

The only way this can be true is if the coefficient of Q:,1 on the left matches the coefficient of Q:,1 on the right, and likewise for Q:,2. (This is true because Q:,1 and Q:,2 are linearly independent.) So I must have

y1 = Q:,1' b,   y2 = Q:,2' b.  (7)

This tells me what y is, so now I can go back to the system Rx = y to find x. The one final trick is to rewrite this as

Rx = [Q:,1' b; Q:,2' b] = Q'b.  (8)

This last step comes from a row-based perspective on matrix-vector multiplication. I haven't emphasized this nearly as much as the column-based perspective, mostly because this is the only place we'll use it. I'll include a section on that at the end of these notes, in case you're interested. In the meantime, take it on faith. That, and the fact that if matrix-vector multiplication is sensible, I should be able to factor out the b like I did above:

[Q:,1' b; Q:,2' b] = [Q:,1'; Q:,2'] b = Q'b.  (9)

1.3 Solving my sample problem

To find an approximate solution to my sample system of equations, I run these commands in MATLAB:

>> A = [0 .0812; 1 -.1162; 1 -.1270];
>> b = [2.48; 0; 0];
>> [Q, R] = qr(A, 0);
>> x = backsubstitute(R, Q'*b)
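The least squares solution can also be cross-checked by a different route: the normal equations A'Ax = A'b, which we haven't used in lecture but which give the same minimizer. For two unknowns this is just a 2 × 2 solve, so here's a Python sketch (my own illustration, not course code) applying it to the table/counter system.

```python
# Cross-check of the least squares solution via the normal equations
# (A'A) x = A'b; for two unknowns, a 2x2 solve by Cramer's rule.
A = [[0.0, 0.0812], [1.0, -0.1162], [1.0, -0.1270]]
b = [2.48, 0.0, 0.0]

# Build the 2x2 system N x = c, where N = A'A and c = A'b
N = [[sum(row[i] * row[j] for row in A) for j in range(2)] for i in range(2)]
c = [sum(row[i] * bi for row, bi in zip(A, b)) for i in range(2)]

det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
H = (c[0] * N[1][1] - c[1] * N[0][1]) / det
g = (N[0][0] * c[1] - N[1][0] * c[0]) / det
print(round(H, 4), round(g, 4))   # matches the QR result: 3.6813 30.2741
```

(The QR route is preferred in practice because forming A'A can amplify rounding error, but on a tiny, well-behaved system like this one the two answers agree.)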
x =
    3.6813
   30.2741

The first two lines set up A and b for my system. The third line performs the QR factorization. (Without the 0, the qr program does something slightly different than what we've seen.) Finally, the last line gets the approximate solution by using a home-built program for back substitution.

The end result here says that my kitchen counter is about 3.68 feet high, which seems pretty reasonable if you've spent much time in my kitchen. And it says that gravitational acceleration is about 30.27 ft/s², compared to the actual value of 32. Not too shabby for a home-brewed physics experiment.

2 Side topic: Multiplication by Q'

When Q has orthonormal columns, multiplying a vector by Q' has a special meaning. If Q is r × d, then

Q = [Q:,1 Q:,2 ⋯ Q:,d].  (10)

Suppose that v is a column vector in R^r, so it's of the same size as the columns of Q. Then I can break v down as

v = Q:,1(Q:,1' v) + Q:,2(Q:,2' v) + ⋯ + Q:,d(Q:,d' v) + E,  (11)

where the final vector E is orthogonal to all the columns of Q. The numbers multiplying Q:,1, Q:,2, and so on have a name of their own. The number Q:,1' v indicates how far I should go in the direction of Q:,1 if I want my end result to be the vector v. As such, this is called the component of v in the direction of Q:,1. Similarly, Q:,2' v is the component in the direction of Q:,2. And so on. When I multiply v by Q', I get

Q'v = [Q:,1'; Q:,2'; ⋮; Q:,d'] v = [Q:,1' v; Q:,2' v; ⋮; Q:,d' v].  (12)

This vector is a list of the components of v in the directions of the columns of Q.

3 Side topic: Why is it called "least squares"?

With the approach I've taken to this approximate-solution business, the name "least squares problem" seems a bit strange. Say I'm looking at an overdetermined system Ax = b, where A is an r × d matrix. For a given input x, the residual is defined by s = b − Ax. This provides a measure of how good an approximate solution is: the smaller ‖s‖ is, the closer Ax is to the desired output b. The approach we've used minimizes ‖s‖, as it turns out. And ‖s‖ is provided by the formula

‖s‖ = √(s · s) = √(s1² + s2² + ⋯ + sr²).  (13)

The word "least" refers to
the fact that I'm minimizing this quantity. And the word "squares" refers to the squares of the components of s in the length formula.

4 Line fitting

Solving least squares problems opens up the door to the wonderful world of curve fitting. Seriously, this stuff is freaking awesome. Say I have a set of data points

(x1, y1), (x2, y2), (x3, y3), ..., (xN, yN)  (14)

that I, say, got from some sort of experiment. Here, x should be the independent variable, while y should be the dependent variable. So, for example, perhaps

- x = time elapsed during a chemical reaction, y = amount of product formed
- x = time from detection of a tumor, y = size of the tumor

However it works, the main thing I'm interested in is how y is related to x. (By itself, x is pretty boring.) You might notice that x can refer to time. This is a little bit unfortunate, due to other conventions. It's painful, I know, but unfortunately it's a bit traditional, too.

Anyhow, suppose that I think there's a linear relationship between x and y, so

y ≈ a1 + a2 x  (15)

for some values of a1 and a2. (It's more common to start off with a0, but MATLAB makes me start indexing a vector with 1, and eventually I'll want to use even more values.) If I assume this relationship holds, then my N data points give me N different equations:

a1 + a2 x1 = y1
a1 + a2 x2 = y2
a1 + a2 x3 = y3
⋮
a1 + a2 xN = yN.  (16)

Before I write this in matrix-vector form, I should emphasize one thing. The unknowns, the things I'm trying to solve for, are a1 and a2. Meanwhile, each x and y represents a known value. This is a little weird, and takes some getting used to. In matrix-vector form, this system is

[1 x1; 1 x2; 1 x3; ⋮; 1 xN] [a1; a2] = [y1; y2; y3; ⋮; yN].  (17)

Since my letters are all screwed up now, I'll refer to this as

Ma = y.  (18)

And now I can find coefficients to fit my data by following an easy two-step plan:

1. Factorize M = QR.
2. Solve Ra = Q'y.

To see this in action, grab the program noisyline.m that's posted alongside these lecture notes. By entering

>> [x, y] = noisyline(100);

you'll generate column vectors x and y that provide 100 points that lie approximately along a line.
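The two-step plan leans on a home-built back substitution routine; the backsubstitute.m program itself isn't reproduced in these notes, so here's my guess at what such a routine looks like, sketched in Python. It solves Rx = y for an upper triangular R by working from the bottom row up.

```python
# A sketch of back substitution for an upper triangular system R x = y:
# the last equation gives x[n-1] directly, and each row above it only
# involves unknowns that have already been found.
def backsubstitute(R, y):
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(R[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / R[i][i]
    return x

R = [[2.0, 1.0], [0.0, 3.0]]
y = [5.0, 6.0]
print(backsubstitute(R, y))   # [1.5, 2.0]
```

This only works because R is triangular, which is exactly what the QR factorization buys us: the hard part of the solve is pushed into the factorization.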
If you plot them using

>> plot(x, y, '.')

you'll get something like this. [Figure: a scatter of 100 noisy data points lying roughly along a line.] To produce the matrix M for this system of equations, I can enter

>> M = [ones(size(x)) x];

(The ones command simply produces an array of all 1's, which is useful more often than you'd think.) From here, finding a line to fit these points is the simple two-step procedure I mentioned before: factorize M = QR, and then solve Ra = Q'y.

>> [Q, R] = qr(M, 0);
>> a = backsubstitute(R, Q'*y);

After doing this, the vector a stores the coefficients for the line y = a1 + a2 x. To plot this line in red, I enter

>> hold on
>> plot(x, a(1) + a(2)*x, 'r')

And suddenly I have this lovely picture. [Figure: the scattered data points with the fitted red line running through them.] Pretty sweet, huh?

AMATH 352, LECTURE 3, JUNE 27, 2008

1 Mathematical concepts of the day

1.1 Matrix representations in general

In the last lecture, I found the matrix representation for a rotation by looking at what it does to the standard basis vectors, e1 and e2. The columns of the matrix were determined by what happened to these two vectors. This isn't particular to rotations. You can do this to get a matrix representation for any linear transformation. As a result, matrices and linear transformations are often used interchangeably, and I'll probably start doing just that before too long. Here's a theorem that sums up this relationship.

Theorem. Every linear transformation T: R² → R² has a matrix representation.

Proof. Consider any vector in R², v = [v1; v2]. Then

v = v1 e1 + v2 e2.  (1)

Using the fact that T is linear, I get

T(v) = T(v1 e1 + v2 e2)  (2)
     = v1 T(e1) + v2 T(e2).  (3)

Now let

[t11; t21] = T(e1),   [t12; t22] = T(e2).  (4)

(I'm just giving a name to the components of T(e1) and T(e2).) Plugging these into (3), I get

T(v) = [t11 t12; t21 t22] [v1; v2].  (5)

This tells me that T = [t11 t12; t21 t22] is a matrix representation of the transformation T. □

The biggest thing to take away from this is that the columns of T are simply T(e1) and T(e2).

1.2 Inverse of a transformation

Say I'm looking at a transformation T: R² → R². Then the inverse of T, denoted T⁻¹, is the transformation such that

T⁻¹(T(v)) = T(T⁻¹(v)) = v for all v ∈ R².  (7)

Less technically, T⁻¹ is the transformation that "undoes" T. Likewise, if I
apply T^-1 first, then T "undoes" T^-1.

Not all transformations have inverses. A little later on, we'll talk about what exactly happens when a linear transformation doesn't have an inverse.

1.3 Inverse of a matrix

Say that M is a 2 x 2 matrix. Then the inverse of M, denoted M^-1, is the 2 x 2 matrix such that

M^-1 M = M M^-1 = [1 0; 0 1].   (8)

The matrix on the right-hand side is called the identity matrix, and it's usually denoted I. It's easy to show that Iv = v for all v in R2.

2 Scaling in arbitrary perpendicular directions

Last lecture, we talked about using linear transformations to rotate vectors in R2. We also talked about scaling in the horizontal and vertical directions. But what if we want to scale in two directions other than the horizontal and vertical? This is trickier. To make life a little easier, let's start off by saying the two directions in which we want to scale are perpendicular to each other. There's a nice trick in this case.

2.1 An example

First, an example. Say I want to scale by a factor of 2 in the direction of f1 = [1; 1], and by a factor of 1/3 in the direction of f2 = [-1; 1]. These directions are the horizontal and vertical rotated by π/4. I'll call the transformation that does this T, and I'll call its matrix representation T.

In principle, I could find this transformation by doing a bit of algebraic slog. But instead, I'll break this transformation up into three steps:

1. Apply R_{-π/4} to rotate clockwise by π/4. This rotates the directions of f1 and f2 to the horizontal and vertical, respectively.
2. Apply a scaling transformation S, which scales by a factor of 2 in the horizontal and by a factor of 1/3 in the vertical.
3. Apply R_{π/4} to rotate back to my original situation.

In terms of composing these three functions, this says that

T = R_{π/4} ∘ S ∘ R_{-π/4}.   (9)

If I'm working with the matrix representations, then the ∘ for function composition is replaced by matrix multiplication:

T = R_{π/4} S R_{-π/4}.   (10)

2.2 In general

Now I'll be a little more general about this. Say I want to scale in two directions specified by the vectors f1 and f2. But all I know
is that f1 and f2 are perpendicular. I'll assume that I get f2 by rotating f1 counterclockwise by π/2 (if I didn't happen to get them in that order, I could just relabel them to make it that way). I'll use λ1 for the scale factor in the direction of f1, and λ2 for the scale factor in the direction of f2. Now here's what I do:

1. Find the angle θ by which f1 is off of the horizontal.
2. Apply the rotation R_{-θ}. This rotates f1 onto the horizontal and f2 onto the vertical.
3. Apply the scaling represented by S = [λ1 0; 0 λ2].
4. Apply the rotation R_{θ} to move everything back where it belongs.

3 Scaling in arbitrary directions

It's possible that I'd want to scale in directions that aren't perpendicular to each other. In this case, life gets more difficult. The big difference is that the angles between vectors don't get preserved. For example, two vectors that are perpendicular to each other will probably not be perpendicular after the transformation is applied. But we can still make this work. It just involves updating our strategy a little bit.

3.1 An example

Say the directions I want to scale in are two vectors f1 and f2 that aren't perpendicular. My scale factors are λ1 = 2 and λ2 = 1/3. The transformation that does the scaling is T, and it's represented by the matrix T. Here's the battle plan this time around:

1. Apply a linear transformation that maps f1 to e1, and maps f2 to e2.
2. Scale e1 and e2 appropriately.
3. Apply a linear transformation that maps e1 back to f1, and maps e2 back to f2.

Step 2 is easy; I know how to do the scaling part. It's represented by

S = [2 0; 0 1/3].   (12)

This is actually the same strategy I used for perpendicular scalings. The only difference is that I can't use rotations to do Steps 1 and 3. Instead, I have to find a different linear transformation that does the trick.

Step 3 is the easiest place to start. For the transformation at that step, I'm saying exactly where e1 and e2 go. This should make the matrix representation really easy to find. If F is the linear transformation, then

F = [f1 f2].   (13)

Now Step 1 should be a little easier to
understand. There, I'm applying the opposite transformation of F, in other words F^-1. The matrix representation of F^-1 is F^-1. And to find this, I can whip out the general formula for inverting a 2 x 2 matrix:

[a b; c d]^-1 = 1/(ad - bc) [d -b; -c a].   (14)

This gives me F^-1. Now that I know how to implement the transformation at each step, I can get the matrix for the entire transformation T by multiplying everything out:

T = F S F^-1.   (16)

AMATH 352, LECTURE 5: JULY 2, 2008

1 Geometry of R3

1.1 Basics

A vector in R3 is a list of 3 real numbers. I'll write these in a column, so vectors look like

v = [v1; v2; v3].   (1)

Like vectors in R2, there are two basic actions that tie all vectors together. One is addition:

u + v = [u1; u2; u3] + [v1; v2; v3] = [u1 + v1; u2 + v2; u3 + v3].   (2)

The other is scalar multiplication: if v is a vector and α is a scalar (real number),

αv = α [v1; v2; v3] = [αv1; αv2; αv3].   (3)

Together, these two actions combine to form linear combinations. If u, v, and w are vectors in R3, and α1, α2, and α3 are scalars, then

α1 u + α2 v + α3 w   (4)

is a linear combination of u, v, and w. Saying "R3 is a vector space" means that a linear combination of vectors in R3 always gives me back another vector in R3.

The standard basis vectors in R3 are

e1 = [1; 0; 0],   e2 = [0; 1; 0],   e3 = [0; 0; 1].   (5)

And much like in R2, any vector in R3 can be written as a linear combination of these three vectors:

v = [v1; v2; v3] = v1 e1 + v2 e2 + v3 e3.   (6)

1.2 Inner product

The inner product of two vectors in R3 is a lot like the inner product of two vectors in R2:

u · v = [u1; u2; u3] · [v1; v2; v3] = u1 v1 + u2 v2 + u3 v3.   (7)

Conveniently, the definitions of length and orthogonality I gave in the last lecture don't have to be modified at all. (This is why inner products are awesome.) So the length of a vector is still

||v|| = sqrt(v · v).   (8)

And two vectors u and v are still orthogonal if

u · v = 0.   (9)

This definition of orthogonality is really important in R3, since it's a lot harder to tell if two vectors are orthogonal than it was in R2. As an example, let me pick a couple of vectors that are orthogonal according to the definition, and
then see what they look like in a picture. Let u = [1; 1; 1] and v = [1; -2; 1]. Then

u · v = 1(1) + 1(-2) + 1(1) = 0.   (10)

So u and v are orthogonal. If I plot these in Matlab (using some custom scripts to get the 3D arrows and rotate the figure appropriately), I get this. So, are they perpendicular? I'd buy it.

1.3 Surfaces

For what we're going to look at, surfaces are just a way to visualize what happens to vectors. Check out this sphere. Each point in the mesh is specified by a vector, so in this case a vector in R3 is a rule telling Matlab where to draw a point. (Matlab also uses some additional data to tell which points are connected by lines, but we won't worry about that.) The thing we're interested in is the vectors themselves, and how they're affected by linear transformations.

2 Scalings in R3

The first type of linear transformation in R3 we'll look at is a scaling. Given the way we've talked about implementing scalings in R2, it's actually not that much harder to do them in R3.

2.1 Scaling along the axes

The first thing we'll do is scale in the x, y, and z directions. This is the easiest type to implement, and it'll form the backbone of any other scalings we do. Plus, it gives us some pretty pictures to look at.

Say I want to find the linear transformation S that scales as follows:

- By a factor of λ1 = 2 in the x direction.
- By a factor of λ2 = 1 in the y direction.
- By a factor of λ3 = 1/2 in the z direction.

So I want S to do the following three things:

S(e1) = 2 e1       (scaling in the x direction)   (11)
S(e2) = e2         (scaling in the y direction)   (12)
S(e3) = (1/2) e3   (scaling in the z direction)   (13)

Now say that v = [v1; v2; v3] is a generic vector in R3, and I want to find S(v). I can use the standard basis vectors to write

v = v1 e1 + v2 e2 + v3 e3.   (14)

Since S is linear,

S(v) = S(v1 e1 + v2 e2 + v3 e3)   (15)
     = S(v1 e1) + S(v2 e2) + S(v3 e3)     (S preserves addition)   (16)
     = v1 S(e1) + v2 S(e2) + v3 S(e3).    (S preserves scaling)   (17)

For the particular transformation I'm looking at, I can plug in S(e1), S(e2), and S(e3) to get

S(v) = 2 v1 e1 + v2 e2 + (1/2) v3 e3.   (18)

Everything that S does is determined by what it does to e1, e2, and e3. The matrix representation for S is a way of bookkeeping these
three pieces of information:

S = [2e1  e2  (1/2)e3] = [2 0 0; 0 1 0; 0 0 1/2].   (19)

Then matrix multiplication works like this:

S v = [2 0 0; 0 1 0; 0 0 1/2] [v1; v2; v3] = v1 [2; 0; 0] + v2 [0; 1; 0] + v3 [0; 0; 1/2]   (20)
    = [2v1 + 0v2 + 0v3; 0v1 + v2 + 0v3; 0v1 + 0v2 + (1/2)v3].   (21)

Of course, I don't need to include all the 0's in the last expression here. However, this is the way you'd do the matrix multiplication with its more conventional definition. I haven't talked about the general formula for matrix multiplication yet. It's a somewhat convenient way of getting the individual components in a product of matrices, but it doesn't tell you much about what's going on geometrically. As such, I want to keep our focus on the formula (20): the matrix multiplication Sv gives me a linear combination of the columns of S, weighted by the components of v.

What does this look like? Well, let's start off with a sphere. The arrows here point in the x, y, and z directions, which are my scaling directions. After I apply S, I can see that it's gotten longer in the x direction, stayed the same in the y direction, and gotten shorter in the z direction.

2.2 Scaling in other directions

Now say I'm after the transformation T that scales like this:

- By a factor of λ1 = 2 in the direction of f1 = [2; 7; 3].
- By a factor of λ2 = 1 in the direction of f2 = [1; 0; 1].
- By a factor of λ3 = 1/2 in the direction of f3 = [0; 6; -2].

The strategy to implement this isn't so different from what we did in 2 dimensions. The only major difference is that there's another vector floating around. Here's the plan:

1. Apply the linear transformation that sends f1 to e1, f2 to e2, and f3 to e3.
2. Use the scaling S from the previous section.
3. Apply the linear transformation that sends e1 to f1, e2 to f2, and e3 to f3.

Step 2 is the easiest; we've already figured that part out. Then, as it happens, it's easier to figure out Step 3 before Step 1. This is a little odd, perhaps, but the whole advantage of breaking this up into steps is so we can deal with the individual pieces separately.

Call the transformation in Step 3 F. Then here's what F needs to do:
F(e1) = f1,   F(e2) = f2,   F(e3) = f3.

Well, heck. This tells me exactly what F does to the standard basis vectors, which gives me the columns of the matrix representation for F:

F = [f1  f2  f3] = [2 1 0; 7 0 6; 3 1 -2].   (23)

That takes care of Step 3. Step 1 is a lot easier now: it's doing the exact opposite of Step 3, which means that the transformation here is the inverse of the transformation in Step 3. Inverting matrices is a subject all its own, but I'll let Matlab handle this for me. I'll enter

>> F = [2 1 0; 7 0 6; 3 1 -2];
>> format rational
>> inv(F)

ans =
   -3/10     1/10     3/10
    8/5     -1/5     -3/5
    7/20     1/20    -7/20

Now I've got my transformation for Step 1:

F^-1 = [-3/10 1/10 3/10; 8/5 -1/5 -3/5; 7/20 1/20 -7/20].   (24)

So to get the full transformation T,

T = F S F^-1.   (25)

It's a little strange that F^-1 is the farthest on the right, even though it's the first transformation that's applied. That's because when I calculate F S F^-1 v for a vector v, F^-1 is the first matrix that hits v. Written another way,

T(v) = F( S( F^-1 v ) ),   (26)

where F^-1 is Step 1, S is Step 2, and F is Step 3.

3 Matlab code

Matlab is really flexible when it comes to entering matrices and vectors. I'm just going to lay out a way of doing it that's in line with the concepts I'm teaching. If you already know another way of doing it and prefer that, that's fine.

To make the matrix S, first set up the vectors e1, e2, and e3:

>> e1 = [1; 0; 0];
>> e2 = [0; 1; 0];
>> e3 = [0; 0; 1];

Then to make S, enter

>> S = [2*e1, e2, 1/2*e3];

Making the matrix F is similar:

>> f1 = [2; 7; 3];
>> f2 = [1; 0; 1];
>> f3 = [0; 6; -2];
>> F = [f1, f2, f3];

And to get F^-1, you can just say

>> inv(F)

Finally, to compute the product T = F S F^-1, enter

>> T = F*S*inv(F);

4 Next time

From here, we're going to start looking at linear systems of equations: equations built by adding and scaling a set of unknowns. This, alongside the geometry of linear transformations, is the other main component of linear algebra. Even though we usually tackle these equations looking for a bunch of numbers, it turns out the geometry we've been studying is a big part of finding solutions to linear systems.

AMATH 352, LECTURE 14: JULY 28, 2008

1 New programs

There are two new
programs along with today's lecture: gausselim2.m and easyjordan.m. (Actually, gausselim2.m is the same thing that comes with Homework 4.) For now, the main change in gausselim2.m is that you enter A and b separately, and they're returned separately as well. To reduce a system to echelon form, I'd say

>> [Aech, bech] = gausselim2(A, b);

This program also has a little more flexibility that we'll use later on. Meanwhile, easyjordan.m does the Jordan step of elimination, assuming you have a square system with one solution. To test it out, try this:

>> A = rand(4,4);
>> b = rand(4,1);
>> [Aech, bech] = gausselim2(A, b);
>> [Arech, brech] = easyjordan(Aech, bech);

Each "rech" stands for reduced echelon form. For the types of systems that easyjordan.m can be applied to, Arech will always be the identity, and brech will be the unique solution to Ax = b.

2 Systems with infinitely many solutions

2.1 Recap

Last time, we were checking out a linear system of equations Ax = b with infinitely many solutions. Here was the original augmented matrix:

>> rand('twister', 1003);
>> [A, b] = randomsolvable;
>> aug = [A, b]

aug =
     1   -25     9     6   -19    -2   -90
   -15   -15    -5    -4    20    -1    81
    12    63    13    29    28     6    19
     8    22    -2     6   -35    -5   -91
   -10     7    -9    24   -53    -5     5

And reducing the system,

>> aug2 = gausselim(aug)

aug2 =
    1.0000    1.0000    0.3333    0.2667   -1.3333    0.0667   -5.4000
         0    1.0000   -0.3333   -0.6314    0.8627   -0.1333    0.8980
         0         0    0.0000    1.0000   -2.1658   -0.0553   -1.7184
         0         0    0.0000         0    1.0000    0.3333    4.3333
         0         0    0.0000         0         0         0         0

This augmented matrix corresponds to the system of equations

x1 + 1.0000 x2 + 0.3333 x3 + 0.2667 x4 - 1.3333 x5 + 0.0667 x6 = -5.4000
          x2 - 0.3333 x3 - 0.6314 x4 + 0.8627 x5 - 0.1333 x6 =  0.8980
                               x4 - 2.1658 x5 - 0.0553 x6 = -1.7184
                                            x5 + 0.3333 x6 =  4.3333
                                                         0 =  0        (1)

We discussed that x1, x2, x4, and x5 are the pivot variables for the system: the variables that were used for pivoting at some step in Gaussian elimination. The other variables, x3 and x6, are the free variables. I can pick any values for the free variables, and I can still find a solution to the system. I'll record this breakdown of the variables in two arrays:

>> pivotvars = [1 2 4 5];
>> freevars = [3 6];

If I work with the equations from bottom to top, here's the sequence of events
that I'll follow:

1. Pick a value for x6 (free variable).
2. Solve for x5.
3. Solve for x4.
4. Pick a value for x3 (free variable).
5. Solve for x2.
6. Solve for x1.

2.2 Getting rid of zero rows

I obviously don't care about any 0 = 0 equations, and it's a little bit irritating having to sort them out manually. Fortunately, it's easy to strip them out of my system. Assuming my system is solvable, I'll only have one nonzero row for each of my pivot variables, and Gaussian elimination sorts all these to the top. I can get the number of pivot variables with length(pivotvars), which lets me do this nice little trick:

>> Aech = Aech(1:length(pivotvars), :);
>> bech = bech(1:length(pivotvars));

This just says I'm extracting only the rows of Aech and bech that I know are nonzero.

2.3 Finding a particular solution

Like we talked about once upon a time, the general solution to Ax = b comes from this scheme:

(General solution to Ax = b) = (Particular solution to Ax = b) + (Generic vector in Nullspace(A))

To get this process started, I'd like to come up with a particular solution. Since I can pick any values for x3 and x6, let's say I choose them to make my life as easy as possible: I'll choose x3 = x6 = 0. Then the system simplifies to this:

x1 + 1.0000 x2 + 0.2667 x4 - 1.3333 x5 = -5.4000
          x2 - 0.6314 x4 + 0.8627 x5 =  0.8980
                       x4 - 2.1658 x5 = -1.7184
                                   x5 =  4.3333        (2)

And I can solve this really easily by picking x5 off the last equation, substituting it into the third equation to solve for x4, and so on. This process is called back substitution.

I could do this by hand, but I'd like a way to make it happen in Matlab using the code I've already written. The trick is noticing how the left side of system (2) is related to Aech: its matrix is what I get by keeping only the columns of Aech corresponding to pivot variables. For lack of a better word, I'll call these the "pivot columns." I've stored the indices of the pivot variables in pivotvars, which makes it really easy to pull out the columns I'm interested in:

>> Aechpivot = Aech(:, pivotvars)

Aechpivot =
    1.0000    1.0000    0.2667   -1.3333
         0    1.0000   -0.6314    0.8627
         0         0    1.0000   -2.1658
         0         0         0    1.0000
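The back-substitution step described above is short enough to write out directly. Here's a pure-Python sketch of the same idea (the course's backsubstitute.m and easyjordan.m are Matlab files, so this stand-in and its toy 3 x 3 system are just for illustration): an upper-triangular system is solved from the bottom equation up.

```python
def back_substitute(U, b):
    # Solve U x = b for an upper-triangular U with nonzero diagonal,
    # working from the bottom row up (back substitution).
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# Toy system:  x1 + 2*x2 = 4,  3*x2 + x3 = 5,  2*x3 = 4
U = [[1.0, 2.0, 0.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 2.0]]
b = [4.0, 5.0, 4.0]
print(back_substitute(U, b))   # [2.0, 1.0, 2.0]
```

The bottom equation gives x3 = 2 immediately; each higher equation then only involves values that are already known, which is why no further elimination is needed once the system is triangular.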
Then I can solve the system (2) with

>> [Asolve, bsolve] = easyjordan(Aechpivot, bech)

Asolve =
     1     0     0     0
     0     1     0     0
     0     0     1     0
     0     0     0     1

bsolve =
   -3.6667
    2.0000
    7.6667
    4.3333

The values in bsolve are, in order, x1, x2, x4, and x5. But the full solution vector, x, needs to have 6 components. Again, there's a nice trick to set this up using pivotvars:

>> x = zeros(6,1);
>> x(pivotvars) = bsolve

x =
   -3.6667
    2.0000
         0
    7.6667
    4.3333
         0

Now, if you like, you can check that A*x is the same as my original right-hand side b.

2.4 Pivot variables <-> Range(A)

The method we just used to make a particular solution has a really nice, if more theoretical, interpretation. Take a look at the original linear system, written as a linear combination of column vectors:

x1 a1 + x2 a2 + x3 a3 + x4 a4 + x5 a5 + x6 a6 = b,

where a1, ..., a6 are the columns of A. When I plug in x3 = x6 = 0, I'm just throwing out terms 3 and 6 on the left side:

x1 a1 + x2 a2 + x4 a4 + x5 a5 = b.

As I discovered from the reduced system (2), there's exactly one set of values for x1, x2, x4, and x5 that makes this equation true. So the right side b is a unique linear combination of columns 1, 2, 4, and 5.

The same thing happens regardless of the right-hand side of the equation, just as long as the system is solvable. Whenever b is in Range(A), there is exactly one way to write it as a linear combination of the columns corresponding to x1, x2, x4, and x5. This tells me that those four columns form a basis for Range(A). This is true more generally, as stated in this shiny box:

Fact. The columns of A corresponding to the pivot variables in any system Ax = b form a basis for Range(A).

2.5 Finding Nullspace(A) with free variables

While the pivot variables help me find a basis for Range(A), the free variables can help me get a basis for Nullspace(A). The process is a little trickier, but it's not so bad. All I'm really doing is building solutions to Ax = 0, following my usual prescription for solving Ax = b with the pivot variables and free variables. The trick is to choose the
free variables in a way that makes sure these solutions are linearly independent, and to make my life as easy as possible. Pretty much, I just set one of the free variables to 1, and all the others to zero. In my example, this lets me build two vectors:

Vector 1: Set x3 = 1 and x6 = 0.
Vector 2: Set x3 = 0 and x6 = 1.

I'll walk through this for Vector 1; you can try it out yourself for Vector 2.

First comes an observation that will save me some work. I'm dealing with the system Ax = 0 now, which means that my augmented matrix is [A | 0]. When I reduce this, I'll get [Aech | 0], where Aech is the exact same thing I got when reducing [A | b]. This is because all of the steps in Gaussian elimination are determined by A itself; the augmented part of the matrix just comes along for the ride. So I can skip a few steps, including the part where I threw away the row of zeros. The augmented matrix after reducing Ax = 0 (which isn't so important to enter in Matlab) is

[ 1   1.0000    0.3333    0.2667   -1.3333    0.0667 | 0 ]
[ 0   1.0000   -0.3333   -0.6314    0.8627   -0.1333 | 0 ]
[ 0        0         0    1.0000   -2.1658   -0.0553 | 0 ]
[ 0        0         0         0    1.0000    0.3333 | 0 ]   (3)

As a linear combination of columns, this is

x1 [1; 0; 0; 0] + x2 [1.0000; 1.0000; 0; 0] + x3 [0.3333; -0.3333; 0; 0] + x4 [0.2667; -0.6314; 1.0000; 0] + x5 [-1.3333; 0.8627; -2.1658; 1.0000] + x6 [0.0667; -0.1333; -0.0553; 0.3333] = [0; 0; 0; 0].   (4)

For Vector 1, I'm setting x3 = 1 and x6 = 0, so this boils down to

x1 [1; 0; 0; 0] + x2 [1.0000; 1.0000; 0; 0] + [0.3333; -0.3333; 0; 0] + x4 [0.2667; -0.6314; 1.0000; 0] + x5 [-1.3333; 0.8627; -2.1658; 1.0000] = [0; 0; 0; 0].

To get this in my usual form, I need to move the one vector that no longer has a variable over to the right side:

x1 [1; 0; 0; 0] + x2 [1.0000; 1.0000; 0; 0] + x4 [0.2667; -0.6314; 1.0000; 0] + x5 [-1.3333; 0.8627; -2.1658; 1.0000] = [-0.3333; 0.3333; 0; 0].

The left side is now just the pivot columns, and the matrix for this is the same Aechpivot that I set up earlier. Meanwhile, the right side is column 3 of Aech, only negative: in Matlab-speak, it's -Aech(:,3). That means I can solve this system with

>> [Asolve, bsolve] = easyjordan(Aechpivot, -Aech(:,3))

Asolve =
     1     0     0     0
     0     1     0     0
     0     0     1     0
     0     0     0     1

bsolve =
   -0.6667
    0.3333
    0.0000
    0.0000

Just like last time, bsolve holds the values for x1, x2, x4, and x5 of my solution
vector. So I'll say

>> x = zeros(6,1);
>> x(pivotvars) = bsolve;

like last time. However, I have to be a little more careful this time around. Last time I did this, both the free variables were zero, whereas this time I've set x3 = 1. I need to enter this manually:

>> x(3) = 1

x =
   -0.6667
    0.3333
    1.0000
         0
         0
         0

To confirm that this is in Nullspace(A),

>> A*x

ans =
   1.0e-14 *
    0.2002
    0.0252
    0.0208
    0.0446
    0.0584

Even though this is nonzero, notice the factor of 1.0e-14 out front. This says that everything here is really small, and zero for all practical purposes.

Now try the same thing, but for Vector 2. If you've done it right, you should end up with this:

x =
   -0.3333
         0
         0
   -0.6667
   -0.3333
    1.0000

AMATH 352, LECTURE 7: JULY 9, 2008

1 Linear systems as matrix equations

Linear systems of equations are typically quite large. To talk about them mathematically, we really have to leave the number of equations and the number of variables unspecified. So let's say there are r equations and d variables. Then a general linear system looks like this:

a11 x1 + a12 x2 + a13 x3 + ... + a1d xd = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2d xd = b2
...
ar1 x1 + ar2 x2 + ar3 x3 + ... + ard xd = br   (1)

The numbers aij are the coefficients; their values are constants specified by the problem. Meanwhile, the xj are the variables. Given the values bi, our goal is to solve for the values of xj. It's worth pointing out that each term on the left side of the system looks like aij xj, with the second index of aij being the same as the index of xj.

This system can be written more compactly in matrix-vector notation as

[ a11 a12 a13 ... a1d ] [ x1 ]   [ b1 ]
[ a21 a22 a23 ... a2d ] [ x2 ]   [ b2 ]
[  .    .    .     .  ] [  . ] = [  . ]   (2)
[ ar1 ar2 ar3 ... ard ] [ xd ]   [ br ]

(the matrix is A, the vector of unknowns is x, and the right-hand side is b), or simply

Ax = b.   (3)

We've taken a look at matrix-vector multiplication in the context of linear transformations, but I haven't given it a general definition yet. There are two ways to look at it. One is to define the matrix multiplication Ax = b so that it gives you back the left-hand side of the full linear system. In that case, the i-th element of Ax, which I'll denote (Ax)_i, is

(Ax)_i = sum over k of a_ik x_k.   (4)

On the other hand, I can also define matrix-vector multiplication in a way that's
much closer to what we did for linear transformations. In this perspective, the matrix A is just a convenient way of organizing its columns. Then the product Ax gives me

Ax = x1 [a11; a21; ...; ar1] + x2 [a12; a22; ...; ar2] + ... + xd [a1d; a2d; ...; ard],   (5)

which is just a linear combination of those columns, weighted by the numbers stored in the vector x. It's pretty straightforward to check that the matrix equation Ax = b is then equivalent to the original system.

With this perspective, A is the matrix representation for a linear transformation that takes a vector x in Rd as input and returns a vector b in Rr as output. So A : Rd -> Rr. This is pretty similar to the way we looked at matrices when dealing with linear transformations, and that's definitely not an accident. Even if the equations in our linear system didn't come from any geometric notions, a lot of ideas for analyzing these systems are based on the idea that the multiplication Ax applies a linear transformation to the vector x.

The main idea is that a large vector space can be broken up into smaller vector spaces, called subspaces. Looking at these subspaces gives us a way to analyze how large linear systems behave.

2 Subspaces

2.1 Subspaces

A subspace is essentially a mini vector space that lives inside of a larger vector space. For example, consider all vectors of the form [a; b; 0] in R3. This lives inside of R3, but it behaves just like R2. We might be inclined to just call it R2. But vectors of the form [0; a; b] behave just the same, and so do vectors of the form [a; 0; c].

The first thing I want to do is pin down what I mean by "mini vector space." This gives the general definition of a subspace, which works for subspaces inside any larger vector space. While we look at linear systems, the larger vector space will always be Rn for some value of n. Later in the course, we'll look at other types of vector spaces, and this definition holds just as well then.

Definition. Let S be a set of vectors in a vector space V. S is a subspace of V if it is closed under linear
combinations. That is, if u and v are any two vectors in S, then αu + βv is also in S, for any scalars α and β.

There are a couple of things worth mentioning about subspaces. The first is that a subspace S always contains infinitely many vectors. In fact, it contains so many vectors that you can't even write them in a list. This is because if v ∈ S, then so is αv for any real number α, and there are too many real numbers to write down in a list. As a result, you have to use more general set notation when you talk about subspaces. For instance, you'd write the subspace consisting of vectors of the form [a; b; 0] like this:

S = { [a; b; 0] : a, b ∈ R }.   (6)

The second point is that a subspace S must always contain the zero vector. This is because if v ∈ S, then so is 0v = 0.

2.2 An example

Consider the set

S = { [v1; v2; v3] : 3v1 + v2 - 2v3 = 0 }.   (7)

You can think of this as all vectors that point to points in the plane 3x + y - 2z = 0. I'd like to show that S is a subspace. To do this, I'll take two vectors u and v in S. Then a generic linear combination of these vectors looks like

w = αu + βv = [αu1 + βv1; αu2 + βv2; αu3 + βv3].   (8)

To show that S is a subspace, I need to show that w ∈ S. To check that, I just need to verify that the components of w obey the same rule that defines the vectors in S:

3w1 + w2 - 2w3 = 3(αu1 + βv1) + (αu2 + βv2) - 2(αu3 + βv3)   (9)
               = α(3u1 + u2 - 2u3) + β(3v1 + v2 - 2v3)   (10)
               = α·0 + β·0   (11)
               = 0.   (12)

So w is, in fact, in S, and therefore S is a subspace of R3.

2.3 Span

If I plot the set S from the previous example, it looks like a plane. Now, R2 is my model for all things planar, and I could build that up by taking linear combinations of only two vectors, e1 and e2. So there's a pretty good chance that I can build up S out of just two vectors as well.

The term for this idea of "building up" a vector space or subspace out of just a few vectors is called span. Here's its definition.

Definition. Let F = {f1, f2, ..., fn} be a set of vectors in the vector space V. Then the span of F, denoted span(F), is the set of all linear combinations of the vectors in F. In set notation,

span(F) = { α1 f1 + α2 f2 + ... + αn fn :
α1, α2, ..., αn ∈ R }.   (13)

It's also fine to write span(f1, f2, ..., fn) instead of span(F).

Let's go back to my example set S. I'll pick two vectors out of it, just grabbing two of the simplest ones that satisfy 3v1 + v2 - 2v3 = 0:

f1 = [1; -3; 0],   f2 = [0; 2; 1].   (14)

I'd like to show that S = span(f1, f2). To do this, I need to show two things:

1. If I take a linear combination of f1 and f2, then I'm only going to get back vectors in S. (Otherwise, span(f1, f2) is bigger than the space S.)
2. Any vector in S is a linear combination of f1 and f2.

The first condition is easy to check. I know that f1 and f2 are vectors in S, since I picked them that way, and I already checked that S is a vector space. As a result, any linear combination of f1 and f2 is in S.

For the second condition, take any vector v ∈ S. My goal is to find α and β so that

[v1; v2; v3] = α [1; -3; 0] + β [0; 2; 1].   (15)

If this is going to work, then I'd better have α = v1, since f2 is zero in its first component. Likewise, I need β = v3. This takes care of the first and third components, so all that's left is to make sure that these choices work out alright for the second component. On the left-hand side I have v2, and on the right-hand side I have -3v1 + 2v3. Why should these be equal to each other? Well, since v ∈ S, I know that 3v1 + v2 - 2v3 = 0. I can rearrange this to v2 = -3v1 + 2v3. And that's exactly what I needed to show. So in general, if v is a vector in S, then

v = v1 f1 + v3 f2.   (16)

This takes care of the second condition, and shows that S = span(f1, f2).

The idea of span by itself, though, isn't quite enough. Instead of just taking two vectors to characterize S, say I take three:

f1 = [1; -3; 0],   f2 = [0; 2; 1],   f3 = [1; -1; 1].   (17)

Now look at v = [5; -19; -2], which is another vector in S. I can write this in terms of f1, f2, and f3 as

[5; -19; -2] = 5 [1; -3; 0] - 2 [0; 2; 1] + 0 [1; -1; 1].   (18)

But I also have

[5; -19; -2] = 6 [1; -3; 0] - [0; 2; 1] - [1; -1; 1].   (19)

Or

[5; -19; -2] = (7/3) [1; -3; 0] - (14/3) [0; 2; 1] + (8/3) [1; -1; 1].   (20)

I could do this as many ways as I want. So clearly, adding f3 to my list gives me more information than I need. If I'm dealing with planes in R3, I can probably feel my way
through this intuitively. The trick is figuring out how to describe what I'm seeing here in a way that will work for Rn of any size, when I don't have so much geometrical intuition. The next section addresses the sort of redundant information that I just saw here.

2.4 Linear independence / Linear dependence

The ideas of linear independence and linear dependence have to do with using the smallest number of vectors possible to build a vector space. If a set of vectors is linearly independent, then it doesn't carry any redundant information (for example, the set {f1, f2} from the previous section). On the other hand, a set of vectors that is linearly dependent does carry redundant information, like the set {f1, f2, f3}. Here are the technical definitions of these terms.

Definition. The set F = {f1, f2, ..., fn} is linearly dependent if some vector in F is a linear combination of the others. This is equivalent to saying that the equation

α1 f1 + α2 f2 + ... + αn fn = 0   (21)

may be satisfied with the scalars αi not all zero.

Definition. The set F = {f1, f2, ..., fn} is linearly independent if no vector in F is a linear combination of the other vectors. This condition is most convenient to deal with as follows: the equation

α1 f1 + α2 f2 + ... + αn fn = 0   (22)

is only satisfied when α1 = α2 = ... = αn = 0.

These definitions come with some touchy points. First off, for linear dependence, it probably seems sort of weird to deal with the equation α1 f1 + ... + αn fn = 0, rather than the more geometric idea that one vector is a linear combination of the others. Mostly, it's to spare the hassle of specifying which vector is a linear combination of the others, which is all kinds of irritating.

Second, the condition for linear independence probably seems a bit weirder still. Pretty much, this amounts to saying "F is linearly independent if it is not linearly dependent."

Third, if you're checking for linear independence, you can't just look at the last vector in the set. It's totally possible that fn isn't a linear combination of the vectors that come before it, but the
set is still linearly dependent. For instance, take

F = { [1; 2], [2; 4], [0; 1] }.   (24)

The last vector isn't a linear combination of the first two (they only span a line), but [2; 4] = 2 [1; 2], so the set is linearly dependent anyway.

Here's how all of this plays out for my example. First, consider the set {f1, f2, f3}. This is linearly dependent because

[1; -1; 1] = [1; -3; 0] + [0; 2; 1], that is, f3 = f1 + f2.   (25)

Or, if I wrote this more in the vein of (21),

[1; -3; 0] + [0; 2; 1] - [1; -1; 1] = [0; 0; 0].   (26)

Either way works just as well.

On the other hand, the set {f1, f2} is linearly independent. In this case, I've only got two vectors, and one can only be a linear combination of the other if they're scalar multiples of each other. This reasoning is pretty particular to sets of two vectors, though. To do this in a more general way, I'd set up the equation

α1 [1; -3; 0] + α2 [0; 2; 1] = [0; 0; 0]   (27)

and show that this is only true if α1 = α2 = 0.

3 Next time

The next lecture starts off with basis and dimension, which is what these ideas of subspaces, span, and linear independence are building up to. Then we'll take a look at the null space and the range of a linear transformation. These are subspaces that are an important part of analyzing what a linear transformation does.

AMATH 352, LECTURE 6: JULY 7, 2008

1 Introduction to linear systems

Linear systems come from an extremely wide variety of topics. To get a feel for where they come from and what they look like, let's take a look at a few examples. It turns out that linear systems of equations come in three varieties: those with infinitely many solutions, those with exactly one solution, and those with no solution. We'll look at them in that order.

1.1 Example 1

The chemical toluene, C7H8, reacts with nitric acid, HNO3, to produce two products. One is C7H5O6N3, and the other is water, H2O. Offhand, though, it's not easy to tell how to balance the reaction. And if I'm going to mix these things together, I'd better be really careful to use the right proportions, since C7H5O6N3 is better known as TNT. The best I can do to begin with is to write down the chemical equation for this reaction with the number of each molecule undetermined:

x C7H8 + y HNO3 -> z C7H5O6N3 + w H2O   (1)

Now I balance the number of C molecules to
get

7x = 7z.   (2)

Balancing the number of H molecules gives me

8x + y = 5z + 2w.   (3)

And balancing the N's and the O's, I get

y = 3z,   (4)
3y = 6z + w.   (5)

Altogether, I have 4 equations. Since each equation only involves adding and scaling the variables, this is called a linear system. It turns out that this particular system has infinitely many solutions, which we'll see in just a little bit.

1.2 Example 2

The next example comes from a combined effort to measure the height of my kitchen counter and gravitational acceleration. If an object falls from height H, its height at time t is

y(t) = H - (1/2) g t^2.   (6)

The first thing I do is measure the time it takes for an object falling from my dining room table to hit the ground. My dining room table is about 2.48 feet tall, and the time I measure is 0.403 seconds. This gives me one equation,

2.48 - (1/2) g (0.403)^2 = 0.   (7)

Then I measure the time it takes the same object to fall from my kitchen counter to the ground. I'll call the height of my counter H. I measure that the object takes 0.482 seconds to fall, so I get the equation

H - (1/2) g (0.482)^2 = 0.   (8)

Now I'll just rewrite these together, with the numbers multiplied out:

2.48 - 0.0812 g = 0,   (9)
H - 0.1162 g = 0.   (10)

This is a linear system of equations for the unknowns H and g. This system is easy to solve. The first equation gives me

g = 2.48 / 0.0812 ≈ 30.54.   (11)

Then I can plug this value into the second equation to get

H = 0.1162 · 30.54 ≈ 3.55.   (12)

So according to this, my kitchen counter is about 3.55 feet tall.

This is a linear system with exactly one solution. Later on, we'll look at two particular ways of handling these types of systems: eigenvalue decompositions and the QR algorithm.

1.3 Example 3

For this example, I'm going to add some information to Example 2. I know that g is supposed to be more like 32 ft/s^2, so my measurements probably aren't all that reliable, and I can only imagine how good a job this is doing of measuring the height of my counter.

I drop the same object off my counter again. This time, I measure that the fall takes 0.504 seconds. This gives me the equation H -
    H - (1/2) g (0.504)^2 = 0.    (13)

Cleaning this up and combining it with the previous two equations, I get

    2.48 - 0.0812 g = 0,    (14)
    H - 0.1162 g = 0,    (15)
    H - 0.1270 g = 0.    (16)

It's easy to see that the second and third equations contradict each other, mathematically speaking. So this system has no solutions. At the same time, since I've taken another measurement, I should be able to use my new information to get better results. I'd like to find some way to pick values of H and g that "fit" these equations as best as possible. The method of least squares provides a way to do this, and we'll look at that in a couple weeks.

2 Gaussian elimination

2.1 Solving Example 1

The linear systems in Examples 2 and 3 were really easy to solve, but the system in Example 1 is trickier. Now we're going to take a look at Gaussian elimination, which is a systematic way of solving any linear system. We'll take things as they come for this example, and then figure out how to think about this a bit more systematically. The system from Example 1 is

    7x = 7z,    (17)
    8x + y = 5z + 2w,    (18)
    y = 3z,    (19)
    3y = 6z + w.    (20)

The first thing I want to do is move everything involving x, y, z, and w over to the left side:

    7x - 7z = 0,
    8x + y - 5z - 2w = 0,
    y - 3z = 0,    (21)
    3y - 6z - w = 0.

Then I'll divide the first row by 7, so that x has a coefficient of 1:

    x - z = 0,
    8x + y - 5z - 2w = 0,
    y - 3z = 0,    (22)
    3y - 6z - w = 0.

Next, I'll subtract 8 times the first equation from the second equation to kill off the x there:

    x - z = 0,
    y + 3z - 2w = 0,
    y - 3z = 0,    (23)
    3y - 6z - w = 0.

There's only one x left now, which is the best I can do for the time being. Now I'll focus on y. The y in Equation 2 already has a coefficient of 1. So now I'll just subtract Equation 2 from Equation 3, and I'll subtract 3 × Equation 2 from Equation 4. This gives me

    x - z = 0,
    y + 3z - 2w = 0,
    -6z + 2w = 0,    (24)
    -15z + 5w = 0.

There's only one y left, so now I'll focus on z. I'll divide Equation 3 by -6 to give me

    x - z = 0,
    y + 3z - 2w = 0,
    z - (1/3)w = 0,    (25)
    -15z + 5w = 0.

Now I'll add 15 × Equation 3 to Equation 4. I end up with

    x - z = 0,
    y + 3z - 2w = 0,
    z - (1/3)w = 0,    (26)
    0 = 0.

This is the most I can do working down.
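These forward-elimination steps are mechanical enough to script. Here is a minimal pure-Python sketch (the notes work by hand and in MATLAB; the helper names scale and pivot are mine, chosen to match the operation names used later in these notes) that applies the same row operations to the coefficient rows and arrives at the same reduced system:

```python
# Coefficient rows for the system (21); columns are x, y, z, w.
A = [[7.0, 0.0, -7.0, 0.0],    # 7x - 7z = 0
     [8.0, 1.0, -5.0, -2.0],   # 8x + y - 5z - 2w = 0
     [0.0, 1.0, -3.0, 0.0],    # y - 3z = 0
     [0.0, 3.0, -6.0, -1.0]]   # 3y - 6z - w = 0

def scale(rows, i, c):
    """Multiply row i by the scalar c."""
    rows[i] = [c * a for a in rows[i]]

def pivot(rows, i, j, c):
    """Add c times row i to row j."""
    rows[j] = [aj + c * ai for ai, aj in zip(rows[i], rows[j])]

scale(A, 0, 1/7)     # row 1 becomes x - z = 0
pivot(A, 0, 1, -8)   # row 2 becomes y + 3z - 2w = 0
pivot(A, 1, 2, -1)   # row 3 becomes -6z + 2w = 0
pivot(A, 1, 3, -3)   # row 4 becomes -15z + 5w = 0
scale(A, 2, -1/6)    # row 3 becomes z - (1/3)w = 0
pivot(A, 2, 3, 15)   # row 4 becomes 0 = 0
# A now holds the reduced system (26).
```

The final row of zeros is the 0 = 0 equation, the signal that one variable (here w) will end up free.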
Now I'll work my way back up. I'll ignore the last equation, 0 = 0. Looking at the third equation, there's no way I can get specific numbers out of it. The best I can do is say that

    z = (1/3)w.    (27)

Now I plug this into Equation 2, which gives me

    y + w - 2w = 0  ⇒  y = w.    (28)

Finally, I plug z = (1/3)w into Equation 1 to get

    x = (1/3)w.    (29)

Altogether, I have

    x = (1/3)w,  y = w,  z = (1/3)w.    (30)

The value of w can be anything. All that matters is what x, y, and z are relative to w, which is how this system has infinitely many solutions.

There are a couple of things worth mentioning for this specific example. First, we're trying to balance the chemical equation

    x C7H8 + y HNO3 -> z C7H5O6N3 + w H2O.    (31)

Since x, y, z, and w represent numbers of molecules, they need to be positive integers. I could solve the linear system with non-integer values or negative values, though. For example,

    w = 1  ⇒  x = 1/3,  y = 1,  z = 1/3.    (32)

So the equations don't necessarily capture everything about the system we're interested in. Second, it makes a lot of sense that this system has infinitely many solutions. The smallest positive integer solution I can get is with w = 3, in which case the chemical equation is

    C7H8 + 3 HNO3 -> C7H5O6N3 + 3 H2O.    (33)

However, I could multiply all of these numbers by 2, 3, 4, etc., and I'd still have a perfectly reasonable reaction.

2.2 Gaussian operations

There are three types of operations used in Gaussian elimination:

1. Scaling: Multiply an equation by a scalar.
2. Pivoting: Add a multiple of one equation to another equation. This covers subtraction, among other things.
3. Swapping: Switch the order of two equations.

We didn't have to swap in the last example. The aim of swapping is to make sure that you're always moving left to right along your list of variables, and top to bottom along your list of equations. All of these operations are really simple things that you'd ordinarily do without thinking about it. The aim here is to be really explicit about our intuition, and to get this process one step closer to something we can make a computer do.

2.3 One last example

Let's look at one last example, just so
we see where swapping comes in, and so we can keep track of the names of the operations as we go. I'll also start calling my variables x1, x2, etc. This is kind of necessary for larger systems. Plus, in the last example there's the annoying little detail that w comes before z, and yet when I order the variables I'm sticking it after z. Anyhow, I'll start with the system

    3x3 = 9,
    x1 + 5x2 - 2x3 = 2,    (34)
    (1/3)x1 + 2x2 = 3.

Now, from here I'll just write the operations down as I go.

Swap: Swap Equations 1 and 3.

    (1/3)x1 + 2x2 = 3,
    x1 + 5x2 - 2x3 = 2,
    3x3 = 9.

Scale: Scale Equation 1 by a factor of 3.

    x1 + 6x2 = 9,
    x1 + 5x2 - 2x3 = 2,
    3x3 = 9.

Pivot: Add -1 × Equation 1 to Equation 2.

    x1 + 6x2 = 9,
    -x2 - 2x3 = -7,
    3x3 = 9.

Scale: Scale Equation 2 by a factor of -1.

    x1 + 6x2 = 9,
    x2 + 2x3 = 7,
    3x3 = 9.

Scale: Scale Equation 3 by a factor of 1/3.

    x1 + 6x2 = 9,
    x2 + 2x3 = 7,
    x3 = 3.

And now I work my way back up the system of equations. This process is called backsolving. The last equation gives me x3 = 3. Then I plug this into Equation 2 to get

    x2 + 6 = 7  ⇒  x2 = 1.

Finally, I plug the value of x2 into Equation 1 to get

    x1 + 6 = 9  ⇒  x1 = 3.

So altogether, my solution is x1 = 3, x2 = 1, x3 = 3.

3 Coming up

I think we'll put Gaussian elimination on the backburner until I can schedule one of our lectures at the computer lab. It's kind of dry to do examples on the board, but potentially fun to figure out how to code. That aside, Gaussian elimination is a purely algebraic way of solving linear systems. It doesn't involve a whole lot of thought about what's going on, and it's also not very efficient. In the lectures coming up, we'll start looking at linear systems of equations in terms of linear transformations. This will let us use geometric ideas to solve these systems, which in many cases leads to more efficient methods.

Applied Math 352 Notes by R. J. LeVeque

1 Introduction to Numerical Methods

Computers are quite stupid. They really don't know how to do much of anything, in terms of what is built into the hardware. As far as mathematics is concerned, they are basically capable only of simple arithmetic: addition, subtraction, multiplication, and division. Anything else that we
want to do mathematically using a computer has to be eventually reduced to a (typically very long) sequence of these simple operations.

Remember that everything on a computer is represented as a string of zeros and ones. Any data, such as numbers, photographs, music, etc., are stored as a large set of zeros and ones (binary digits, or bits for short). Computer storage consists of some sort of medium that has a huge number of locations, each of which can be set to a 0 value or a 1 value. How these are stored depends on what sort of medium it is. On a CD or DVD, for example, there are trillions of individual locations that can each be etched with a laser, making a tiny dent (representing 1) or not (representing 0).

Operations such as addition on a computer are performed by electronic components that take two electrical signals, each representing a sequence of 0's and 1's, and produce as output a third electrical signal that is somehow produced from the two inputs. If the two input signals represent two numbers expressed in binary notation, then it's possible to design an adder that takes these two signals and produces an output signal that is the binary representation of the sum of the two numbers. It seems reasonable that such a device can be built, since there is a fairly simple algorithm for combining the bits in the summands to get the bits in the sum: corresponding bits are added using the rules 0 + 0 = 0, 0 + 1 = 1, and 1 + 1 = 0 with a 1 carried to the next place. With a bit more work, hardware implementing multiplication can also be designed. Any other mathematical operations that we want to perform on a computer must somehow be reduced to these operations.

Consider the simple process of computing a square root, for example. You may think a computer (or even the simplest calculator) knows how to do this directly, since there is a square root button on most calculators and every computer language has some built-in function with a name like sqrt that returns a square root. But think about it: how would you design
an electrical circuit that takes the bits representing a number and massages them directly to get the bits representing its square root? There is no simple bit-by-bit algorithm the way there is for addition or multiplication. When you press the square root button on your calculator, or use sqrt in a computer program, you are actually invoking some other computer program that reduces the problem of computing a square root to a sequence of simple arithmetic operations (additions, multiplications) that are then done by the hardware. There are many ways that this might be done, and we'll look at a couple in a moment.

This is the sort of problem that we will be studying in this course: how do we take complicated mathematical problems and reduce them, ultimately, to a series of arithmetic operations that a binary computer can perform in a finite amount of time, and that will produce some reasonable approximation to the true solution of the problem? Of course we will mostly look at more challenging problems than computing the square root, and when we write programs we will freely use built-in functions such as sqrt (and hence build upon algorithms that are already available), but it's useful to consider the square root problem as an illustration of some of the issues that arise when trying to develop an approximation algorithm.

1.1 Computing a square root

How would you compute √19, for example, if you were only allowed to use +, -, ×, and ÷? Here are two possible approaches.

First approach: We know that 4^2 = 16 and 5^2 = 25, so √19 must be between 4 and 5. Let's try squaring the midpoint of this interval, 4.5. We find that 4.5^2 = 20.25, so this is too big. Hence √19 must be between 4 and 4.5. Again try the midpoint: 4.25^2 = 18.0625. This is too small, so √19 must lie between 4.25 and 4.5. Next we can try the midpoint of this interval and find that 4.375^2 = 19.1406..., so this is too big. And so on. This is rather tedious, but if we keep doing this long enough we will get down to a very small interval that contains √19. This is an iterative method. We have some
way of taking an approximation (an interval containing √19, in this case) and improving it at the next iteration to get a smaller interval. If we repeat this process many more times and print out the midpoint of the interval we have in each iteration, as a single number approximating √19, we would find the following:

    iteration    midpoint
      0    4.500000000000
      1    4.250000000000
      2    4.375000000000
      3    4.312500000000
      4    4.343750000000
      5    4.359375000000
      6    4.351562500000
      7    4.355468750000
      8    4.357421875000
      9    4.358398437500
     10    4.358886718750
     11    4.359130859375
     12    4.359008789062
     13    4.358947753906
     14    4.358917236328
     15    4.358901977539
     16    4.358894348145
     17    4.358898162842
     18    4.358900070190
     19    4.358899116516
     20    4.358898639679

For comparison, the correct value of √19 is 4.358898943540... Our iterative method is converging to the correct value, though somewhat slowly.

Notice an important point about this iterative method: we only used addition, multiplication, and division. Computing the midpoint of each interval [a, b] requires computing (a + b)/2, and then squaring this value can be done by multiplying it by itself. So we have achieved what we desired: an algorithm that can compute the square root of a number using only the basic arithmetic operations. This algorithm is called the bisection algorithm, and we will study it further and generalize it to solve much more complicated equations than x^2 = 19 later.

Second approach: Here's another way to approximate √19. Notice that 19/√19 = √19, i.e., dividing 19 by the desired value √19 gives √19 back again. Suppose, on the other hand, that we divide 19 by something larger than √19. The result will be smaller than √19. Conversely, if we divide 19 by something smaller than √19, the result will be larger than √19. This suggests the following algorithm. Take any number x0 to start (some approximation to √19 if we have one, e.g., x0 = 4). Then x0 and 19/x0 will bracket the desired value √19. Define x1 to be the average of these two values: x1 = (x0 + 19/x0)/2. Now repeat this process, in the next iteration setting x2 = (x1 + 19/x1)/2. And so on. It's not so clear that this process will
converge, but in fact it does, and amazingly quickly. Here's what we get in a few iterations if we start with x0 = 4:

    x0 = 4.00000000000000    19/x0 = 4.75000000000000
    x1 = 4.37500000000000    19/x1 = 4.34285714285714
    x2 = 4.35892857142857    19/x2 = 4.35886931585416
    x3 = 4.35889894364136    19/x3 = 4.35889894343998
    x4 = 4.35889894354067    19/x4 = 4.35889894354067
    x5 = 4.35889894354067    19/x5 = 4.35889894354067

All of the digits shown are correct after 4 iterations. This algorithm also only requires addition and division. It is much faster than the bisection algorithm, and also easier to program if you wanted to implement it. This is a special case of Newton's method for solving a nonlinear equation (x^2 - 19 = 0 in this case), and we will study this method more generally later too.

There are many issues related to these methods that will be addressed, such as: How do we get them started? Do they converge from any starting values, or only from a certain subset of possible starting values? When do we stop iterating? Can we bound the error somehow, so we know how good an approximation we have? How fast do they converge, and can we prove, for example, that Newton's method always converges faster than bisection?

1.2 Some more challenging problems

Listed below are a few mathematical problems that cannot be solved exactly by analytical means, but for which we can compute very accurate approximations to the exact solution using numerical methods that we will study in this class. One of the goals this quarter is to become proficient with the MATLAB programming language. This language has built-in functions that solve many of these problems, and some of these will also be introduced in this section. MATLAB will be introduced more systematically in the next set of notes, but to understand most of the examples below you only need to know a few things:

[Figure 1.1: The definite integral (1.1) interpreted as an area.]

- >> is the MATLAB prompt, so anything following this represents something you can type into MATLAB at the prompt.
- A simple function f(x) can be specified using the
syntax

    >> f = @(x) 3*x + sin(x)

for example. This defines f(x) = 3x + sin(x). This function can then be evaluated at a point, or passed into another function, as will be done in the examples below.

1.3 Computing a definite integral

Suppose we need to compute

    ∫ from 9 to 17 of cos(√x) log(x) dx.    (1.1)

In calculus you learned how to compute definite integrals by finding an anti-derivative of the integrand and evaluating it at the endpoints of the interval. However, this only works if you can find an anti-derivative. The function cos(√x) log(x) does not have an anti-derivative that can be written down in terms of standard elementary functions, so this approach fails. However, the definite integral (1.1) does have a well-defined value: it is the area under the curve y = cos(√x) log(x) for x between 9 and 17. This is sketched in Figure 1.1. The area of the shaded region is the integral, and the desired value is approximately

    ∫ from 9 to 17 of cos(√x) log(x) dx ≈ 6.952058905443586.    (1.2)

How might we approximate this? One approach is to split the interval [9, 17] into pieces of length h and construct a set of rectangles, as shown in Figure 1.2(a), whose base has length h and whose height is given by evaluating f(x) at one edge of the piece. The sum of the areas of all these rectangles will approximate the area under the curve. In Figure 1.2(a) only four rectangles have been used, and the approximation will not be very good, but by using smaller h and many more rectangles an arbitrarily good approximation can be computed; in fact, you probably saw the definite integral defined in calculus as the limit as h -> 0 of this approximation.

[Figure 1.2: (a) The definite integral (1.1) approximated by the areas of 4 rectangles. (b) The definite integral (1.1) approximated by the areas of 4 trapezoids.]

Making h smaller (using more rectangles) requires more work: to compute the area of each rectangle we need to evaluate f(x) at one point, and so for every rectangle we require one function evaluation of the integrand. For some problems it can be quite expensive (i.e., require quite a bit of computer time) to evaluate f at each point. So
we would like to develop methods that compute more accurate approximations with fewer function evaluations. Figure 1.2(b) shows a better approximation, obtained by approximating the area by a set of trapezoids instead of rectangles. The number of function evaluations is nearly the same (one more, at the right end of the interval), but the approximation obtained is typically much better. For the very coarse approximations shown in Figure 1.2, the rectangle and trapezoid approximations give

    rectangle: 6.516622749765546e+00
    trapezoid: 6.832431699945039e+00

Increasing the number of intervals improves the approximations. For example, if we subdivide [9, 17] into 1000 subintervals instead of only 4, the rectangle and trapezoid approximations are

    rectangle: 6.950792491769874e+00
    trapezoid: 6.952056992070894e+00

In MATLAB there are a number of different built-in functions to approximate the values of integrals by numerical methods. This procedure is often called quadrature (motivated by the idea of replacing the area by a set of quadrilaterals, such as rectangles or trapezoids), and the MATLAB routines have names like quad, quadl, quadv, etc. These implement various more sophisticated approaches to estimating definite integrals, and we'll look more closely at them later. They are very simple to use:

    >> f = @(x) cos(sqrt(x)).*log(x);
    >> integral = quad(f, 9, 17)
    integral =
        6.9521

The last two lines are the output MATLAB produces. By default, MATLAB only prints out a few digits of the value it has computed and stored. In this class we will often want to examine more digits, and we can do this by setting the format appropriately in MATLAB:

    >> format long e
    >> integral
    integral =
        6.952058938605557e+00

Comparing this with the better approximation (1.2), we see that about 8 digits have been correctly computed. Moreover, it's possible to have quad report how many function evaluations it performed to obtain this estimate, and for this problem it evaluated the function only 21 times to obtain this estimate, which is better than either the rectangle or
trapezoid approximations achieved with 1000 function evaluations. Later we will study the type of method used in quad.

1.4 Solving a nonlinear equation

We've already seen one example of this: solving the equation x^2 - 19 = 0 for the square root of 19. Here's another example: find x for which sin(x) = exp(x), or equivalently, find an x for which f(x) = sin(x) - exp(x) has the value 0. This is called a zero (or a root) of the function. Roots can be computed using a variation of the bisection algorithm or Newton's method, used above for the square root procedure. In MATLAB this can be solved using the built-in function fzero:

    >> f = @(x) sin(x) - exp(x);
    >> x = fzero(f, 0)
    x =
        -3.183063011933363e+00

We can check that this is nearly a zero of the function by evaluating f at this point:

    >> f(x)
    ans =
        2.151057110211241e-16

The value is about 2 × 10^-16, very close to zero. Note that the function fzero requires an initial guess for the solution as well as the function f we are applying it to. Here we have used the initial guess x0 = 0. Choosing a different guess can lead to a different solution in cases where the function has more than one zero, as here, where the equation has infinitely many solutions.

1.5 Solving a linear system of equations

Suppose we want to find x1, x2, and x3 so that

    3x1 + 4x2 = 3,
    2x1 + x2 + 5x3 = -5,    (1.3)
    7x1 + 8x2 - 2x3 = 7.

This is a linear system of 3 equations in 3 unknowns. If you have studied linear algebra before, you know that this system can be written in matrix-vector form as Ax = b, where

    A = [3  4   0       x = [x1       b = [ 3
         2  1   5            x2            -5
         7  8  -2],          x3],           7],    (1.4)

and that the solution can be computed exactly by performing Gaussian elimination on this system. Doing so, you should find that x1 = -13/15, x2 = 7/5, x3 = -14/15. This problem is easily solved in MATLAB by using the backslash operator:

    >> A = [3 4 0; 2 1 5; 7 8 -2]
    A =
         3     4     0
         2     1     5
         7     8    -2
    >> b = [3; -5; 7]
    b =
         3
        -5
         7
    >> x = A\b
    x =
        -8.666666666666666e-01
         1.400000000000000e+00
        -9.333333333333335e-01

This is a good approximation to the true solution. The errors in the 16th digit are due to the fact that the computer uses finite precision arithmetic: each number is
stored using only a finite number of bits, and there is no way to exactly store numbers like -13/15 or -14/15. For such a small system of equations, Gaussian elimination can be done by hand and the exact solution obtained. However, in many practical problems it is necessary to solve linear systems of equations with thousands or millions of equations or unknowns, and then it is necessary to use a computer implementation of Gaussian elimination. Solving such a system requires doing billions or trillions of arithmetic operations, and one might worry about several things, such as:

- How long will this take on a computer? Will I live long enough to see the answer?
- Since computers store only approximations of real numbers, and computer arithmetic is not exact, do the errors made in the process of doing Gaussian elimination accumulate to give such a large error that the result obtained is meaningless?

We will consider both these questions, and others, in detail later.

1.6 Finding the eigenvalues of a matrix

If you don't know what eigenvalues are, don't worry; we will learn what they are and how they are used in various contexts. A 3 × 3 matrix like the matrix A in (1.4) has 3 eigenvalues. In general these cannot be found exactly, but there are many approaches to approximating them numerically. In MATLAB we can find good approximations using the eig function:

    >> lambda = eig(A)
    lambda =
         8.582575694955835e+00
        -5.825756949558420e-01
        -5.999999999999998e+00

1.7 Solving differential equations

If you have taken a course on ordinary differential equations (like Math 307 or AMath 351), then you learned many techniques for solving particular differential equations analytically. Already in calculus you should have seen some differential equations, and know, for example, that the solution to y'(t) = 3y(t), y(0) = 2 is y(t) = 2e^{3t}. Many differential equations that arise in practice cannot be solved exactly. Instead we can compute approximations to the solution at a large set of points using a numerical method. For example, the differential equation
    y'(t) = cos(t y(t)) √(1 + y(t)),  y(0) = 2,    (1.5)

cannot be solved exactly, but we can produce a plot of an approximate solution over the interval 0 ≤ t ≤ 4, for example, by

    % plotODEsolution.m
    f = @(t,y) cos(t*y) * sqrt(1+y);
    y0 = 2;
    ODEsolution = ode23(f, [0 4], y0);
    t = linspace(0, 4, 100);
    y = deval(ODEsolution, t);
    plot(t, y)

This gives the plot shown in Figure 1.3. These commands could all be typed in at the prompt. However, for things like this it is much better to create an m-file script. This is simply a text file with extension .m that contains the commands to be executed. Typing the name of this file (without the .m) at the prompt causes this sequence of commands to be executed. So if the above commands were in the file plotODEsolution.m, then typing

    >> plotODEsolution

[Figure 1.3: Approximate solution to the differential equation (1.5).]

would result in the plot shown in the figure. There are several advantages to using m-files: if you make a mistake in one line you can easily correct it and re-execute the whole sequence; you can modify it later to do something slightly different; you can print it out and turn it in with your homework. In the script given above, and for later examples in the notes, if the first line is a comment of the form % filename.m, then the script with this name can be found on the class webpage in the mfiles directory. You can download it and try it out, or modify it.

1.8 Why study numerical methods?

We will study numerical methods for solving these problems, and several others, in detail in this course. You may wonder why we should study them, since MATLAB has built-in routines to solve all these problems. There are several reasons:

- Numerical methods such as those implemented in the MATLAB functions used above do not always give the right answer, or even anything close, and sometimes they don't produce any sort of error message but just give a plausible but wrong answer. It's important to understand this, and to learn something about the methods used, when they can be trusted, and when to expect difficulties. We will see many examples of this as we go along and
attempt to analyze them.

- Numerical methods can often be analyzed mathematically, and this analysis used to derive error bounds on how accurate an approximate solution is. This is often an important step in gaining trust in something we have computed. So we will not only learn how to derive and implement numerical methods, we will also learn how to analyze them and estimate the error in the results they produce.

- For more complicated problems than the ones considered in this section, it is often necessary to develop a new numerical method, or at least adapt a well-known method to a new context. In order to do this effectively, it is necessary to know a lot about existing methods for standard problems.

- Finally, it's lots of fun to invent, analyze, and test new methods. New numerical methods are constantly being invented and analyzed; there are dozens of technical journals and thousands of books devoted to this topic. Numerical methods are invented and studied by mathematicians who want to analyze the properties of methods (prove they converge, provide error estimates, etc.) and by scientists and engineers who want to solve real-world problems in many different fields.

Historically, there have been two main sides to science: theory and experiments. Recently a third major discipline has been added to this: computational science, in which computations are used to test out theories and compare to experiments. To do so, the theory is usually expressed in the form of mathematical equations that usually cannot be solved exactly. The rapid increase in computing power and speed has made it possible to do amazing computations that were inconceivable even a few years ago. But computers are still as stupid as ever, and can only do basic arithmetic (incredibly quickly). Numerical methods for solving more complicated mathematical problems lie at the heart of this exciting development.

AMATH 352, LECTURE 18, AUGUST 6, 2008

1 Recap of orthonormalization

A while back, we looked at the process of orthonormalization.
If v1, v2, ..., vn is any set of linearly independent vectors, then orthonormalization converts them to an orthonormal set of vectors q1, q2, ..., qn that spans the same space. The set q1, q2, ..., qn being orthonormal means two things:

1. The q's are pairwise orthogonal. That is,

    qi · qj = 0  for i ≠ j.    (1)

2. Each of the q's is normalized to length 1. That is,

    ||qi|| = 1  for i = 1, 2, ..., n.    (2)

The process of orthonormalization involves two steps: orthogonalization and normalization. The precise details will show up in our Matlab code a little later on, so for now I'll just write down the simple version:

1. Normalize v1 to length 1 to obtain q1.
2. For k = 2, 3, ..., n:
   (a) Orthogonalize vk to q1, q2, ..., q(k-1).
   (b) Normalize the outcome of the last step to length 1 to obtain qk.

The intuitive idea behind this process is to simplify v1, v2, ..., vn down to a set of distinct directions. Here "distinct" means orthogonal. Meanwhile, "direction" is in the sense that a vector has a magnitude and a direction, but once the length is scaled to 1, the only interesting part left is the direction.

2 QR factorization: Definition and use

The goal of the QR factorization is to split a matrix A into two factors, which, as you might have guessed, are called Q and R:

    A = Q R,    (3)

where Q has orthonormal columns and R is upper triangular. The columns of Q are obtained by orthonormalizing the columns of A. Meanwhile, R essentially bookkeeps the steps necessary to rebuild A using the columns of Q. The name R comes from the fact that an upper triangular matrix is sometimes called "right triangular." Its shape is the exact same as the matrix U that we saw in the LU factorization.

2.1 Why bother?

Our goal is to solve linear systems of the form

    Ax = b.    (4)

So for this factorization to be worth our while, the system

    QRx = b    (5)

should be easier to solve. The trick comes from a particularly awesome fact about matrices with orthonormal columns. Namely, Q'Q = I, where Q' is the transpose of Q, obtained by flipping its columns to rows. (The notation Q^T is more typical, but I'm going with Q' to match Matlab.) Take this on faith for the time being;
we'll look at it in more depth in the next lecture, when we deal with least squares problems. With this trick in hand, I'll multiply (5) on the left by Q':

    Q'(QRx) = Q'b.    (6)

Since Q'Q = I, the left side simplifies quite nicely, and I'm left with

    Rx = Q'b.    (7)

From here, R is upper triangular. So this system can be solved with back substitution, which is a bunch of straightforward arithmetic.

2.2 An example of Q'Q = I

Anybody in their right mind should be skeptical of this Q'Q = I business. It's convenient. Almost too convenient. As it happens, the easiest way in Matlab to get a matrix Q with orthonormal columns is to QR-factorize something. So use these commands:

    >> rand('twister', 2002)
    >> A = rand(6,6);
    >> [Q,R] = qr(A);
    >> Q
    Q =
        0.5308   0.7231   0.1257   0.4141   0.0811   0.0401
        0.2411   0.2090   0.8467   0.2845   0.2454   0.2005
        0.4248   0.4860   0.2440   0.3657   0.5871   0.2129
        0.4172   0.3751   0.1229   0.0588   0.3613   0.7322
        0.5467   0.0183   0.0338   0.7215   0.0947   0.4124
        0.0824   0.2372   0.4376   0.2998   0.6701   0.4545

You might not know it by looking at it, but this matrix has orthonormal columns. And

    >> Q'*Q
    ans =
        1.0000   0.0000   0.0000   0.0000   0.0000   0.0000
        0.0000   1.0000   0.0000   0.0000   0.0000   0.0000
        0.0000   0.0000   1.0000   0.0000   0.0000   0.0000
        0.0000   0.0000   0.0000   1.0000   0.0000   0.0000
        0.0000   0.0000   0.0000   0.0000   1.0000   0.0000
        0.0000   0.0000   0.0000   0.0000   0.0000   1.0000

Everything on the diagonal is 1, or close enough. And everything off the diagonal is really tiny, approximately 0. Up to rounding error, this is the identity matrix.

3 Side note: Dot products in Matlab

In Matlab, dot products are really easy to compute directly. If u and v are two column vectors of the same length, then u'*v is their dot product. So u gets transposed, and after that it's just standard matrix multiplication. While working with the QR factorization, the column vectors we're interested in are the columns of Q. We can use the dot product to check the orthonormality of the columns. For example,

    >> Q(:,1)'*Q(:,1)
    ans =
        1.0000

indicates that the first column has length 1. This calculation actually gives its length squared, but if the length squared is 1, then so is the length.
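These checks can be scripted in any language. Here is a small pure-Python sketch (not from the notes, which use MATLAB throughout) that orthonormalizes three vectors with the Gram-Schmidt steps from Section 1 and then confirms both orthonormality properties, qi · qj = 0 for i ≠ j and ||qi|| = 1:

```python
import math

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (classical Gram-Schmidt)."""
    q = []
    for v in vectors:
        w = list(v)
        for u in q:
            r = sum(ui * wi for ui, wi in zip(u, w))   # dot product u . w
            w = [wi - r * ui for ui, wi in zip(u, w)]  # orthogonalize against u
        length = math.sqrt(sum(wi * wi for wi in w))
        q.append([wi / length for wi in w])            # normalize to length 1
    return q

Q = gram_schmidt([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])

# Every pairwise dot product: 1 on the diagonal, ~0 off it.
dots = [[sum(a * b for a, b in zip(qi, qj)) for qj in Q] for qi in Q]
```

Each entry dots[i][j] is qi · qj, so dots plays the role of Q'*Q from the example above: the identity matrix, up to rounding error.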
Meanwhile,

    >> Q(:,1)'*Q(:,5)
    ans =
        2.7756e-17

indicates that Columns 1 and 5 are orthogonal, at least up to rounding error.

4 QR factorization: How to find it

Like the LU factorization, the QR factorization is probably easiest to think of as modifying the columns of A in sequence. The matrix Q holds these columns as they are modified to become orthonormal. Meanwhile, the matrix R holds the information for how to reconstruct A using the columns of Q, by means of the product QR. Here I'm going to walk through the steps of making an m-file to perform a QR factorization of a particular matrix. The finished product is QRplayground.m, posted alongside this lecture. Start off with the following code:

    % Set up the matrix
    rand('twister', 77777)
    A = 5*rand(4,4);

    % Initialize Q and R
    Q = A;
    R = eye(4);

    % Edit Q and R here

    % Print
    clc
    Q
    R
    Q*R
    Q*R - A

Everything we're going to do from here on goes in the "% Edit Q and R here" section. Updating the columns of Q is just a matter of performing Gram-Schmidt orthonormalization on its columns. Starting off with Column 1, there's no orthogonalization to do, since there are no previous vectors in the list. So all that happens is the normalization:

    % Column 1
    % Normalize
    R(1,1) = norm(Q(:,1));
    Q(:,1) = Q(:,1)/R(1,1);

The norm command is a shorter way of computing the length of a vector than writing out sqrt(Q(:,1)'*Q(:,1)). Next comes Column 2. First, this has to be orthogonalized to Column 1. When we talked about Gram-Schmidt, orthogonalizing v2 to q1 went like this:

    E = v2 - (q1 · v2) q1.    (8)

The vector E on the left is the orthogonalized part of v2. Rather than store E explicitly, we'll just overwrite the column of Q that we're working on, namely Q(:,2):

    % Column 2
    % Orthogonalize to the first column
    R(1,2) = Q(:,1)'*Q(:,2);
    Q(:,2) = Q(:,2) - Q(:,1)*R(1,2);

Now Column 2 is normalized:

    % Normalize
    R(2,2) = norm(Q(:,2));
    Q(:,2) = Q(:,2)/R(2,2);

Next comes Column 3. First, this is orthogonalized to Columns 1 and 2. With the LU decomposition, I had to be sure that I made modifications with respect to Column 1 before I did anything with Column 2. Conveniently, I can do them both at once with
the QR factorization:

    % Column 3
    % Orthogonalize to Columns 1 and 2
    R(1,3) = Q(:,1)'*Q(:,3);
    R(2,3) = Q(:,2)'*Q(:,3);
    Q(:,3) = Q(:,3) - Q(:,1)*R(1,3) - Q(:,2)*R(2,3);

And now Column 3 is normalized:

    % Normalize
    R(3,3) = norm(Q(:,3));
    Q(:,3) = Q(:,3)/R(3,3);

Finally, I deal with Column 4 in the same way as Column 3:

    % Column 4
    % Orthogonalize to Columns 1, 2, and 3
    R(1,4) = Q(:,1)'*Q(:,4);
    R(2,4) = Q(:,2)'*Q(:,4);
    R(3,4) = Q(:,3)'*Q(:,4);
    Q(:,4) = Q(:,4) - Q(:,1)*R(1,4) - Q(:,2)*R(2,4) - Q(:,3)*R(3,4);

    % Normalize
    R(4,4) = norm(Q(:,4));
    Q(:,4) = Q(:,4)/R(4,4);

4.1 Comments

Keeping track of what's going on with Q is probably easier than R, since a full column of Q gets modified all at once. There are a few tricks to keeping track of the elements of R, though:

- The element R(i,j) always reflects how much Column i of Q contributes to Column j of the product QR. For example, R(1,4) indicates how much Column 1 of Q contributes to Column 4 of the product.
- The diagonal elements of R correspond to rescaling of the columns of Q. For example, when Column 2 is divided by its own length, that length is recorded in R(2,2).
- For the off-diagonal elements of R, the indices correspond to the vectors that show up in the dot product. For example, R(2,4) = Q(:,2)'*Q(:,4). The order of the vectors on the right side can be switched without changing the answer, but we might as well leave them in the order that the indices appear on the left.

4.2 Coding shortcuts

If you were writing a program to compute a QR factorization, you'd want to make some of the longer calculations more systematic. For example, let's take a look at the orthogonalization of Column 4:

    % Column 4
    % Orthogonalize to Columns 1, 2, and 3
    R(1,4) = Q(:,1)'*Q(:,4);
    R(2,4) = Q(:,2)'*Q(:,4);
    R(3,4) = Q(:,3)'*Q(:,4);
    Q(:,4) = Q(:,4) - Q(:,1)*R(1,4) - Q(:,2)*R(2,4) - Q(:,3)*R(3,4);

Typically, you'd probably do something like this with a for loop:

    for i = 1:3
        R(i,4) = Q(:,i)'*Q(:,4);
        Q(:,4) = Q(:,4) - Q(:,i)*R(i,4);
    end

This orthogonalizes Column 4 to the first three columns in sequence. Using Matlab, there's another alternative that dodges the for loop. In fact, if you do serious coding in Matlab, you really want to do this. I can do all of my calculations using matrix operations, thanks to the : operator:

    R(1:3,4) = Q(:,1:3)'*Q(:,4);
    Q(:,4) = Q(:,4) - Q(:,1:3)*R(1:3,4);

It can be a little tricky to see how these operations are
equivalent to the original piece of code. It might be worth working through this in a bit more detail to see why. You can also think of this with the mnemonic that 1:3 shows up wherever 1, 2, and 3 show up in sequence in the original code. You have to be a little careful with this, but it works surprisingly well.

AMATH 352, LECTURE 22, AUGUST 15, 2008

1 Overview

In this lecture we'll take a look at using polynomial interpolation to approximate derivatives of a function. Say that f is the function I'm looking at, and x0 is the point where I'd like to approximate a derivative. For first-order derivatives, the simplest thing I can do to approximate f'(x0) comes from the definition of a derivative:

f'(x0) = lim (h → 0) [f(x0 + h) − f(x0)]/h.    (1)

This suggests that if h is small enough, I should have

f'(x0) ≈ [f(x0 + h) − f(x0)]/h.    (2)

However, this approach is a bit limiting. The most obvious problem is that I can't use this same method to approximate f''(x0), at least without being pretty sneaky. To approximate f''(x0) this way, I'd need to know f', which isn't necessarily the case.

Interpolating polynomials give me a way to get past this. Say I'm interested in approximating f''(x0), but only using the values of f(x) itself. Then I can pick several interpolating points near x0 and construct an interpolating polynomial p(x). Once this is done, I can approximate f''(x0) ≈ p''(x0), where p''(x0) is something I can find explicitly, since polynomials are easy to differentiate.

For this method to make sense, p needs to be high enough degree that p''(x) is not identically zero. So the smallest degree I can get by with is 2. As such, I'll use a quadratic polynomial. As an added bonus, I'll get another way to approximate f'(x0), which turns out to be better than the one I listed above.

2 Quadratic interpolation

For sake of ease, I'll choose my three interpolation points to be equally spaced: x0 − h, x0, and x0 + h. Conveniently, this turns out to be the best way I could choose three points, too. To form the interpolating polynomial p(x), I'll use Newton form. It turns out that it's most convenient algebraically to order
the points as x0 − h, x0 + h, and x0. I'd get the same answer either way, but there's less simplification to do in this case. This means p(x) takes the form

p(x) = a1 + (x − (x0 − h)) a2 + (x − (x0 − h))(x − (x0 + h)) a3.    (3)

Setting p(x0 − h) = f(x0 − h) gives me

a1 = f(x0 − h).    (4)

Next, I set p(x0 + h) = f(x0 + h) to get

a1 + ((x0 + h) − (x0 − h)) a2 = f(x0 + h).    (5)

Plugging in for a1 and simplifying gives me

a2 = [f(x0 + h) − f(x0 − h)]/(2h).    (6)

Finally, I set p(x0) = f(x0) to get

a1 + (x0 − (x0 − h)) a2 + (x0 − (x0 − h))(x0 − (x0 + h)) a3 = f(x0).    (7)

I can cancel out a bunch of the x0's, which gives me

a1 + h a2 − h² a3 = f(x0).    (8)

Then I plug in for a1 and a2 to get

f(x0 − h) + [f(x0 + h) − f(x0 − h)]/2 − h² a3 = f(x0).    (9)

After a little more algebra,

a3 = [f(x0 + h) − 2 f(x0) + f(x0 − h)]/(2h²).    (10)

This completely determines p(x). I could write it out in full, but that ends up being pretty long and doesn't do me a lot of good. Instead, I'll just remember its general form and the values of a1, a2, and a3.

3 Differentiation

Now I'll compute the first two derivatives of p(x), which will let me approximate f'(x0) and f''(x0). The values of a1, a2, and a3 don't actually affect the differentiation process, so I'll wait until the end to plug them in. I start with

p(x) = a1 + (x − (x0 − h)) a2 + (x − (x0 − h))(x − (x0 + h)) a3.    (11)

Taking one derivative,

p'(x) = a2 + [(x − (x0 − h)) + (x − (x0 + h))] a3    (12)
      = a2 + 2(x − x0) a3.    (13)

And taking one more,

p''(x) = 2 a3.    (14)

Now I can get the approximations I'm after. First,

f'(x0) ≈ p'(x0) = a2    (15)
       = [f(x0 + h) − f(x0 − h)]/(2h).    (16)

And second,

f''(x0) ≈ p''(x0) = 2 a3    (17)
        = [f(x0 + h) − 2 f(x0) + f(x0 − h)]/h².    (18)

It's worth mentioning that neither of the approximations (16) and (18) depends explicitly on p(x). The interpolating polynomial is purely an intermediate step, which is invisible at the end of the process.

4 Matlab code

The two approximation formulas (16) and (18) are really simple to use. This code will do the trick:

```matlab
% Pick a function
f = @(x) sin(x);
fp = @(x) cos(x);
fpp = @(x) -sin(x);

% Pick a point
x0 = 1;

% Set a value for h
h = .2;

% Approximate f'(x0) and f''(x0)
f1 = (f(x0+h) - f(x0-h))/(2*h);
f2 = (f(x0+h) - 2*f(x0) + f(x0-h))/h^2;

% Calculate the errors
err1 = abs(fp(x0) - f1)
err2 = abs(fpp(x0) - f2)
```

The last two lines tell me how far my approximations are from the actual values of f'(x0) and f''(x0). As is, the approximations are decent. If you reduce the size of h, even by just a factor of 2,
they get better a lot faster.

Another fun thing to do is use this to make full plots of f' and f''. In the code above, x0 can actually be a full set of values without changing the calculations. This code will plot our approximation of f' at 100 values from −π to π as a solid line, while plotting the actual f' as a dashed line. Note that h is pretty large this time. If you reduce it, the two graphs become identical to the eye very quickly.

```matlab
% Pick a function
f = @(x) sin(x);
fp = @(x) cos(x);
fpp = @(x) -sin(x);

% Pick a point
x0 = linspace(-pi,pi,100);

% Set a value for h
h = .8;

% Approximate f'(x0) and f''(x0)
f1 = (f(x0+h) - f(x0-h))/(2*h);
f2 = (f(x0+h) - 2*f(x0) + f(x0-h))/h^2;

% Plot the approximate and actual f'(x)
plot(x0,f1)
hold on
plot(x0,fp(x0),'--')
hold off
```

5 Accuracy

Eyeballing graphs is all well and good, but in practice it's better to have a quantitative way to measure how good these approximations are. The key to this is the Taylor series. The Taylor series for f(x), centered at x0, is

f(x) = f(x0) + f'(x0)(x − x0) + (1/2) f''(x0)(x − x0)² + (1/6) f'''(x0)(x − x0)³ + ⋯

If I use this to calculate f(x0 + h), I get

f(x0 + h) = f(x0) + f'(x0) h + (1/2) f''(x0) h² + (1/6) f'''(x0) h³ + O(h⁴).    (19)

The term I've marked O(h⁴) encompasses the terms involving h⁴, h⁵, h⁶, and so on. While the O notation can be given a precise definition, just take it to mean that the part labeled O(h⁴) "behaves like" h⁴ as h → 0. After all, when h is small, the h⁴ term will be much larger than the h⁵ term and all higher powers of h.

In a similar fashion, I can use the Taylor series to calculate f(x0 − h):

f(x0 − h) = f(x0) − f'(x0) h + (1/2) f''(x0) h² − (1/6) f'''(x0) h³ + O(h⁴).    (20)

Note that this is the same as the series for f(x0 + h), but with h replaced by −h. This changes the sign on every odd power of h.

Now I'll use these two expansions to gauge the accuracy of the formula

f'(x0) ≈ [f(x0 + h) − f(x0 − h)]/(2h).    (21)

First, in terms of my Taylor expansions,

f(x0 + h) − f(x0 − h) = 2 f'(x0) h + (1/3) f'''(x0) h³ + O(h⁴).    (22)

To isolate the f'(x0) on the right, I divide by 2h to get

[f(x0 + h) − f(x0 − h)]/(2h) = f'(x0) + (1/6) f'''(x0) h² + O(h³).    (23)

If h is small, then the first term in the error will dominate. So I'd expect that

error ≈ (1/6) f'''(x0) h².    (24)

In practice, though, I'd probably just say that

error = O(h²).    (25)
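This predicted rate is easy to check numerically. The course code above is Matlab; purely as a cross-check, here is the same experiment as a short pure-Python sketch (the helper name `centered_diff` is mine, not from the notes):

```python
import math

def centered_diff(f, x0, h):
    """Centered-difference approximation to f'(x0); formula (16), error O(h^2)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

# Reproduce the lecture's experiment: f = sin, x0 = 1, exact derivative cos(1).
x0 = 1.0
exact = math.cos(x0)
err = {h: abs(exact - centered_diff(math.sin, x0, h)) for h in (0.2, 0.1, 0.05)}

# Halving h should cut the error by roughly a factor of 4.
print(err[0.1] / err[0.2])   # roughly 0.25
print(err[0.05] / err[0.1])  # roughly 0.25
```

The printed ratios sit very close to 1/4, which is exactly the O(h²) behavior derived above.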
So the error in this approximation "behaves like" h² as h → 0. This says that if I reduce h by a factor of 2, I'd expect the error to reduce by a factor of 4.

By contrast, consider the approximation for f'(x0) based on the definition of the derivative:

f'(x0) ≈ [f(x0 + h) − f(x0)]/h.    (26)

If I do this same type of analysis, I get

[f(x0 + h) − f(x0)]/h = f'(x0) + (1/2) f''(x0) h + O(h²),    (27)

where everything after the f'(x0) is the error. So in this case,

error = O(h).    (28)

This says that if I reduce h by a factor of 2, the error in this approximation also should decrease by a factor of 2. This isn't nearly as good as the previous approximation, particularly given that the two require the same amount of effort to evaluate.

In a similar fashion, I can analyze the accuracy of my approximation to f''(x0):

f''(x0) ≈ [f(x0 + h) − 2 f(x0) + f(x0 − h)]/h².    (29)

First, by comparing Taylor expansions,

f(x0 + h) − 2 f(x0) + f(x0 − h) = f''(x0) h² + O(h⁴).    (30)

So when I divide by h², I get simply

[f(x0 + h) − 2 f(x0) + f(x0 − h)]/h² = f''(x0) + O(h²).    (31)

If I had kept another term in the Taylor expansion, then I'd be able to see the leading term in the error, which depends on f''''(x0). This would give me a more precise error estimate. However, O(h²) is nice in and of itself.

6 Error estimates in action

Go back to the first piece of code, which computed the errors in the derivative approximations for f(x) = sin(x) at x0 = 1. If you run this for h = .2 and h = .1 in succession, the errors in the approximation for f'(x0) are

```matlab
err1 =

    0.0036

err1 =

   9.0005e-04
```

Since h was reduced by a factor of 2, the error should have dropped by a factor of 4. And yes indeed,

```matlab
>> 9.0005e-4/.0036

ans =

    0.2500
```

The factor-of-4 estimate is pretty dead on. I think it's unusually good for sin(x), but it's still impressive how well it works for other functions. If you run this once more with h = .05, you'll get

```matlab
err1 =

   2.2510e-04
```

And checking how much the error has dropped this time,

```matlab
>> 2.2510e-4/9.0005e-4

ans =

    0.2501
```

Once again, almost exactly a factor of 4. You'll get similar results if you run these same tests on err2, since the approximation for f''(x0) is also accurate up to O(h²).

7 A bit of freak chance

If you were in lecture on Friday, then you saw my first example of errors in Matlab
crash and burn. At the time, I was looking at the error in approximating f'(x0) with [f(x0 + h) − f(x0 − h)]/(2h), using f(x) = 1/(1 + x²) and x0 = 1. In this case, halving h ended up multiplying the error by roughly 0.0625, or 1/16. Here's why. Earlier on, I found that

[f(x0 + h) − f(x0 − h)]/(2h) = f'(x0) + (1/6) f'''(x0) h² + O(h³),    (32)

where everything after the f'(x0) is the error. Now, if you hack through the algebra for f(x) = 1/(1 + x²), you'll find that

f'''(x) = (24x − 24x³)/(1 + x²)⁴.    (33)

So with my choice of x0 = 1, I've got f'''(x0) = 0. This means that the leading term in my error estimate disappears, and the next largest term takes over. As it happens, there's not actually an h³ term for this particular approximation. So what's left is the h⁴ term. That means that at this single point x0 = 1, the error is O(h⁴). Then reducing h by a factor of 2 reduces the error by a factor of 16. If you change x0 to a different point (other than 0 and −1), then the error is back to O(h²).

AMATH 352, LECTURE 11, JULY 18, 2008

1 Introduction

An orthonormal set of vectors is one in which the vectors are all perpendicular to each other, and they each have length one. More technically, here's the definition.

Definition. A set of vectors q1, q2, …, qn is orthonormal if the vectors are

1. Orthogonal to one another. So qi · qj = 0 whenever i ≠ j.
2. Normalized to length 1. So ||qi|| = 1 for all i.

If I go grab any old set of linearly independent vectors f1, f2, …, fn, the chances are pretty slim that it's orthonormal. However, it's possible to orthogonalize this set. Orthogonalization is a process by which I replace f1, f2, …, fn with an orthonormal set q1, q2, …, qn without changing the span. So if f1, f2, …, fn is the basis of some particular subspace (like the range of a linear transformation), then q1, q2, …, qn is a basis for the same space.

Before digging too deep into this, though, I'd like to spell out why orthonormalizing is worth the effort.

2 Why bother orthonormalizing?

2.1 Conceptually

The intuitive idea behind orthogonalizing is that I'm boiling the set f1, f2, …, fn down into a sequence of distinct directions. The orthogonality of the q's indicates that their directions are completely
different, and normalizing them to length 1 removes any information about scaling, so all that's left is the direction itself.

If there's only one vector f1 involved, this is just a matter of scaling it to have length 1. This is what's happening in the left panel of the figure below. The right panel is what happens if there's a second vector f2 involved.

[Figure: left panel, f1 rescaled to q1 (the "direction of q1"); right panel, f2 split into a piece along q1 and a "new direction" q2.]

I can construct f2 in two steps:

1. Move a certain distance in the direction of q1.
2. Move a certain distance in a direction orthogonal to q1.

When I take the second step of this process and scale it down to length 1, I'm left with a vector q2 that describes the only new direction introduced by f2.

2.2 Decomposing a vector into orthonormal pieces

Say that f1, f2, …, fn is a basis for a subspace S. If b is a vector in S, then it's possible to decompose b in terms of these basis vectors:

b = α1 f1 + α2 f2 + ⋯ + αn fn.    (3)

However, the scalars α1, α2, …, αn are really hard to compute. In general, you have to solve a full linear system to find them.

Now say that q1, q2, …, qn is a basis for the same subspace. I can also decompose b in terms of these:

b = β1 q1 + β2 q2 + ⋯ + βn qn.    (4)

Only, these new scalars β1, β2, …, βn are tons easier to find. If I take the dot product of both sides with a single qk (for any k = 1, 2, …, n), I get

b · qk = β1 q1 · qk + β2 q2 · qk + ⋯ + βn qn · qk.    (5)

However, since the q's are orthogonal to one another, every term on the right is zero except for the one with qk · qk. So (5) simplifies to

b · qk = βk qk · qk.    (6)

Now I use the fact that qk is normalized to length 1. This tells me that qk · qk = ||qk||² = 1. So (6) simplifies even further to

βk = b · qk.    (7)

This says that to find the scalar βk in the decomposition (4), all I need to do is evaluate b · qk. That's about as simple as it gets.

2.3 Easy linear systems

Suppose that Q is a matrix with orthonormal vectors as columns,

Q = [q1 q2 ⋯ qn].    (8)

Say that I'm trying to solve a linear system involving Q,

Qx = b.    (9)

If I break down the left side of this equation a little bit, I get

[q1 q2 ⋯ qn] (x1, x2, …, xn) = b    (10)
q1 x1 + q2 x2 + ⋯ + qn xn = b.    (11)

This is the exact same thing as
the decomposition formula (4). So to solve for each xk, all I do is take

xk = b · qk,  for k = 1, 2, …, n.    (12)

This is a lot easier than solving a typical linear system. While it's pretty rare to end up with a system like this in practice, it's still possible to exploit this trick to solve more general systems. This is the idea behind the QR algorithm, which we'll get to a little later on.

2.4 How close is b to a subspace S?

There's one last big reason behind orthonormalizing a set of vectors. Aside from answering a fairly intuitive geometric question, it's also the main idea behind least squares problems, which are used to fit curves to data.

Say that S is a subspace of Rⁿ of some size. Let b be a vector that's near S, but not quite in S. I might want to say how close b is to S. In low dimensions, this amounts to asking how close a point is to a given line or a given plane. It'd be nice to have an answer that works in higher dimensions, too.

If I have an orthonormal basis for S (which I'd usually get by orthonormalizing a basis that isn't so nice), this problem is pretty easy. Say that q1, q2, …, qn is my orthonormal basis. I can break b up into its components along the q's, plus one additional part E that's perpendicular to everything in S:

b = β1 q1 + β2 q2 + ⋯ + βn qn + E.    (13)

The E is for "error," in the sense that I'm trying to get as close to b as I can while staying within S. I don't know what the β's are offhand, but they're not so hard to find. Since E is orthogonal to everything in S, it's orthogonal to all the q's in particular. This means that E · qk = 0 for every k, which lets me use the same trick I did before:

b · qk = β1 q1 · qk + β2 q2 · qk + ⋯ + βn qn · qk + E · qk    (14)
       = βk qk · qk    (15)
       = βk.    (16)

Once I've found all the β's, I can get E by subtracting from b all the parts that lie along the q's:

E = b − β1 q1 − β2 q2 − ⋯ − βn qn.    (17)

This actually tells me two things:

- β1 q1 + β2 q2 + ⋯ + βn qn is the closest I can get to b while staying within S.
- ||E|| measures how far b is from the subspace S.

3 Gram-Schmidt Orthonormalization

Gram-Schmidt orthonormalization is a procedure where
you start with a linearly independent set of vectors f1, f2, …, fn. You work your way left to right through this set in n steps, so that after the kth step you have the set

q1, q2, …, qk, fk+1, …, fn    (18)

(the first k vectors orthonormalized, the rest unchanged). Each step except the first has three parts:

1. Orthogonalize to all the previous vectors.
2. Normalize to length 1.
3. Update the set, replacing fk with qk.

Let's see how this works, in all its gory detail.

3.1 The algorithm

Step 1.

1. Orthogonalize: There are no vectors before f1 in the list, so there's nothing to do this time around.

2. Normalize: Set

q1 = f1/||f1||.    (19)

With foresight to the QR algorithm, I'll store r11 = ||f1||, so that

f1 = q1 r11.    (20)

3. Update: Replace f1 with q1. So now my set of vectors is q1, f2, f3, …, fn.

Steps 2 through n.

I'll use k to denote the current step. (If I were programming this in Matlab, I'd be starting a for loop with the statement for k=2:n.) At the start of the kth step, my set of vectors is

q1, q2, …, qk−1, fk, fk+1, …, fn    (21)

(the first k − 1 vectors orthonormalized, the rest unchanged).

1. Orthogonalize: I need to orthogonalize fk to the vectors q1, q2, …, qk−1. This is actually done using the exact same method I mentioned for measuring how close a vector is to a subspace. I break fk up into k parts. The first k − 1 parts lie along q1, q2, …, qk−1, and the kth part is orthogonal to all these vectors:

fk = q1 r1k + q2 r2k + ⋯ + qk−1 rk−1,k + E.    (22)

The part I'm interested in is E. To find this, I first calculate the r's with a short loop:

for i = 1, …, k − 1:
    rik = fk · qi

Then E is just

E = fk − q1 r1k − q2 r2k − ⋯ − qk−1 rk−1,k.    (23)

2. Normalize: This is just scaling E to length 1:

qk = E/||E||.    (24)

Looking ahead to QR, I'll want to store rkk = ||E||.

3. Update: Finally, I replace fk with qk. My set of vectors now reads

q1, q2, …, qk, fk+1, …, fn    (25)

(the first k vectors orthonormalized, the rest unchanged).

3.2 Relationship to the QR algorithm

The QR algorithm, which we'll see pretty soon, is based on bookkeeping the relationship between the f's and the q's. This has a particularly nice structure:

f1 = q1 r11
f2 = q1 r12 + q2 r22
f3 = q1 r13 + q2 r23 + q3 r33
⋮
fn = q1 r1n + q2 r2n + ⋯ + qn rnn.    (26)

You can
probably see it above better than I can say it, but each fk only depends on q1, q2, …, qk.

4 An example

4.1 Orthonormalizing

The numbers involved in orthonormalization get nasty really fast. It's really more something to program than to do by hand. All the same, it's good to try out by hand, just to get a feel for how it works.

For this example, I'll take a subspace of R³:

S = {v ∈ R³ : 3v1 − v2 + 2v3 = 0}.    (31)

Without too much work, I can come up with two basis vectors for S:

f1 = (1, 3, 0),  f2 = (0, 2, 1).    (32)

These aren't orthonormal, though, so this is where Gram-Schmidt comes in.

Step 1.

1. Orthogonalize: There's no orthogonalization to do on this first step.

2. Normalize: I scale f1 down to length 1:

q1 = f1/||f1|| = (1/√10)(1, 3, 0).    (33)

3. Update: Replacing f1 with q1, my set of vectors is now q1, f2.

Step 2.

1. Orthogonalize: I need to orthogonalize f2 to q1. To do this, I break it up as

f2 = q1 r12 + E,    (34)

where E is the orthogonalization, the part of f2 that's perpendicular to q1. To find E, I first need r12, which is given by

r12 = f2 · q1    (35)
    = (0, 2, 1) · (1/√10)(1, 3, 0) = 6/√10.    (36)

This lets me find E:

E = f2 − q1 r12 = (0, 2, 1) − (6/10)(1, 3, 0)    (37)
  = (−0.6, 0.2, 1).    (38)

2. Normalize: Now I scale E to have length 1. The numbers get sort of nasty, so I'll start using decimal approximations:

q2 = E/||E|| = (1/1.1832)(−0.6, 0.2, 1) ≈ (−0.5071, 0.1690, 0.8452).    (39)

3. Update: I replace f2 with q2. So my set of vectors is

q1 = (1/√10)(1, 3, 0),  q2 ≈ (−0.5071, 0.1690, 0.8452).    (40)

4.2 Measuring distance from S

Consider a vector that's not in the subspace S, like b = (2, 3, 4). Let's try to figure out how far b is from S. (I'll try to post a picture of the geometry of this over the weekend.) This is the same thing as determining how far the point (2, 3, 4) is from the plane 3x − y + 2z = 0.

To measure the distance to S, I'm looking for E in the decomposition

b = β1 q1 + β2 q2 + E,    (41)

where the error E is orthogonal to q1 and q2. To find E, I first need to get β1 and β2. These come from some simple dot product action:

β1 = b · q1 = (2, 3, 4) · (1/√10)(1, 3, 0) = 11/√10 ≈ 3.4785,    (42)
β2 = b · q2 ≈ (2, 3, 4) · (−0.5071, 0.1690, 0.8452) ≈ 2.8735.    (43)

Then I can calculate E:

E = b − β1 q1 − β2 q2    (44)
  ≈ (2, 3, 4) − 3.4785 (1/√10)(1, 3, 0) − 2.8735 (−0.5071, 0.1690, 0.8452)    (45)
  ≈ (2.3571, −0.7857, 1.5714).    (46)

Finally, to measure how close b is to S, I take the length of E:

||E|| ≈ 2.94.    (47)

AMATH 352, LECTURE 2, JUNE 25, 2008

1 Mathematical concepts of the day

1.1 Standard basis for R²

The first thing I want to look at today is the standard basis for R². The key idea here is that you can build up an entire vector space using linear combinations of just a handful of vectors. This makes linear transformations tons easier to deal with.

The standard basis vectors of R² are

e1 = (1, 0),  e2 = (0, 1).    (1)

Then the standard basis is the set of these two vectors,

E = {e1, e2}.    (2)

The curly brackets are just there to group e1 and e2 together to form a single object that "is" a basis. (Otherwise I'd have to say that e1 and e2 "are" a basis, which just plain sounds weird.)

Calling E a basis means two key things:

1. Any vector in R² is a linear combination of the vectors in E. For any vector v = (v1, v2), I can write v = v1 e1 + v2 e2. In other words, E spans R².
2. No vector in E is a linear combination of the other vectors in E. In other words, E is linearly independent. The point of this condition is to use the smallest possible number of vectors to construct the full vector space.

1.2 Linear transformations

I touched on linear transformations briefly in the first lecture. I just want to recap this, and flesh it out a little more. Say that T is a transformation that takes a vector in R² and gives me back a different vector in R². I'll denote this T : R² → R², and when I talk about this later, I'll probably end up saying "T maps R² to R²." These are just different ways of saying the exact same thing.

The transformation T is linear if

1. For any vectors u and v,

T(u + v) = T(u) + T(v).    (3)

In other words, T preserves sums.

2. For any vector v and any scalar a,

T(av) = a T(v).    (4)

In other words, T preserves scalings. (I just noticed that the word "scalar" probably comes from the fact that a scalar scales things. Neato.)

2 Rotations

2.1 How to calculate the rotation of a vector

Now let's figure out how to rotate a vector. Say that Rθ is the function that
rotates a vector in R² counterclockwise by θ radians. Given a vector v ∈ R², the result Rθ(v) is v swung through the angle θ.

[Figure: a vector v and its rotation Rθ(v).]

The main question is, if v = (v1, v2), then how do I find the components of Rθ(v)? This is a potentially character-building problem. Fortunately, there's a trick. Say I decompose v into standard basis vectors, v = v1 e1 + v2 e2. When I apply Rθ, this rotates the entire picture.

[Figure: the components v1 e1 and v2 e2 rotating along with v.]

This shows me that

Rθ(v) = Rθ(v1 e1 + v2 e2)    (5)
      = Rθ(v1 e1) + Rθ(v2 e2).    (6)

This tells me that Rθ preserves sums. The other key thing is that Rθ preserves scalings, so I've got

Rθ(v1 e1) = v1 Rθ(e1),  Rθ(v2 e2) = v2 Rθ(e2).    (7)

The picture behind this is pretty straightforward, so I'll let you fill this in for yourself.

What's the point of all this? Well, now I've shown that

Rθ(v) = v1 Rθ(e1) + v2 Rθ(e2).    (8)

And this says that no matter what v is, I can figure out Rθ(v) as long as I know Rθ(e1) and Rθ(e2). So this entire function Rθ is determined by only two pieces of information. All that's left is to figure out Rθ(e1) and Rθ(e2). With a little bit of work (and pictures I'm too lazy to make figures for at the moment),

Rθ(e1) = (cos θ, sin θ),  Rθ(e2) = (−sin θ, cos θ).    (9)

So I wind up with

Rθ(v) = (v1 cos θ − v2 sin θ, v1 sin θ + v2 cos θ).    (10)

Booyah.

2.2 Matrix representation of rotation

You've probably seen matrix multiplication before. But forget about that for a minute. Lemme just take a look back at (10) and think about how I can make my life convenient. First off, see those two vectors in (9)? I'm gonna stick them in a matrix. I'll call it Rθ. So

Rθ = [cos θ  −sin θ; sin θ  cos θ].    (11)

Now say I don't even remember how matrix multiplication is defined. I'm going to define it how I feel like it: I'm going to make the matrix-vector multiplication Rθv give me the answer I want,

Rθv = v1 (cos θ, sin θ) + v2 (−sin θ, cos θ).    (12)

Bam. So Rθv = Rθ(v). That's why Rθ is the matrix representation of Rθ. If you're feeling skeptical, you can check that the way I just defined matrix multiplication does, in fact, agree with the usual definition:

Rθv = [cos θ  −sin θ; sin θ  cos θ] (v1, v2)    (13)
    = (v1 cos θ − v2 sin θ, v1 sin θ + v2 cos θ)    (14)
    = v1 (cos θ, sin θ) + v2 (−sin θ, cos θ).    (15)

But personally, I like the other way of thinking about it way better. It underscores two big things:

- Matrix multiplication just adds together the columns of Rθ, weighted by the components of v.
- The columns of Rθ are what the rotation Rθ does to the standard basis vectors e1 and e2.

3 Scalings

3.1 Scaling in the horizontal and vertical directions

Scalings are the other basic type of linear transformation. The simplest way to do it is by scaling in the horizontal and vertical directions, which are the same directions as the standard basis vectors. To be specific, say I want to scale by a factor of 2 in the horizontal direction of e1, and by a factor of 1/3 in the vertical direction of e2. I'll call the transformation that does this S. Then I'm constructing it so that

S(e1) = 2 e1,  S(e2) = (1/3) e2.    (16)

This means that the matrix representation for S is

S = [2  0; 0  1/3].    (17)

4 Matrix multiplication does not commute (seeing this with crappy vector art)

4.1 Thinking about the order of linear transformations

One pretty basic fact about matrix multiplication is that it doesn't commute. More specifically, say that A and B are both 2 × 2 matrices. In general,

AB ≠ BA.    (18)

If you've taken linear algebra before, this has probably been beaten into your head a bunch of times. You can look at this as a purely algebraic fact. But personally, I think it's a lot cooler to think about it in terms of linear transformations. Specifically, I'm going to look at the transformations Rπ/4 and S that I defined earlier, and their matrix representations Rπ/4 and S. What does it mean to say that, as matrices,

Rπ/4 S ≠ S Rπ/4?    (19)

Well, say I take a vector v ∈ R². Then, considering the left-hand side of (19), the product Rπ/4 S v indicates that I apply the scaling S and then the rotation Rπ/4. On the other hand, considering the right-hand side of (19), the product S Rπ/4 v indicates that I apply the rotation Rπ/4 and then the scaling S. If Rπ/4 S ≠ S Rπ/4, then I should get different outcomes based on the order I apply
these two transformations.

4.2 Visualizing in Matlab

I made a crappy picture of a cat, stored in the structure Cat. First, let's see what happens if I scale, then rotate. This corresponds to Rπ/4 S. To do this, I first define

```matlab
>> theta = pi/4;
>> R = [cos(theta) -sin(theta); sin(theta) cos(theta)];
```

Then I'll run the following script:

```matlab
% Plot the kitty in the first window of a 1x2 subplot
subplot(1,2,1)
plotcurves(Cat)
axis([-2 2 -2 2])
pause

% Scale the kitty
Cat2 = applytransformation(Cat, S);
plotcurves(Cat2)
axis([-2 2 -2 2])
pause

% Rotate the kitty
Cat2 = applytransformation(Cat2, R);
plotcurves(Cat2)
axis([-2 2 -2 2])
```

Now let's see what happens if I rotate, then scale. This corresponds to S Rπ/4. I'll do this with another script, which adds to the existing subplot:

```matlab
% Plot the kitty in the second window of a 1x2 subplot
subplot(1,2,2)
plotcurves(Cat)
axis([-2 2 -2 2])
pause

% Rotate the kitty
Cat2 = applytransformation(Cat, R);
plotcurves(Cat2)
axis([-2 2 -2 2])
pause

% Scale the kitty
Cat2 = applytransformation(Cat2, S);
plotcurves(Cat2)
axis([-2 2 -2 2])
```

Looks pretty different, eh? And no, I'm not going to show you the pictures here. If I did everything for you, you'd have no reason to come to class, would you?

AMATH 352, LECTURE 16, AUGUST 1, 2008

1 Matrix products

1.1 Motivation and definition

We've seen the product of a matrix M with a column vector v a zillion times over by now. The idea behind the product of two matrices is just to add more column vectors in the same place as v. If w is another column vector (of the right size, of course), then I can stick it next to v to form a matrix [v w]. Then, to take the product of M with this new matrix, I just apply M to the columns individually:

M [v w] = [Mv  Mw].    (1)

I can do this with as many columns as I like. Wording this a bit more generally, say that M is a matrix with dimensions r × k, and N is a matrix with dimensions k × d. Then their product MN is defined by

MN = M [N(:,1) N(:,2) N(:,3) ⋯ N(:,d)] = [M N(:,1)  M N(:,2)  M N(:,3)  ⋯  M N(:,d)].

I'm borrowing Matlab's notation for the columns of N, so N(:,1) is the first column of N, N(:,2) is the second column, and so on. For the product MN to be defined, the
number of columns of M has to equal the number of rows of N. This is just so that each column of N is a column vector with the right dimensions to be multiplied by M.

1.2 Example

To see an example of this in action, let's take two random matrices in Matlab:

```matlab
>> M = rand(5,3);
>> N = rand(3,4);
```

First, let's see what happens when we multiply the columns of N individually by M:

```matlab
>> [M*N(:,1) M*N(:,2) M*N(:,3) M*N(:,4)]
```

Compare this to what we get using Matlab's built-in matrix multiplication:

```matlab
>> M*N
```

They're the same. Sweet.

2 Where are matrix products used?

Earlier on, we played around with using matrix products to do several linear transformations in a row. In the context of linear systems, though, there's one main reason that matrix products are useful.

First, remember that a matrix-vector product Mv uses the components of v to make a linear combination of the columns of M. Now suppose that M is an r × d matrix. Let N be a d × d matrix, which is really just d column vectors in a row.

[PICTURE HERE]

Each column of N is a way of recombining the columns of M into a new vector. The entire matrix product MN takes the d columns of M and reassembles them into d new columns. In a certain sense, then, the product MN "rearranges" the columns of M.

Now say I'm trying to solve a linear system Ax = b. I might be able to write A as a matrix product A = MN, where M and N are somehow simpler than A. The word "simpler" is subjective, of course, but mostly I'd like for the system MNx = b to be easier to solve than the original system Ax = b.

3 LU decompositions

3.1 An example

Consider the system Ax = b with

A = [1 2 3; 2 8 11; −3 2 7],  b = (1, 1, 1).    (2)

As it happens, I can write A as a matrix product A = LU:

A = LU = [1 0 0; 2 1 0; −3 2 1] [1 2 3; 0 4 5; 0 0 6].    (3)

The matrices L and U are of special forms. L is lower triangular, meaning that it only has nonzero elements on or below the diagonal. Meanwhile, U is upper triangular, meaning that it only has nonzero elements on or above the diagonal.

If I replace Ax = b with LUx = b, I can find x by solving two systems of equations, both of which are really easy.

1. First, let y = Ux.
I don't know what x is just yet, but Ux will definitely be a column vector, so I'm just giving it a name of its own. Now I'm looking at the system Ly = b. Written out in full, this is

[1 0 0; 2 1 0; −3 2 1] (y1, y2, y3) = (1, 1, 1).    (4)

This translates to a really simple system of equations:

y1 = 1
2y1 + y2 = 1    (5)
−3y1 + 2y2 + y3 = 1.

This can be solved right away by forward substitution. The first equation gives me a value for y1. I substitute this into the second equation and solve for y2. Then I substitute the values for y1 and y2 into the third equation and solve for y3:

y1 = 1,  2(1) + y2 = 1 ⟹ y2 = −1,  −3(1) + 2(−1) + y3 = 1 ⟹ y3 = 6.    (6)

2. Now that I have y, I find x by solving the system Ux = y. Written out in full, this is

[1 2 3; 0 4 5; 0 0 6] (x1, x2, x3) = (1, −1, 6).    (7)

This translates to the system of equations

x1 + 2x2 + 3x3 = 1
4x2 + 5x3 = −1    (8)
6x3 = 6.

This system can be solved by back substitution, which is the same thing as forward substitution but going backwards through the variables/equations. The last equation gives me x3. Then I substitute the value of x3 into the second equation and solve for x2. Finally, I substitute the values of x2 and x3 into the first equation, and this gives me x1:

6x3 = 6 ⟹ x3 = 1,  4x2 + 5(1) = −1 ⟹ x2 = −3/2,  x1 + 2(−3/2) + 3(1) = 1 ⟹ x1 = 1.    (9)

This gives me my solution vector

x = (1, −3/2, 1).    (10)

And if you like, you can check that Ax is, in fact, equal to b.

3.2 Key ingredients of the LU factorization

Here's a more general recap of what just happened. The key components of an LU factorization A = LU are

1. A lower triangular matrix L. Systems involving L can be solved by forward substitution.
2. An upper triangular matrix U. Systems involving U can be solved by back substitution.

Once A has been factorized like this, any system Ax = b can be solved in two easy steps. (As easy as you get when solving linear systems, anyway.)

4 Finding an LU decomposition

4.1 Example

Let's look at a specific example of finding an LU decomposition. This is really something that makes more sense computationally than when working things out by hand, but hopefully this'll help outline some of the organizational steps involved. I'll find the LU factorization

A = LU    (11)

for the matrix A
on the left of this equation:

[3 2 −1; −9 −5 1; 9 7 −3] = [1 0 0; L21 1 0; L31 L32 1] [U11 U12 U13; 0 U22 U23; 0 0 U33].    (12)
         A                              L                             U

If you just multiply everything out and hack away, this gets kind of painful. The key thing is to work on this one column at a time.

Column 1. Consider how the first column of A is formed using the product on the right:

A(:,1) = L(:,1) U11.    (13)

The first column of A is simply the first column of L, scaled by U11. When I write this out, I get

(3, −9, 9) = U11 (1, L21, L31).    (14)

Matching the first components gives me U11 = 3. Then I can solve for this column of L all at once:

(1, L21, L31) = (1/3)(3, −9, 9) = (1, −3, 3).    (15)

This last step is a little more striking with larger matrices. If the system were 50 × 50, this step would be giving us 49 elements of the matrix L.

Column 2. Matching up the second columns in (12),

A(:,2) = L(:,1) U12 + L(:,2) U22.    (16)

When I fill in the parts that I know, I get

(2, −5, 7) = U12 (1, −3, 3) + U22 (0, 1, L32).    (17)

Matching up the first component on each side gives me U12 = 2. Then I can subtract the first vector on the right from both sides:

(0, 1, 1) = U22 (0, 1, L32).    (18)

Now I can match up the second components to get U22 = 1. And this lets me solve for the second column of L:

(0, 1, L32) = (0, 1, 1).    (19)

Column 3. Matching up the third columns of (12),

A(:,3) = L(:,1) U13 + L(:,2) U23 + L(:,3) U33.    (20)

Plugging in components for the vectors,

(−1, 1, −3) = U13 (1, −3, 3) + U23 (0, 1, 1) + U33 (0, 0, 1).    (21)

I can match up the first components right away to get U13 = −1. Then I subtract the first vector on the right from both sides to get

(0, −2, 0) = U23 (0, 1, 1) + U33 (0, 0, 1).    (22)

Now I can match up the second components to get U23 = −2. Again, I subtract the first vector on the right from both sides:

(0, 0, 2) = U33 (0, 0, 1).    (23)

And this gives me U33 = 2. Now I have the complete LU factorization:

[3 2 −1; −9 −5 1; 9 7 −3] = [1 0 0; −3 1 0; 3 1 1] [3 2 −1; 0 1 −2; 0 0 2].    (24)
         A                           L                      U

4.2 Overview

This looks a bit daunting, but the process is really pretty simple. Say the matrix A is d × d. Then, for each column k = 1, 2, …, d:

1. Solve for the elements of U in column k, going from top to bottom.
2. Find column k of L all at once.
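The column-by-column recipe above is short enough to program directly. The course works in Matlab, but purely as a self-contained cross-check, here is a pure-Python sketch (the function name `lu` is mine, not from the notes) run on the 3 × 3 example worked above:

```python
def lu(A):
    """Column-by-column LU factorization with ones on L's diagonal (no pivoting):
    for each column k, solve for U's entries top to bottom, then fill column k of L."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # 1. Solve for the elements of U in column k, going from top to bottom.
        for i in range(k + 1):
            U[i][k] = A[i][k] - sum(L[i][j] * U[j][k] for j in range(i))
        # 2. Find column k of L all at once.
        for i in range(k + 1, n):
            L[i][k] = (A[i][k] - sum(L[i][j] * U[j][k] for j in range(k))) / U[k][k]
    return L, U

# The matrix from the worked example, equation (12).
A = [[3, 2, -1], [-9, -5, 1], [9, 7, -3]]
L, U = lu(A)
print(L)  # matches (24): [[1,0,0], [-3,1,0], [3,1,1]]
print(U)  # matches (24): [[3,2,-1], [0,1,-2], [0,0,2]]
```

The output reproduces the L and U found by hand in (24).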

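Once a factorization A = LU is in hand, the two triangular solves from the example in Section 3.1 are just as short. Again as a cross-check outside the course's Matlab, here is a pure-Python sketch (the helper names `forward_sub` and `back_sub` are mine) reproducing the solution x = (1, −3/2, 1):

```python
def forward_sub(L, b):
    """Solve L y = b for lower-triangular L by forward substitution."""
    y = []
    for i in range(len(b)):
        y.append((b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i])
    return y

def back_sub(U, y):
    """Solve U x = y for upper-triangular U by back substitution."""
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# The factors and right-hand side from the Section 3.1 example, equation (3).
L = [[1, 0, 0], [2, 1, 0], [-3, 2, 1]]
U = [[1, 2, 3], [0, 4, 5], [0, 0, 6]]
b = [1, 1, 1]

y = forward_sub(L, b)  # [1, -1, 6], as in (6)
x = back_sub(U, y)     # [1, -1.5, 1], as in (10)
```

Each solve touches every matrix entry once, which is why splitting Ax = b into these two triangular systems is so much cheaper than attacking A directly.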

Please Note: Refunds can never be provided more than 30 days after the initial purchase date regardless of your activity on the site.