Diff Equations & Matrix Alg I (MA 221)
Class notes, Rose-Hulman Institute of Technology
Uploaded October 19, 2015
Supplemental Notes on Matrix Algebra for MA 221
Jerry R. Muir, Jr.
September 27, 2001

1 Review of Terminology

We begin by recalling some terms covered in the text. Recall that a subset V of R^n is a subspace of R^n provided that x + y is in V whenever both x and y are in V, and cx is in V whenever x is in V and c is any scalar.

Suppose that x_1, ..., x_n are vectors in R^m. A linear combination of these vectors is a vector x in R^m given by x = c_1 x_1 + ... + c_n x_n for some scalars c_1, ..., c_n. The vectors are linearly independent if the only linear combination of the vectors that sums to 0 is the combination where every scalar is 0. The span of the vectors is the set of all linear combinations of the vectors. The span of a set of vectors is a subspace.

If V is a subspace of R^n, then a basis for V is a set of linearly independent vectors whose span is V. The dimension of V is the number of vectors in a basis. Further explanation and examples of these topics are found in the text.

2 Matrices as Transformations

Recall from calculus that a function f defined on a set of real numbers is a process by which every element x in the set of definition (the domain) is assigned a unique number f(x). The collection of all numbers f(x) for every x in the domain is called the range of f.

Example 2.1. Let f(x) = x^2 and g(x) = ln x. Then f has a domain of all real numbers R and a range of all nonnegative reals, while g has a domain of all positive reals and a range of R.

It turns out that matrix multiplication is a function on vectors. If A is an m x n matrix, then for every input vector x in R^n, A yields a vector Ax in R^m. For this reason we consider A a function, specifically a linear transformation. Certainly any vector in R^n can be multiplied by an m x n matrix, and therefore the domain D(A) of the matrix A is all vectors in R^n. The range, the set of all vectors Ax, is not necessarily all of the vectors in R^m. Denote the range of A by R(A).

Example 2.2. Let A be the 3 x 2 matrix given by

        [ 1 0 ]
    A = [ 0 1 ]
        [ 0 0 ]

Then D(A) = R^2, and R(A) is the set of all vectors in R^3 with third component 0. This is easy to see
because, if x = (x_1, x_2), then Ax = (x_1, x_2, 0).

3 The Range as a Subspace

Suppose that y_1 and y_2 are vectors in R(A) for a matrix A. There must be vectors x_1 and x_2 in the domain of A such that Ax_1 = y_1 and Ax_2 = y_2. Then, since the domain of A is a vector space, x_1 + x_2 is in the domain of A. Notice that

    A(x_1 + x_2) = Ax_1 + Ax_2 = y_1 + y_2.

Therefore y_1 + y_2 is in R(A). Similarly, if c is a scalar constant, then cx_1 is in the domain of A and A(cx_1) = cAx_1 = cy_1, and thus cy_1 is in R(A). This shows that R(A) is a subspace. In fact, if A is an m x n matrix, then R(A) is a subspace of R^m.

Example 3.1. For the matrix A of Example 2.2, we saw that R(A) consists of all vectors y in R^3 such that y_3 = 0. (Here y_3 is the third component of y.) It is easy to check that this describes a subspace of R^3.

Since R(A) is a subspace, and subspaces are most easily described using a basis, we wish to have a basis for R(A). The key to determining the basis for the range of A is to recall that the product Ax of A and a vector x is a linear combination of the columns of A with coefficients from x. That is, if a_1, ..., a_n are the columns of A and x_1, ..., x_n are the components of x, then

    Ax = x_1 a_1 + x_2 a_2 + ... + x_n a_n.

Therefore any vector in R(A) is a linear combination of the columns of A. This does not mean that the columns of A necessarily form a basis for R(A).

Example 3.2. Suppose that A is the 2 x 3 matrix given by

    A = [ 1 0 2 ]
        [ 0 1 3 ]

Then clearly the columns of A are linearly dependent: the third column is 2 times the first plus 3 times the second. Therefore the columns cannot be a basis for the range. (Of course, a basis for a subspace of R^2 cannot have more than two vectors.) However, the matrix

    B = [ 2 3 ]
        [ 4 6 ]

does not have linearly independent columns either. In fact, given x = (x_1, x_2),

    Bx = (2x_1 + 3x_2) [ 1 ]
                       [ 2 ]

Therefore R(B) is the one-dimensional subspace of R^2 spanned by (1, 2).

After performing Gaussian elimination on a matrix A, we are left with a matrix in echelon form. The first nonzero entry in a row is called a pivot. The columns of the echelon matrix containing the pivots are called pivot columns. The columns of A that become pivot columns after
Gaussian elimination form a basis for R(A). This is because linear combinations of columns are preserved during Gaussian elimination. That is, if one column is a linear combination of others in the echelon matrix, then that column in the original matrix is a linear combination of the same columns with the same coefficients.

Example 3.3. Let the 3 x 4 matrix A be given by

    A = [  1  3  4  2 ]
        [  2  6 -1 -5 ]
        [ -2 -6  3  7 ]

After Gaussian elimination we arrive at the echelon matrix

    [ 1 3 4 2 ]
    [ 0 0 9 9 ]
    [ 0 0 0 0 ]

The pivot columns of this matrix are the first and third columns. Therefore R(A) is the 2-dimensional subspace of R^3 with basis (1, 2, -2) and (4, -1, 3).

4 The Nullspace

In the previous section we discussed one subspace related to a matrix A. Another important subspace related to A is the nullspace. This is the set N(A) of all solutions to the homogeneous equation Ax = 0.

Before going any further, let us verify that N(A) is indeed a subspace. Let x_1 and x_2 be vectors in N(A). Then

    A(x_1 + x_2) = Ax_1 + Ax_2 = 0 + 0 = 0,

and thus x_1 + x_2 is also in N(A). If c is a scalar constant, then A(cx_1) = cAx_1 = c0 = 0, and hence cx_1 is in N(A). Thus N(A) is a subspace, specifically a subspace of the domain.

Finding N(A) is something we already know how to do, because we are just solving a matrix equation. Recall that when performing Gaussian elimination to solve Ax = 0, we arrive at an echelon matrix. For every column in the echelon matrix that is not a pivot column, we get a free variable. These free variables will accompany the basis vectors for N(A).

Example 4.1. Let A be the matrix defined in Example 3.3. We determined the echelon matrix from A in that example. The solution to the system then comes by letting x_2 = s, x_4 = t and solving for the remaining variables to get x_3 = -t and x_1 = -3s + 2t. In vector form, this is

        [ -3s + 2t ]     [ -3 ]     [  2 ]
    x = [    s     ] = s [  1 ] + t [  0 ]
        [   -t     ]     [  0 ]     [ -1 ]
        [    t     ]     [  0 ]     [  1 ]

Therefore all vectors x in N(A) are linear combinations of (-3, 1, 0, 0) and (2, 0, -1, 1). These vectors form a basis of N(A).

5 Rank and Independent Equations

In the previous two sections we discussed the range and the nullspace of a matrix. Notice that when computing either of these spaces we
rely on the echelon matrix arrived at from Gaussian elimination. Specifically, for every pivot column of A we get a basis vector for R(A); for every non-pivot column of A we get a free variable, and hence a basis vector for N(A). We have now spoken for all of the columns of A: the number of basis vectors of R(A) plus the number of basis vectors of N(A) equals the number of columns of A, which in turn is the number of components in any vector that A can multiply. In other words, we have the relationship

    dim R(A) + dim N(A) = dim D(A).

Example 5.1. Let A be the matrix defined in Example 3.3. We saw in that example that the basis of R(A) has two elements, and therefore dim R(A) = 2. In Example 4.1 we see that the basis of N(A) also has two elements, and thus dim N(A) = 2. Since the domain of A is R^4, which has dimension 4, this illustrates the above equality.

The rank of a matrix A is the number of pivots in the echelon matrix derived from A through Gaussian elimination. Notice that the rank of A is the same as the dimension of R(A).

A matrix A has full column rank if every column of A is a pivot column. In this situation, every column of A is a basis vector for R(A). Since there are no free variables, N(A) = {0}.

A matrix A has full row rank if every row of A contains a pivot. In this case there is no row of 0's in the echelon matrix, and therefore there will be no impossible equations. Thus Ax = b will have a solution for every vector b. In other words, if A is m x n, then R(A) = R^m.

Example 5.2. The matrix A of Example 2.2 has full column rank, and the matrix A of Example 3.2 has full row rank.

If a matrix A has both full column rank and full row rank, then A has full rank. In this case A must be square. Since every column is a pivot column, det A != 0 and A is nonsingular. Thus the rank of a square matrix is another of the many properties of the matrix related to its invertibility.

6 The General Solution to a System

As we know, the central problem in matrix algebra is the solution of the equation Ax = b for a matrix A and vector b of appropriate size. Let us suppose
for the moment that this equation has at least one solution, and choose a particular solution x_p. If x_n is a vector in N(A), then set x = x_p + x_n. It turns out that

    Ax = Ax_p + Ax_n = b + 0 = b.

This shows that x is a solution to the matrix equation.

Let's look at the opposite direction. Suppose still that x_p is a particular solution to Ax = b, and suppose that x is any other solution. Then if x_n = x - x_p, we have

    Ax_n = Ax - Ax_p = b - b = 0.

This shows that x_n, as we defined it, is in N(A).

To summarize: given a particular solution x_p to Ax = b, x is a solution to Ax = b if and only if x = x_p + x_n for some vector x_n in N(A). Geometrically, this means that the set of solutions to Ax = b is obtained by adding the same vector x_p to every vector in the nullspace of A. This is simply a translation of the nullspace. The solution set to Ax = b is therefore parallel to the nullspace.

Example 6.1. For lack of originality, and due to laziness of the author, let us once again consider the matrix A defined in Example 3.3. Let b = (1, 2, -2). Clearly x_p = (1, 0, 0, 0) is a particular solution to Ax = b. In Example 4.1 we calculated that N(A) is the span of (-3, 1, 0, 0) and (2, 0, -1, 1). Therefore all of the solutions to Ax = b are of the form

                    [ 1 ]         [ -3 ]        [  2 ]
    x = x_p + x_n = [ 0 ] + alpha [  1 ] + beta [  0 ]
                    [ 0 ]         [  0 ]        [ -1 ]
                    [ 0 ]         [  0 ]        [  1 ]

where alpha and beta are any scalars.

7 Least Squares Approximations

Our chief goal when studying linear algebra techniques in this course is to be able to solve linear problems that arise in engineering or other applications. Often these systems will have no solution. Usually this is because there are more equations than variables; sometimes this can be attributed to observational error in experimentation.

Suppose that we are given a system Ax = b for which there is no solution. The next best thing we can do is to find some vector x such that Ax comes as close to b as possible. Clearly Ax lies in R(A), and therefore we must find the vector p in R(A) that lies closest to b and then solve the system Ax = p.

Let us simplify things somewhat. Suppose that b is a vector in R^m and that V is a subspace of R^m that does not contain b. What vector p in V is closest to b? It is the vector p in V such
that b - p is perpendicular to V. (Recall that b - p is the vector pointing from the head of p to the head of b.)

Suppose y_1, ..., y_n form a basis of V. Then b - p must be perpendicular to each vector y_k. This translates to y_k . (b - p) = 0. In terms of transposes, y_k^T (b - p) = 0. Now define Y to be the matrix whose columns are the vectors y_1, ..., y_n. Then the transposed vectors y_k^T form the rows of Y^T. Therefore we have that

    Y^T (b - p) = 0,

because each entry of Y^T (b - p) is equal to 0 from above. Rewrite this as

    Y^T p = Y^T b.

Now suppose that Ax = b has no solution and we are trying to find the vector x so that Ax comes as close to b as possible. With respect to the previous paragraph, V = R(A), and if the columns of A are independent, then they form a basis for R(A). If p is the closest vector in R(A) to b, then by the previous paragraph A^T p = A^T b. Thus if x is the vector such that Ax = p, then

    A^T A x = A^T b.

This describes x when the columns of A are independent. Similar reasoning will show that the same equation is true when the columns are dependent, but the following method, which finds a unique solution x, will not work.

If A^T A were nonsingular, then we could solve for x. Suppose that A has n columns. Then A^T A is n x n, hence it is at least possible that it is nonsingular. To see whether A^T A is nonsingular, we will examine N(A^T A). If this nullspace contains only 0, then the matrix is nonsingular. Suppose that A^T A x = 0. Multiply both sides by x^T:

    x^T A^T A x = 0.

The left-hand side of this expression can be rewritten

    x^T A^T A x = (Ax)^T (Ax) = (Ax) . (Ax) = ||Ax||^2.

Therefore ||Ax|| = 0, and hence Ax = 0. Remember that Ax is a linear combination of the columns of A. Thus x = 0, because the columns of A are independent. This shows that the nullspace N(A^T A) is exactly {0}. It follows that A^T A is nonsingular.

Recall that the vector x such that Ax comes as close to b as possible satisfies the equation A^T A x = A^T b. It now is possible to solve for x because A^T A is nonsingular: the least squares approximation to Ax = b is

    x = (A^T A)^{-1} A^T b.

The reason this is called the least squares approximation is because we are attempting to minimize the
quantity ||b - Ax||, which can be done by minimizing ||b - Ax||^2. This expression is the sum of the squares of the components of the vector b - Ax, and therefore we are searching for the least of the sums of squares. (This could also be done using methods from several-variable calculus, specifically by using partial derivatives.)

The following is an example of the least squares approximation.

Example 7.1. The points (-1, 2), (0, 1), (1, 2), and (2, 3) in R^2 do not lie on the same line. Suppose that we want to find the linear function that comes the closest to passing through these points. Such a function has the form f(t) = at + b. These points translate to the system of equations

    -a + b = 2
         b = 1
     a + b = 2
    2a + b = 3

We write this system Ax = b, where

        [ -1 1 ]         [ a ]       [ 2 ]
    A = [  0 1 ]     x = [ b ]   b = [ 1 ]
        [  1 1 ]                     [ 2 ]
        [  2 1 ]                     [ 3 ]

This system has no solution. However, since the columns of A are linearly independent, we know that the least squares solution to the system is the vector x = (A^T A)^{-1} A^T b. Here

    A^T A = [ 6 2 ]      A^T b = [ 6 ]
            [ 2 4 ]              [ 8 ]

so

    x = (A^T A)^{-1} A^T b = 1/20 [  4 -2 ] [ 6 ] = [ 2/5 ]
                                  [ -2  6 ] [ 8 ]   [ 9/5 ]

Therefore the best-fit line is y = f(t) = (2/5)t + 9/5.
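The "matrix as function" viewpoint of Section 2 can be made concrete in code. The following is a minimal pure-Python sketch of Example 2.2; the helper name matvec is my own, not from the text.

```python
# Example 2.2 as a function: the 3x2 matrix A maps each x in R^2 to a
# vector Ax in R^3 whose third component is always 0.

def matvec(A, x):
    """Row i of the result is the dot product of row i of A with x."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

A = [[1, 0],
     [0, 1],
     [0, 0]]

print(matvec(A, [5, -2]))   # [5, -2, 0]: the range is the plane x3 = 0
```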
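The fact from Section 3 that a product Ax is a linear combination of the columns of A can also be checked directly. The sketch below (the helper names columns and combo are mine) uses the matrix B of Example 3.2: both columns are multiples of (1, 2), so every product Bx lands in the one-dimensional subspace spanned by (1, 2).

```python
# Bx computed as a linear combination of the columns of B, for the
# matrix B of Example 3.2.

def columns(A):
    """Columns of A, each as a list."""
    return [list(col) for col in zip(*A)]

def combo(cols, coeffs):
    """Componentwise sum of coeff_j * column_j."""
    return [sum(c * col[i] for c, col in zip(coeffs, cols))
            for i in range(len(cols[0]))]

B = [[2, 3],
     [4, 6]]

y = combo(columns(B), [5, -1])   # same as the matrix product B(5, -1)
print(y)                          # [7, 14] = 7 * (1, 2)
```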
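The elimination used in Examples 3.3 and 4.1 can be sketched in exact arithmetic. The function below is my own minimal Gaussian elimination using Python's fractions module; it does no row scaling, so echelon entries may differ from those displayed in the text by a nonzero factor, but the pivot columns are the same: columns 1 and 3, giving dim R(A) = 2 and dim N(A) = 4 - 2 = 2.

```python
# Gaussian elimination in exact arithmetic for the matrix of Example 3.3.
# Returns the echelon matrix and the pivot-column indices (0-based).
from fractions import Fraction

def echelon(A):
    M = [[Fraction(v) for v in row] for row in A]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        # find a usable pivot at or below row r in column c
        pr = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pr is None:
            continue                      # free column: no pivot here
        M[r], M[pr] = M[pr], M[r]
        for i in range(r + 1, rows):      # clear the entries below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[1, 3, 4, 2],
     [2, 6, -1, -5],
     [-2, -6, 3, 7]]

E, piv = echelon(A)
print(piv)   # [0, 2]: columns 1 and 3 of A are a basis for R(A)
```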
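The description of the general solution in Section 6 can be verified numerically for Example 6.1: adding any combination of the nullspace basis vectors of Example 4.1 to the particular solution still solves Ax = b. A pure-Python check; the helper matvec and the particular scalar choices are mine.

```python
# General solution of Example 6.1: x_p + alpha*n1 + beta*n2 solves
# Ax = b for any scalars alpha and beta.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

A = [[1, 3, 4, 2],
     [2, 6, -1, -5],
     [-2, -6, 3, 7]]
b = [1, 2, -2]

xp = [1, 0, 0, 0]        # particular solution: A xp = first column of A
n1 = [-3, 1, 0, 0]       # nullspace basis vectors from Example 4.1
n2 = [2, 0, -1, 1]

alpha, beta = 4, -7      # arbitrary scalars
x = [p + alpha * u + beta * v for p, u, v in zip(xp, n1, n2)]
print(matvec(A, x) == b)  # True
```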
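Finally, the least squares computation of Example 7.1 can be carried out exactly with the normal equations. The sketch below (variable names are mine) solves the 2 x 2 system A^T A x = A^T b by Cramer's rule and checks that the residual b - Ax is perpendicular to both columns of A, as Section 7 requires.

```python
# Least squares fit of Example 7.1: the line f(t) = a*t + b closest to
# the points (-1,2), (0,1), (1,2), (2,3).
from fractions import Fraction as F

ts = [-1, 0, 1, 2]
ys = [2, 1, 2, 3]

# A has rows [t, 1], so A^T A = [[sum t^2, sum t], [sum t, n]]
s_tt, s_t, n = sum(t * t for t in ts), sum(ts), len(ts)
r1, r2 = sum(t * y for t, y in zip(ts, ys)), sum(ys)  # entries of A^T b

det = s_tt * n - s_t * s_t          # det(A^T A) = 20
a = F(r1 * n - s_t * r2, det)       # slope, by Cramer's rule
b = F(s_tt * r2 - s_t * r1, det)    # intercept

res = [y - (a * t + b) for t, y in zip(ts, ys)]
# residual is perpendicular to both columns of A: A^T (b - Ax) = 0
assert sum(res) == 0 and sum(t * r for t, r in zip(ts, res)) == 0
print(a, b)   # 2/5 9/5
```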