Applied Linear Algebra (MATH 2210)
These 6-page class notes were uploaded by Mary Veum on Thursday, September 17, 2015. The notes belong to MATH 2210 at the University of Connecticut, taught by Gerald Leibowitz in Fall. Since their upload, they have received 43 views. For similar materials see /class/205827/math-2210-university-of-connecticut in Mathematics (M) at University of Connecticut.
Some notes on Section 4.6, The Rank of a Matrix

The column space of a matrix is the set of all linear combinations of the columns of the matrix. We have observed that the dimension of Col A is the number of pivot columns of A. That dimension was called the rank of A in Section 2.9 of the text, although it should probably have been called the column rank.

The row space of A is defined in the same way: Row A is the set of all linear combinations of the rows of A. One way to think of it is as the column space of the transpose of A. Before we state the dimension fact for Row A, we need a couple of lemmas about subspaces.

Lemma 1. If H is a subspace of a vector space V and if S is a subset of H, then Span S ⊆ H.

Proof: If x1, ..., xp are members of S and c1, ..., cp are scalars, then c1 x1 + ... + cp xp is a linear combination of members of H, since S ⊆ H. Since H is a subspace of V, that sum is also a member of H. Thus H contains every member of Span S.

Lemma 2. In a vector space, if S1 ⊆ Span S2 and S2 ⊆ Span S1, then Span S1 = Span S2.

Proof: If we apply the preceding fact with S = S1 and H = Span S2, we conclude that Span S1 ⊆ Span S2. If we apply the fact with S = S2 and H = Span S1, we conclude that Span S2 ⊆ Span S1. Since each is a subset of the other, the two span sets are equal.

Theorem 4.13. (a) If A and B are row-equivalent matrices, then Row A = Row B. (b) If B is in echelon form, its nonzero rows form a basis for Row B.

Proof of (a): For each type of elementary row operation e, it is easy to check that each row of e(A) is a linear combination of rows of A, so the rows of e(A) form a subset of Row A. We go from e(A) back to A using e^(-1), which is also a row operation, so the rows of A form a subset of Row e(A). Hence, by the second of the preceding lemmas, A and e(A) have the same row space. But if A ~ B, then we go from A to B using a finite sequence of elementary row operations; since the row space stays the same after each operation, it follows that Row A = Row B.

Putting the two parts of this theorem together, we have:

Corollary. If A and B are row-equivalent matrices and B is in echelon form, then the nonzero rows of B form a basis
for Row A.

This gives us a way to produce a basis for the row space of any matrix, by using row reduction. E.g., in Practice Problem 2 in Section 4.3 we found that

A = [ 1   6   2  -4 ]
    [-3   2  -2  -8 ]
    [ 4  -1   3   9 ]

has echelon form

B = [ 1   6   2   -4 ]
    [ 0  20   4  -20 ]
    [ 0   0   0    0 ]

So one basis for Row A is {(1, 6, 2, -4), (0, 20, 4, -20)}.

If we call the dimension of Row A the row rank of A, we see from Theorem 4.13 that the row rank of A equals the number of nonzero rows of the echelon matrix B, i.e.

Row Rank A = Col Rank A.

A matrix has as many linearly independent rows as it has linearly independent columns. This is part (a) of The Rank Theorem (4.14).

Proof: Row Rank A = number of nonzero rows of B = number of pivots in B = Col Rank A.

Some notes on Section 4.6, continued

If A is the matrix whose columns are (1, 2, 0, 3) and (7, -1, 7, 0), what is the dimension of the row space of A? In 4.5 we noted that the columns of this matrix A are a basis for the span of the columns of A, which is why the answer to Problem 4 in 4.5 is that the subspace there has dimension 2. Without calculating, we have dim Row A = Row Rank A = Col Rank A = dim Col A = 2. So if we were to calculate and find a basis for the row space of A, we would find that our basis has two members.

Row-equivalent matrices have the same row spaces and the same null spaces. So we can find the row and null spaces of any matrix by first reducing to an echelon form and then computing with the echelon matrix.

Example: Consider the matrix whose columns are the column vectors in 4.5, Exercise 12:

A = [ 1  -3  -8  -3 ]
    [-2   4   6   0 ]
    [ 0   1   5   7 ]

As mentioned in an earlier handout, this matrix is row equivalent to

B = [ 1  -3   -8  -3 ]
    [ 0  -2  -10  -6 ]
    [ 0   0    0   4 ]

Since there are exactly 3 pivot locations in B, A has row rank 3. And since there is exactly one free variable, A has nullity 1.

Let's find a basis for the row space of A. Actually, there's no work to do: the nonzero rows of the echelon form B are the members of a basis for Row A. So one basis for Row A is {r1, r2, r3}, with r1 = (1, -3, -8, -3), r2 = (0, -2, -10, -6), and r3 = (0, 0, 0, 4).

Now let's find a basis for Nul A. We must solve the equation Ax = 0, which is equivalent to Bx = 0. It's probably best to
reduce B further to reduced row echelon form. The reduced row echelon form turns out to be

[ 1  0  7  0 ]
[ 0  1  5  0 ]
[ 0  0  0  1 ]

so for the null space, x1 = -7 x3, x2 = -5 x3, x4 = 0, and x3 is a free variable. So the one-vector set {(-7, -5, 1, 0)} is a basis for Nul A.

Comments on Exercises in Section 4.6, The Rank of a Matrix

The exercises in 4.6 tend to be thought questions rather than calculation questions, testing our grasp of the theory presented in the section.

8. A matrix A is 5 x 6 and has four pivot columns. So Col A has dimension 4 (the number of pivot positions) and Nul A has dimension 6 - 4 = 2. All vector spaces of dimension 4 are isomorphic to R^4, as we saw in the section on coordinate systems (4.4). So, abstractly, Col A is a copy of R^4. But Col A is in R^5, so it is a four-dimensional subspace of R^5 but is not R^4.

9. If the null space of a 5 x 6 matrix is 4-dimensional, then its column space is 6 - 4 = 2 dimensional. There are 4 free variables and two basic variables for the matrix.

12. If the null space of a 5 x 6 matrix is 4-dimensional, then its row space has dimension equal to its rank, which is 6 - 4 = 2. So only two of the five rows are linearly independent.

14. If A is a 4 x 3 matrix, then, since a spanning set of k vectors gives a subspace of dimension at most k, the row rank of A is at most 4 and the column rank of A is at most 3. Since the two ranks are the same, the largest possible dimension of Row A is 3. The same is true if A is a 3 x 4 matrix.

16. If A is a 6 x 4 matrix, then Nul A is a subspace of R^4. But since rank A + dim Nul A = 4, while the rank can be any value from 0 through 4, the dimension of the null space of A can be anything from 4 down to 0. So the smallest possible dimension of Nul A is 0. One simple case where zero is achieved: let A be the matrix whose first four rows form I_4 and whose other two rows are all zeros. Then A is 6 x 4, but Ax = 0 is satisfied only by x = 0. So Nul A = {0} and has dimension zero.

19. It is given that the solutions of a certain homogeneous system of five linear equations in six unknowns are all
multiples of one nonzero solution. Translated into a matrix equation, we are given the fact that A is 5 x 6 and that Ax = 0 has a one-dimensional set of solutions. Thus Nul A has dimension 1. Since Rank A + 1 = 6, Rank A = 5. Col A is a subspace of R^5 and its dimension is 5. Since the dimension of a subspace of a finite-dimensional vector space can equal the dimension of the space only if the subspace is the entire space, it follows that Col A = R^5. Similarly, in Problem 20, the null space has dimension two and the number of columns is eight, so the column space, which is the range of the associated matrix transformation, has dimension six. Since it is a part of R^6 and its dimension is six, it is all of R^6.

5.1, 5.2. EIGENVALUES, EIGENVECTORS, EIGENSPACES, CHARACTERISTIC EQUATION

If A is a square matrix, a scalar λ is called an eigenvalue of A if there exists a column vector x ≠ 0 such that Ax = λx. If this is true, then each such nonzero x is called an eigenvector for A associated with λ. There are other standard phrases that are used for these concepts: an eigenvalue is also called a characteristic value or a latent root, and an eigenvector is also called a characteristic vector. Since, for a specific λ, the set of all solutions of Ax = λx is a subspace, we can call the set of all x (including the zero vector) which satisfy Ax = λx the eigenspace associated with λ.

Notice that if λ = 0, then λ is an eigenvalue of A if and only if the homogeneous system with A as coefficient matrix has at least one nontrivial solution. Thus 0 is an eigenvalue of A if and only if A is singular, i.e. not invertible.

Let's consider some examples.

Example 1. Let A = [ 1 0 ; 0 2 ]. Ax = λx means that [ 1 0 ; 0 2 ][ x1 ; x2 ] = λ[ x1 ; x2 ], so the eigenvalue condition is x1 = λ x1 and 2 x2 = λ x2. If x1 ≠ 0, then it follows from the first requirement that we must have λ = 1, and then from the second requirement 2 x2 = x2, so that x2 = 0. So λ = 1 is an eigenvalue of A, and the associated eigenspace consists of all vectors in R^2 of the form (x1, 0), with x1 a free variable; so {(1, 0)} is a basis for this eigenspace. But if x1 = 0, then, since x2 can't also be zero (after all, x is assumed not to be the zero vector), it follows from the second requirement that λ = 2. So λ = 2 is the other eigenvalue for A, and the associated eigenspace consists of all vectors in R^2 of the form (0, x2), with x2 a free variable; and {(0, 1)} is a basis for this eigenspace.

In the same way, one can show that for every n by n diagonal matrix the eigenvalues are the diagonal entries, and for each k from 1 through n the column vector e_k is an eigenvector associated with the kth diagonal entry of the matrix.

Example 2. Let A = [ 1 1 ; 0 2 ]. Ax = λx means that [ 1 1 ; 0 2 ][ x1 ; x2 ] = λ[ x1 ; x2 ], so the eigenvalue condition is x1 + x2 = λ x1 and 2 x2 = λ x2. Now the second of the two requirements is simpler than the first one. Either x2 = 0 or x2 ≠ 0. In the first case the first requirement becomes x1 = λ x1, and since x1 ≠ 0 we must have λ = 1. So λ = 1 is an eigenvalue of A, and the associated eigenspace consists of all vectors in R^2 of the form (x1, 0), with x1 a free variable; so {(1, 0)} is a basis for this eigenspace. But if x2 ≠ 0, it follows from the second requirement that λ = 2. So λ = 2 is the other eigenvalue for A. The requirement that x1 + x2 = λ x1 is now x1 + x2 = 2 x1, which simplifies to x2 = x1. Thus the eigenspace associated with λ = 2 consists of all vectors in R^2 of the form (x1, x1), with x1 a free variable; and {(1, 1)} is a basis for this eigenspace.

It seems reasonable to guess that the diagonal entries of every upper triangular square matrix are eigenvalues of the matrix. This turns out to be true, as we shall soon see. But the next example shows that a repeated entry on the diagonal doesn't necessarily come with a matching supply of independent eigenvectors.

Example 3. Let A = [ 2 1 ; 0 2 ]. Ax = λx means that [ 2 1 ; 0 2 ][ x1 ; x2 ] = λ[ x1 ; x2 ], so the eigenvalue condition is (a) 2 x1 + x2 = λ x1 and (b) 2 x2 = λ x2. If x2 = 0, requirement (a) becomes 2 x1 = λ x1, and since x1 ≠ 0 we must have λ = 2. So λ = 2 is an eigenvalue of A. If x2 ≠ 0, then it follows from requirement (b) that λ = 2. So whether x2 = 0 or x2 ≠ 0, λ = 2. So this matrix has just the one eigenvalue, λ = 2. In
order to find the eigenvectors, replace λ by 2 in (a) and (b). They become (a) 2 x1 + x2 = 2 x1 and (b) 2 x2 = 2 x2, which says exactly that x2 = 0 and that x1 is a free variable. So for this matrix, λ = 2 occurs twice algebraically but has only a one-dimensional eigenspace, namely Span{(1, 0)}.

Example 4. Let A = [ 0 1 ; -1 0 ]. It turns out that this matrix has no eigenvalues in the real number system. In fact, Ax = λx means that (a) x2 = λ x1 and (b) -x1 = λ x2. If λ is to be an eigenvalue, at least one of the xi's isn't zero. But if either one were 0, (a) and (b) would imply that both are zero. So neither is. These (a) and (b) are very different from the (a) and (b) in the previous examples, because of the way they are coupled. To unscramble them, we can take the expression for x1 in (b) and use it to express (a) in terms of just x2: namely, x2 = λ x1 = λ(-λ x2) = -λ^2 x2. Since x2 ≠ 0, canceling is justified, so λ^2 = -1. Since the square of a real number cannot be negative, this matrix has no (real) eigenvalues and eigenvectors.

If a matrix is given, is there a systematic way to determine how many eigenvalues it has? Is there an upper bound that limits the number of eigenvalues that an n by n matrix may have? The answers, or partial answers, lie in a function called the characteristic polynomial of the matrix. The matrix equation Ax = λx can be expressed as λx - Ax = 0. This doesn't look like a system of equations, but by expressing x as Ix we may rewrite the eigenvector condition as

λIx - Ax = 0, i.e. (λI - A)x = 0.

The preceding equation represents a homogeneous system of linear equations whose coefficient matrix is λI - A. An eigenvector is a nontrivial solution of this system, and one or more will exist if and only if the coefficient matrix is singular, by the Invertible Matrix Theorem. Therefore:

λ is an eigenvalue of A if and only if the matrix λI - A is not invertible.

This can be made quantitative, since we know that a square matrix is invertible if and only if its determinant is not zero:

λ is an eigenvalue of A if and only if the determinant of λI - A is 0.

In the examples
it is easy to find the determinants. In Example 1,

λI - A = [ λ-1   0  ]
         [  0   λ-2 ]

so the characteristic equation is (λ-1)(λ-2) = 0, whose roots are λ = 1 and λ = 2. In Example 2 the characteristic equation is again (λ-1)(λ-2) = 0, since the determinant of a triangular matrix equals the product of the diagonal entries. In Example 3 the characteristic equation is (λ-2)^2 = 0 and has only the one solution, λ = 2. What goes wrong in Example 4? In that example,

λI - A = [ λ  -1 ]
         [ 1   λ ]

The characteristic determinant is λ^2 + 1, and the characteristic equation λ^2 + 1 = 0 has no solutions in the real number system.

In general, the eigenvalues of a two by two matrix can be found by solving its characteristic equation, using the quadratic formula if necessary. For other matrices of fairly small size, if the characteristic equation det(λI - A) = 0 can be solved, either by factoring or by numerical approximation of the roots, we can determine the eigenvalues either exactly or approximately. And in the numerical analysis of matrices, many indirect methods have been devised for finding the eigenvalues of a matrix to a good degree of accuracy; indeed, one of the classic books on numerical linear algebra is called The Algebraic Eigenvalue Problem. But that's for another course, not this one.

What kind of equation is the characteristic equation det(λI - A) = 0? The diagonal entries of the matrix λI - A are of the form λ - a_kk, and one of the terms in the sum which equals the determinant is the product of these diagonal entries; all the other terms are products of fewer than n of the diagonal entries multiplied by certain entries of -A. So det(λI - A) = 0 is a polynomial equation whose term of highest degree is λ^n. A polynomial equation of degree n has at most n different solutions, so every n by n matrix has no more than n different eigenvalues.
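The computations in these notes can all be checked by machine. The following is a small sketch, assuming Python with the sympy library is available; the matrices are copied from the examples above, but the variable names and the particular library calls are my own additions, not part of the course materials.

```python
# Machine-check of the worked examples above, using sympy's exact
# (symbolic) arithmetic rather than floating point.
from sympy import I, Matrix, symbols

lam = symbols('lambda')

# Section 4.3 Practice Problem 2 matrix: row rank equals column rank
# (The Rank Theorem), and both equal 2 here.
A = Matrix([[1, 6, 2, -4],
            [-3, 2, -2, -8],
            [4, -1, 3, 9]])
row_rank = A.T.rank()   # dim Row A = rank of the transpose
col_rank = A.rank()     # dim Col A
assert row_rank == col_rank == 2

# Section 4.5 Exercise 12 matrix: row rank 3, nullity 1, and the
# reduced row echelon form and null-space basis computed above.
M = Matrix([[1, -3, -8, -3],
            [-2, 4, 6, 0],
            [0, 1, 5, 7]])
R, pivots = M.rref()
assert R == Matrix([[1, 0, 7, 0],
                    [0, 1, 5, 0],
                    [0, 0, 0, 1]])
assert len(pivots) == 3                          # row rank 3
assert M.nullspace() == [Matrix([-7, -5, 1, 0])] # nullity 1

# Examples 1 and 2: the eigenvalues are the diagonal entries 1 and 2
# (eigenvals() maps each eigenvalue to its algebraic multiplicity).
E1 = Matrix([[1, 0], [0, 2]])
E2 = Matrix([[1, 1], [0, 2]])
assert E1.eigenvals() == {1: 1, 2: 1}
assert E2.eigenvals() == {1: 1, 2: 1}

# Example 3: lambda = 2 occurs twice algebraically but its eigenspace
# is one-dimensional, spanned by (1, 0).
E3 = Matrix([[2, 1], [0, 2]])
(val, mult, basis), = E3.eigenvects()
assert (val, mult) == (2, 2)
assert basis == [Matrix([1, 0])]

# Example 4: characteristic polynomial lambda**2 + 1, so no real
# eigenvalues; the complex eigenvalues are +i and -i.
E4 = Matrix([[0, 1], [-1, 0]])
assert E4.charpoly(lam).as_expr() == lam**2 + 1
assert set(E4.eigenvals()) == {I, -I}

# The characteristic polynomial of Example 2, computed directly as
# det(lambda*I - A): it factors as (lambda - 1)(lambda - 2).
p = (lam * Matrix.eye(2) - E2).det()
assert (p - (lam - 1) * (lam - 2)).expand() == 0
assert (p - E2.charpoly(lam).as_expr()).expand() == 0
```

Since every assertion passes silently, the script doubles as a regression test for the hand computations; sympy's rref() returns the canonical reduced echelon form, so it also confirms the pivot count used in the Rank Theorem argument.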