MATRIX ALGEBRA & APPLS (MA 322)
This 8-page set of class notes was uploaded by Kennith Herman on Friday, October 23, 2015. The notes belong to MA 322 at the University of Kentucky, taught by Staff in Fall.
MA 322 (08/11/17): Inner product, norm, angle. Notes on Chapter 6.

Given a real vector space V, an inner product is defined to be a bilinear map F: V × V → ℝ such that the following hold:

- For all v1, v2 in V we have F(v1, v2) = F(v2, v1). (Commutativity)
- For all v1, v2, v3 in V we have F(v1, v2 + v3) = F(v1, v2) + F(v1, v3). (Distributivity)
- For all v1, v2 in V and c in ℝ we have F(c v1, v2) = c F(v1, v2). (Scalar multiplicativity)
- For all v in V we have F(v, v) ≥ 0. Moreover, F(v, v) = 0 iff v = 0. (Positivity)

Notation: we usually do not use a name like F, but write ⟨v, w⟩ in place of F(v, w). Often we also just write v · w and call it a "dot" product.

Warning: many books define a more general inner product where the last property (positivity) is not assumed at the beginning, but is imposed later on because it is essential for the definitions of angles and lengths.

We now use the shortened notation ⟨ , ⟩ for an inner product and define:

- ||v||² = ⟨v, v⟩, or ||v|| = sqrt(⟨v, v⟩). This is the length of the vector v for the chosen inner product, so strictly speaking it should carry a marker indicating the inner product. Here using a function name F helps us put such a marker and write ||v||_F.

- It can be proved that for any two vectors v, w we have |⟨v, w⟩| ≤ ||v|| ||w|| (Cauchy-Schwarz inequality). Moreover, we get equality iff v, w are linearly dependent. Further, if v, w are nonzero vectors, then |⟨v, w⟩| = ||v|| ||w|| implies that one of the following two things happens: either ⟨v, w⟩ = ||v|| ||w||, in which case v, w are positive multiples of each other (or can be considered to be in the same direction), or ⟨v, w⟩ = −||v|| ||w||, in which case v, w are negative multiples of each other (or can be considered to be in the opposite direction).

- We define the angle between nonzero vectors v, w by

      θ(v, w) = arccos( ⟨v, w⟩ / (||v|| ||w||) ).

  The Cauchy-Schwarz inequality guarantees that we get a meaningful angle between 0 and 180 degrees.

Warning: one should not lose sight of the fact that this is dependent on the chosen inner product, and as before a marker F can be attached if necessary.
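As a quick illustration (a sketch, not part of the notes), the usual dot product on ℝⁿ, the associated norm, and the angle formula above can be computed as follows; the function names dot, norm, and angle are our own:

```python
import math

def dot(v, w):
    # usual inner product on R^n: <v, w> = v_1 w_1 + ... + v_n w_n
    return sum(x * y for x, y in zip(v, w))

def norm(v):
    # ||v|| = sqrt(<v, v>)
    return math.sqrt(dot(v, v))

def angle(v, w):
    # theta(v, w) = arccos(<v, w> / (||v|| ||w||)), returned in degrees;
    # the ratio is clamped to [-1, 1] to guard against rounding error
    r = dot(v, w) / (norm(v) * norm(w))
    return math.degrees(math.acos(max(-1.0, min(1.0, r))))

print(angle([1, 0], [0, 1]))   # -> 90.0
print(angle([1, 1], [2, 2]))   # same direction: (close to) 0
```

The clamp matters in practice: for parallel vectors, floating-point rounding can push the ratio slightly past 1, which would make arccos fail.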
Examples. Here are some examples of inner products in known vector spaces.

- The most common example is in ℝⁿ. We define ⟨u, v⟩ = uᵀv. This gives the usual dot product. It is obvious that ||v|| corresponds to the usual length of a vector, and for n = 2, 3 direct calculations can verify the angles to be consistent with the usual convention.

- Still in ℝⁿ, a more general inner product can be defined by a symmetric n × n matrix A, by defining F(u, v) = ⟨u, v⟩ = uᵀ A v. We may write ⟨u, v⟩_A as a shortened notation, or as an alternative drop all special references to A if no confusion follows. A random choice of A will not satisfy the positivity condition. It can be shown that a necessary and sufficient condition for a symmetric matrix A to define an inner product is that all its principal minors be positive; this means all the determinants using the first few entries of the main diagonal are positive.

- If we go to the space of polynomials P_n, or even P, the infinite-dimensional space, then we can define an inner product ⟨p, q⟩ = ∫₀¹ p(t)q(t) dt. Clearly, the interval can be changed to other finite intervals, leading to different inner products.

- The above example can be generalized to define an inner product on the space C[a, b], which is the space of continuous functions on the interval [a, b]. The inner product is defined as F(f, g) = ∫ₐᵇ f(t)g(t) dt.

- In the space P_n of polynomials, define an inner product thus: choose a set of distinct numbers a_0, a_1, …, a_n and define ⟨p(t), q(t)⟩ = p(a_0)q(a_0) + p(a_1)q(a_1) + … + p(a_n)q(a_n). This defines an inner product. A little thought shows that the map p(t) ↦ (p(a_0), …, p(a_n)) is an isomorphism of P_n onto ℝⁿ⁺¹, and all we are doing is using the usual inner product in the target space ℝⁿ⁺¹ to define our inner product. This is a usual method of building new inner products.
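Two of these example inner products can be sketched in code. Here ip_A and ip_eval are hypothetical helper names, and the matrix A and evaluation points are made up for illustration:

```python
def ip_A(u, v, A):
    # <u, v>_A = u^T A v for a symmetric matrix A (all leading principal
    # minors must be positive for this to be a genuine inner product)
    n = len(u)
    return sum(u[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

def ip_eval(p, q, points):
    # evaluation inner product on P_n: choose distinct a_0, ..., a_n and set
    # <p, q> = p(a_0)q(a_0) + ... + p(a_n)q(a_n)
    return sum(p(a) * q(a) for a in points)

A = [[2, 1], [1, 3]]             # leading principal minors 2 and 5 are positive
print(ip_A([1, 0], [0, 1], A))   # -> 1

p = lambda t: t                  # p(t) = t
q = lambda t: t * t              # q(t) = t^2
print(ip_eval(p, q, [0, 1, 2]))  # 0*0 + 1*1 + 2*4 -> 9
```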
Orthogonal sets, coordinate vectors, projections. Given an inner product ⟨ , ⟩ on a vector space V, we say that a set of vectors v_1, …, v_r is orthogonal if for any i ≠ j we have ⟨v_i, v_j⟩ = 0. It is easily seen that a set of nonzero orthogonal vectors is linearly independent.

Proof: suppose v_1, …, v_r are nonzero orthogonal vectors and c_1 v_1 + … + c_r v_r = 0. Take the inner product of both sides with some v_i to get

    c_1⟨v_i, v_1⟩ + … + c_i⟨v_i, v_i⟩ + … + c_r⟨v_i, v_r⟩ = ⟨v_i, 0⟩ = 0.

Clearly all but the term c_i⟨v_i, v_i⟩ are zero. Moreover, ⟨v_i, v_i⟩ ≠ 0, so c_i = 0. Thus each c_i is zero and we have proved independence of our vectors. This is the most important reason to study and use the inner product.

The set of vectors v_1, …, v_r is said to be orthonormal if it is orthogonal and also ⟨v_i, v_i⟩ = 1 for all i. This last condition means that ||v_i|| = 1 for each i = 1, …, r. Vectors with norm (length) equal to 1 are said to be unit vectors. Note that given any nonzero vector v, the vector ±v/||v|| is always a unit vector. Moreover, if we take the plus sign, then it is in the same direction as v, and it is in the opposite direction if we use the minus sign. This gives a simple but useful observation: every nonzero vector v is of the form c u, where u is a unit vector and c = ±||v||.

If we have a set of n nonzero orthogonal vectors v_1, …, v_n in an n-dimensional vector space V, then, in view of the above result, they clearly form a basis B = (v_1 v_2 … v_n) of V. Moreover, for any vector v ∈ V, it is easy to find its coordinate vector [v]_B as follows. Suppose we write v = c_1 v_1 + … + c_n v_n. By taking the inner product with v_i and using the same reasoning as above, we see that ⟨v_i, v⟩ = c_i⟨v_i, v_i⟩, and thus c_i = ⟨v_i, v⟩/⟨v_i, v_i⟩. This defines the coordinate vector

    [v]_B = ( ⟨v_1, v⟩/⟨v_1, v_1⟩, …, ⟨v_n, v⟩/⟨v_n, v_n⟩ ).

One of the main goals of linear algebra is to give efficient methods to solve linear equations AX = B. In general, if there are more equations than variables (i.e., A has more rows than columns), then solutions may not exist. However, in many scientific and statistical applications, it makes sense to ask for an answer which makes the equation as close to true as possible.

If we have an inner product in our vector space, then we can reformulate the problem of solving AX = B as: find a vector w such that ||B − Aw|| is as small as possible. This can be shown to be equivalent to finding a w such that B − Aw is orthogonal to each column of A.
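The coordinate computation with respect to an orthogonal basis, described above, can be sketched as follows (a sketch using the usual dot product on ℝ³; the basis B and vector v are made-up examples):

```python
def coordinates(v, basis, ip):
    # c_i = <v_i, v> / <v_i, v_i> for an orthogonal basis (v_1, ..., v_n)
    return [ip(b, v) / ip(b, b) for b in basis]

dot = lambda v, w: sum(x * y for x, y in zip(v, w))  # usual inner product

B = [[1, 1, 0], [1, -1, 0], [0, 0, 1]]  # an orthogonal basis of R^3
v = [3, 1, 2]
c = coordinates(v, B, dot)
print(c)  # -> [2.0, 1.0, 2.0]

# sanity check: c_1 v_1 + c_2 v_2 + c_3 v_3 reproduces v
recon = [sum(ci * bi[k] for ci, bi in zip(c, B)) for k in range(3)]
print(recon)  # -> [3.0, 1.0, 2.0]
```

Note that no division by a Gram matrix is needed: orthogonality is exactly what lets each coordinate be read off independently.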
If we are using the usual inner product in ℝᵐ, then this is easily seen to be guaranteed by the normal equations:

    AᵀA w = AᵀB.

From the properties of the inner product, we can show that if the columns of A are independent, then the matrix AᵀA is invertible (see the proof below). Using this, we get a formal solution w = (AᵀA)⁻¹AᵀB. The vector Aw so obtained is geometrically the projection of the vector B into the space Col A.

Proof that AᵀA is invertible: suppose, if possible, that AᵀA is singular. Then there is a nonzero vector u such that AᵀAu = 0. Then ⟨Au, Au⟩ = (Au)ᵀ(Au) = uᵀAᵀAu = 0. Hence Au = 0. But since the columns of A are independent, this implies u = 0, a contradiction!

Associated spaces. Given an m × n matrix A, we know the two associated spaces Col A and Nul A, which are respectively subspaces of ℝᵐ and ℝⁿ. If we use the transpose Aᵀ instead, then we get two other spaces: Col Aᵀ, which we call Row A or the row space of A, and also Nul Aᵀ, sometimes called the left null space of A. Note that Row A is a subspace of ℝⁿ and consists of the rows of A transposed into column vectors. Similarly, Nul Aᵀ is a subspace of ℝᵐ consisting of all column vectors X such that AᵀX = 0. Taking transposes, we see that these correspond to row vectors Xᵀ such that XᵀA = 0; hence the name "left null space".

The concept of inner product gives another meaning to these. Thus the left null space Nul Aᵀ can be thought of as all vectors orthogonal to all vectors of Col A. In general, we define the orthogonal subspace to a given subspace W as

    W^⊥ = { v | ⟨v, w⟩ = 0 for all w ∈ W }.

It is not hard to see that (W^⊥)^⊥ = W for any subspace W. Thus we note that Col A = (Nul Aᵀ)^⊥. This expresses the starting space Col A as a null space of some other matrix. This was the basis of our results on writing a column space as a null space or, conversely, writing a null space as a column space. Similarly, we can describe Row A as (Nul A)^⊥.

It is easy to see that for any subspace W of V we have dim W + dim W^⊥ = dim V. This is another formulation of the fundamental dimension theorem.
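The normal-equations recipe AᵀAw = AᵀB can be illustrated on a small overdetermined system (a sketch with hand-rolled helpers; solve2 uses Cramer's rule, which suffices for this 2-unknown example):

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def solve2(M, b):
    # Cramer's rule for a 2x2 system M w = b (enough for this sketch)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

# Overdetermined system: 3 equations, 2 unknowns (fitting a line
# intercept + slope*x through the points (0,1), (1,2), (2,2)).
A = [[1, 0], [1, 1], [1, 2]]
B = [1, 2, 2]
At = transpose(A)
AtA = matmul(At, A)                                       # A^T A
AtB = [sum(r * x for r, x in zip(row, B)) for row in At]  # A^T B
w = solve2(AtA, AtB)
print(w)  # -> [1.1666666666666667, 0.5]
```

The solution w = (7/6, 1/2) makes B − Aw orthogonal to both columns of A, which is exactly the least-squares condition described above.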
Proof: write W = Col A for some m × n matrix A, so that W is a subspace of ℝᵐ. We know that dim W = rank A. Then

    W^⊥ = { Y | ⟨w, Y⟩ = 0 for all w ∈ W }.

Since ⟨w, Y⟩ = wᵀY, we see that W^⊥ = Nul Aᵀ, and we know that its dimension is m − rank Aᵀ = m − rank A. Thus we have proved that dim W + dim W^⊥ = rank A + (m − rank A) = m.

Orthonormal bases. Suppose that we have a vector space V with an inner product and a given subspace W. The above results make it clear that we would greatly benefit if, given any basis (or even a spanning set) of the subspace W, we could find a suitable orthogonal (or even orthonormal) basis for W from the given set. This can be accomplished by a slight modification of our row reduction algorithm. This is a way of codifying the Gram-Schmidt process discussed in the book. We show the method below, which is not in the book.

IP matrix. Suppose that v_1, …, v_r is a spanning set for W. The first step is to make a matrix M such that M_{ij} = ⟨v_i, v_j⟩ for all i, j = 1, …, r. Note that M is a symmetric r × r matrix, and we can think of M as ⟨B, B⟩ where B is the row of vectors (v_1 v_2 … v_r). This is said to be the IP (inner product) matrix of the spanning set B.

If we replace B by linear combinations of v_1, …, v_r, then we can think of the new set of vectors as BP, where P is the matrix describing the combinations. If P is invertible, then the vectors of BP form a new spanning set for the same space W, and its IP matrix is PᵀMP. We shall show that there is an invertible matrix R such that RᵀMR is a diagonal matrix. It follows that the new generating set BR consists of orthogonal vectors. If the original vectors of B were independent, then the new vectors BR will indeed be an orthogonal basis. Moreover, in this case, the matrix R can be chosen to be unit upper triangular. This is known as the Gram-Schmidt theorem.
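As a sketch (not code from the book), the paired row/column reduction of an IP matrix can be implemented with exact fractions. The name symmetric_reduce is our own; the function assumes a zero pivot only occurs on a row that has already become zero, as in the worked examples below. It is run here on the IP matrix of Example 1:

```python
from fractions import Fraction

def symmetric_reduce(M):
    # Diagonalize a symmetric IP matrix M by row operations, each
    # immediately followed by the matching column operation. The row
    # operations are tracked in T; at the end R = T^T is unit upper
    # triangular and R^T M R = D is diagonal.
    n = len(M)
    M = [[Fraction(x) for x in row] for row in M]
    T = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        if M[k][k] == 0:
            continue  # zero pivot: the k-th new vector is already zero
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(n):      # row operation R_i <- R_i - f R_k
                M[i][j] -= f * M[k][j]
                T[i][j] -= f * T[k][j]
            for j in range(n):      # matching column op C_i <- C_i - f C_k
                M[j][i] -= f * M[j][k]
    R = [[T[j][i] for j in range(n)] for i in range(n)]  # R = T^T
    return M, R

# IP matrix of Example 1 below
D, R = symmetric_reduce([[2, 1, 1], [1, 5, 1], [1, 1, 2]])
print([D[i][i] for i in range(3)])  # -> [Fraction(2, 1), Fraction(9, 2), Fraction(13, 9)]
print(R[0])                         # -> [Fraction(1, 1), Fraction(-1, 2), Fraction(-4, 9)]
```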
The Algorithm (Example 1). Start with the IP matrix M of a spanning set v_1, …, v_r for W. Let I be the usual identity matrix. Set A = (M | I), the augmented matrix as usual. Perform the usual row reductions on M to try and convert it to REF; however, every time you do a row transformation, immediately follow it with a corresponding column transformation.

Here is an example of three vectors v1, v2, v3 whose IP matrix is

    M = [ 2 1 1
          1 5 1
          1 1 2 ]

Note that we did not mention which vector space this is, and the point is that we need not know it. We just need the IP matrix for the given spanning set. The same matrix augmented with the identity matrix is

    A = [ 2 1 1 | 1 0 0
          1 5 1 | 0 1 0
          1 1 2 | 0 0 1 ]

We make the row transformation R2 − (1/2)R1 and immediately follow it with C2 − (1/2)C1 to get

    [ 2   0    1  |   1   0 0
      0  9/2  1/2 | −1/2  1 0
      1  1/2   2  |   0   0 1 ]

Now we do R3 − (1/2)R1 followed by C3 − (1/2)C1 to get

    [ 2   0    0  |   1   0 0
      0  9/2  1/2 | −1/2  1 0
      0  1/2  3/2 | −1/2  0 1 ]

One more pair of steps, R3 − (1/9)R2 followed by C3 − (1/9)C2, gives

    [ 2   0    0   |   1    0   0
      0  9/2   0   | −1/2   1   0
      0   0  13/9  | −4/9 −1/9  1 ]

The first 3 × 3 part is now a diagonal matrix, and the second 3 × 3 part is recording the change of basis. Precisely, the second part is Rᵀ, so the new orthogonal basis is given by

    ( w1 w2 w3 ) = ( v1 v2 v3 ) [ 1 −1/2 −4/9
                                  0   1  −1/9
                                  0   0    1  ]

We are thus claiming that w1 = v1, w2 = v2 − (1/2)v1, and w3 = v3 − (1/9)v2 − (4/9)v1 give a new orthogonal basis for W = Span(v1, v2, v3). If we need this to be orthonormal, we may further divide each vector by its length. The squared lengths are visible in the final matrix as 2, 9/2, and 13/9, so the lengths are √2, √(9/2), and √(13/9) respectively. It is instructive to check that this matches the Gram-Schmidt process.

Here is another example, where the starting vectors are not independent.

Example 2. Now let the starting vectors v1, v2, v3 have the following IP matrix:

    M = [ 2 1 3
          1 5 6
          3 6 9 ]

As before, we make the augmented matrix

    A = [ 2 1 3 | 1 0 0
          1 5 6 | 0 1 0
          3 6 9 | 0 0 1 ]

Perform the following operations and verify the results.

- Perform R2 − (1/2)R1 and C2 − (1/2)C1 to get

    [ 2   0    3  |   1   0 0
      0  9/2  9/2 | −1/2  1 0
      3  9/2   9  |   0   0 1 ]

- We give the next two steps without mentioning the operations; figure out the operations.

    [ 2   0    0  |   1   0 0
      0  9/2  9/2 | −1/2  1 0
      0  9/2  9/2 | −3/2  0 1 ]

    [ 2   0   0 |   1   0 0
      0  9/2  0 | −1/2  1 0
      0   0   0 |  −1  −1 1 ]

Note that the third new vector has norm zero and hence it is zero: the process indicates that the third vector is w3 = v3 − v2 − v1 = 0, and thus it identifies the linear dependence relation v3 = v1 + v2.
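As a quick check of Example 2 (a sketch; matmul is a hand-rolled helper), multiplying out RᵀMR with the recorded change-of-basis matrix R reproduces the diagonal matrix, with the zero entry signalling the dependence:

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

M = [[2, 1, 3], [1, 5, 6], [3, 6, 9]]   # IP matrix of Example 2
R = [[1, F(-1, 2), -1],                 # recorded change of basis (R^T is
     [0, 1, -1],                        # the right half of the final matrix)
     [0, 0, 1]]
Rt = [list(col) for col in zip(*R)]
D = matmul(Rt, matmul(M, R))            # IP matrix of the new vectors BR
print(D[0][0], D[1][1], D[2][2])        # -> 2 9/2 0
```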
- We can now conclude that our vector space Span(v1, v2, v3) is actually two-dimensional, with w1 = v1 and w2 = v2 − (1/2)v1 as an orthogonal basis. The lengths of w1, w2 are √2 and √(9/2) respectively.

Summary of topics to study:

- Calculations of inner products, lengths, angles, and unit vectors in given directions. Learn to work with any given inner product (material from Chapter 6).
- Orthogonal projection into a subspace using the normal equations. Do this when the subspace has an orthogonal basis.
- Checking for orthogonal or orthonormal vectors.
- The Gram-Schmidt process as described above.
- Future work: projection into a subspace whose basis may not be orthogonal.
- Future work: various fitting techniques.

Summary from Chapter 5:

- Review the definitions of eigenvalues, eigenvectors, and eigenspaces.
- Learn how to efficiently calculate the characteristic polynomials.
- Given bases for eigenspaces of a matrix A, put together a basis for the whole vector space in which multiplication by A is given by a diagonal matrix; know the condition when this is possible.
- Diagonalize a given matrix, when possible.
- Learn a standard form of a 2 × 2 matrix when it has complex eigenvalues.