Abstract Linear Algebra
Abstract Linear Algebra MATH 4373
These 57 pages of class notes were uploaded by Mason Larson DDS on Monday, October 26, 2015. The notes belong to MATH 4373 at the University of Oklahoma, taught by Staff in Fall. Since their upload, they have received 63 views. For similar materials see /class/229295/math-4373-university-of-oklahoma in Mathematics (M) at the University of Oklahoma.
Abstract Linear Algebra, Math 4373, Fall 2000
Noel Brady
August 7, 2000

Contents

1 Basic Theory
  1.1 Vector Spaces
  1.2 Linear Transformations and Coordinates
  1.3 Linear Functionals and Duality
  1.4 Determinants
2 Structure of Linear Operators
  2.1 Eigenvalues and Eigenvectors, Diagonalization
  2.2 Annihilating Polynomials, Hamilton-Cayley Theorem, Primary Decomposition Theorem
  2.3 Jordan and Rational Canonical Forms
3 Inner Product Spaces
  3.1 Inner Product Spaces
  3.2 Diagonalization and Spectral Theorem
4 Miscellaneous Topics
  4.1 Introduction to Linear Groups and Geometry

Chapter 1: Basic Theory

This chapter reviews the basic notions of vector space and linear transformation that you encountered in Math 3333. There is a somewhat more abstract, general perspective this time around, however: we work with vector spaces over an arbitrary field, rather than just the real or complex number field. Most of the topics should seem familiar if you recall your Math 3333 course notes. Topics include: vector spaces, subspaces, direct sums, bases, dimension, coordinates, linear transformations, matrices, change of bases and similar matrices, rank, nullity, linear functionals, dual spaces, transposes and adjoints, determinants, and matrix inverses.

1.1 Vector Spaces

Definition 1.1.1. A field is a set K together with two operations, multiplication (denoted by juxtaposition) and addition (denoted by +), which satisfy the following axioms:

1. $x + y = y + x$ for all $x, y \in K$.
2. $(x + y) + z = x + (y + z)$ for all $x, y, z \in K$.
3. There exists a unique zero element $0 \in K$ such that $0 + x = x$ for all $x \in K$.
4. For every $x \in K$ there exists a negative $-x \in K$ such that $x + (-x) = 0$.
5. $xy = yx$ for all $x, y \in K$.
6. $(xy)z = x(yz)$ for all $x, y, z \in K$.
7. There is a unique unit element $1 \in K \setminus \{0\}$ such that $1x = x$ for all $x \in K$.
8. For every $x \in K \setminus \{0\}$ there is an inverse $x^{-1} \in K$ such that $xx^{-1} = 1$.
9. $x(y + z) = xy + xz$ for all $x, y, z \in K$.

Examples 1.1.2. Examples of fields include $\mathbb{Q}$, $\mathbb{R}$, $\mathbb{C}$, $\mathbb{Z}_p$ ($p$ prime), and the field of constructible numbers. We have some inclusions: $\mathbb{Q} \subset$ (constructible numbers) $\subset \mathbb{R} \subset \mathbb{C}$. The classical question of duplicating the cube becomes the question of whether or not $\sqrt[3]{2}$ belongs to the field of constructible numbers. Non-examples include $\mathbb{Z}$, $\mathbb{Z}_n$ ($n$ not prime), $\mathbb{N}$, and the quaternions $\mathbb{H}$.

Remark 1.1.3. Which field has the most elements, Q or R? To answer this question we must think about the cardinality of sets.

- Motivation: early counting methods. Say that sets X and Y have the same cardinality if there exists a bijection X → Y. Denote the cardinality of X by Card(X). Say that Card(X) ≤ Card(Y) if there is an injective map X → Y.
- Schroeder-Bernstein Theorem: if Card(X) ≤ Card(Y) and Card(Y) ≤ Card(X), then Card(X) = Card(Y). (Dynamical-systems-style proof.)
- Diagonal counting techniques: Z, N × N, and Q all have the same cardinality as N.
- R has the same cardinality as (0, 1), and so has strictly larger cardinality than Q.
- The power set $2^X$ has strictly greater cardinality than the original set X (pretty Cantor argument by contradiction).
- Show that (0, 1) and $2^{\mathbb{N}}$ have the same cardinality (dyadic expansion of reals).
- R has the same cardinality as R × R (via $2^{\mathbb{N}}$ and dyadic expansion of reals), and hence as C.

Definition 1.1.4. Let K be a field. A K-vector space is a set V together with an operation
$$V \times V \to V, \quad (v, w) \mapsto v + w,$$
called vector addition, such that (V, +) is an abelian group, and an operation
$$K \times V \to V, \quad (k, v) \mapsto kv,$$
called scalar multiplication, satisfying:

1. $k(u + v) = ku + kv$ for all $k \in K$ and all $u, v \in V$.
2. $(k + l)u = ku + lu$ for all $k, l \in K$ and all $u \in V$.
3. $(kl)u = k(lu)$ for all $k, l \in K$ and all $u \in V$.
4. $1u = u$ for all $u \in V$, where $1 \in K$ is the unit element.

Elements of V are called vectors, and elements of K are called scalars.

Examples 1.1.5. Some examples of vector spaces. Throughout these examples, K is any field.

1. $\mathbb{R}^3$ from Calc III.
2. More generally, the space $K^n$ of n-tuples, with addition defined by $(k_1, \dots, k_n) + (l_1, \dots, l_n) = (k_1 + l_1, \dots, k_n + l_n)$ and scalar multiplication defined by $l(k_1, \dots, k_n) = (lk_1, \dots, lk_n)$, is a K-vector space.
3. $K^{m \times n}$, under the usual addition and scalar multiplication of m × n matrices.
4. Let S be any set. Then the set $K^S = \{f \mid f : S \to K \text{ is a function}\}$, with addition defined for $f, g \in K^S$ by $(f + g)(s) = f(s) + g(s)$ for all $s \in S$, and scalar multiplication defined for $f \in K^S$ and $k \in K$ by $(kf)(s) = k\,f(s)$ for all $s \in S$.
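The notes contain no code; as an illustration of the field axioms, the following Python sketch (the helper name `has_inverses` is ours, not from the notes) checks the one axiom that distinguishes $\mathbb{Z}_p$ ($p$ prime) from $\mathbb{Z}_n$ ($n$ composite): every nonzero element must have a multiplicative inverse.

```python
# Hypothetical helper (not from the notes): Z_n satisfies the multiplicative
# inverse axiom of Definition 1.1.1 exactly when n is prime.
def has_inverses(n):
    """True if every nonzero element of Z_n has a multiplicative inverse mod n."""
    return all(any((a * b) % n == 1 for b in range(1, n))
               for a in range(1, n))

print(has_inverses(5))  # True: Z_5 is a field
print(has_inverses(6))  # False: 2, 3, 4 have no inverse mod 6
```

The remaining field axioms (associativity, commutativity, distributivity) hold in every $\mathbb{Z}_n$; invertibility is the one that fails for composite n.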
5. The set of continuous functions C[a, b] on the closed interval [a, b] ⊂ R, under the usual definitions of addition and scalar multiplication of functions.
6. The set K[x] of polynomials with coefficients in K, under the usual addition and scalar multiplication of polynomials.
7. If K and L are fields and K ⊂ L, then L may be thought of as a K-vector space.

Definition 1.1.6. Let $v, u_1, \dots, u_n$ be elements of a K-vector space. Say that v is a linear combination of $u_1, \dots, u_n$ if there are scalars $k_1, \dots, k_n \in K$ such that
$$v = k_1 u_1 + \cdots + k_n u_n = \sum_{i=1}^n k_i u_i.$$

Examples 1.1.7. Linear combination coefficients are fairly easy to work out.

1. Show that (1, 0) is a linear combination of (1, 1) and (2, 1) in R².
2. Show that every function R → R is a linear combination of an odd function and an even function.

Definition 1.1.8. Let V be a K-vector space. A subset U ⊂ V is called a K-vector subspace of V if:

1. U is nonempty;
2. U is closed under addition: $u_1, u_2 \in U$ implies $u_1 + u_2 \in U$;
3. U is closed under scalar multiplication: $u \in U$ and $k \in K$ implies $ku \in U$.

Note that U is itself a K-vector space.

Examples 1.1.9. There are many naturally occurring examples of subspaces.

1. Let $u \in \mathbb{R}^3$. Then $\mathbb{R}u = \{ku \mid k \in \mathbb{R}\}$ is a subspace of $\mathbb{R}^3$.
2. $C^\infty(\mathbb{R}) \subset \cdots \subset C^1(\mathbb{R}) \subset C(\mathbb{R}) \subset \mathbb{R}^{\mathbb{R}}$, each a subspace of the next.
3. The set of solutions to the differential equation $y'' + \lambda^2 y = 0$, inside $C^\infty(\mathbb{R})$.
4. The set of all odd (resp. even) functions R → R.
5. Let $u_1, \dots, u_n \in V$, where V is a K-vector space. Then
$$S(u_1, \dots, u_n) = \left\{ \sum_{i=1}^n k_i u_i \;\middle|\; k_i \in K \right\}$$
is called the subspace of V spanned (or generated) by $u_1, \dots, u_n$. It is just the subspace of all linear combinations of $u_1, \dots, u_n$.
6. The collection of all symmetric n × n matrices (resp. skew-symmetric matrices) is a subspace of $K^{n \times n}$.
7. The collection of all self-adjoint (hermitian) matrices is a real subspace of $\mathbb{C}^{n \times n}$, but is not a complex subspace.
8. The solution set of the homogeneous system $Ax = 0$, where A is an m × n matrix, x is an n × 1 vector, and 0 is an m × 1 vector.
9. The intersection of a family of subspaces of V is again a subspace of V.
10. The space of polynomials of degree at most n is a subspace of K[x].
11. If $W_1, W_2$ are subspaces of the vector space V, then their sum, defined by
$$W_1 + W_2 = \{w_1 + w_2 \mid w_i \in W_i,\ i = 1, 2\},$$
is a subspace of V.

Definition 1.1.10. Let V be a K-vector space. The elements $u_1, \dots, u_n \in V$ are said to be linearly independent if
$$k_1 u_1 + \cdots + k_n u_n = 0 \quad \text{implies} \quad k_1 = 0, \dots, k_n = 0.$$
We say that $u_1, \dots, u_n$ are linearly dependent if they are not linearly independent. Equivalently, we can say explicitly what it means for the collection of vectors $u_1, \dots, u_n$ to be linearly dependent: namely, there exist scalars $k_1, \dots, k_n$, not all zero, such that
$$\sum_{i=1}^n k_i u_i = 0.$$

Examples 1.1.11. Here are some collections of vectors, some linearly independent and some not. Can you say which is which?

1. (1, 1) and (2, 1).
2. (1, 1), (1, 0), and (2, 1).
3. cos(4x) and sin(4x) in C(R).
4. A non-zero odd function and a non-zero even function in $\mathbb{R}^{\mathbb{R}}$.

Definition 1.1.12. Say that the set $u_1, \dots, u_n$ generates the vector space V if $S(u_1, \dots, u_n) = V$.

Definition 1.1.13. We say that the set $u_1, \dots, u_n$ is a basis for the vector space V if:

1. $u_1, \dots, u_n$ are linearly independent;
2. $u_1, \dots, u_n$ generate V.

Examples 1.1.14. Find simple bases for the following spaces.

1. $K^n$.
2. $K^{n \times n}$.
3. The solutions to the equation $y'' + \lambda^2 y = 0$, where λ > 0 is real.
4. The space of polynomials of degree at most n over K.
5. The space of symmetric n × n matrices.
6. The space of skew-symmetric n × n matrices.

Lemma 1.1.15. Let $u_1, \dots, u_n$ be a basis for the K-vector space V, and let $u \in V$. Then there exist uniquely determined scalars $\alpha_1, \dots, \alpha_n \in K$ such that
$$u = \alpha_1 u_1 + \cdots + \alpha_n u_n.$$

Proof. There are two claims: existence and uniqueness. Their proofs involve different properties of a basis. Existence of the $\alpha_i$ follows from the fact that the $u_i$ generate V and $u \in V$. Now for uniqueness. Suppose that there are scalars $\beta_i \in K$ such that
$$\alpha_1 u_1 + \cdots + \alpha_n u_n = \beta_1 u_1 + \cdots + \beta_n u_n.$$
Then we get
$$(\alpha_1 - \beta_1) u_1 + \cdots + (\alpha_n - \beta_n) u_n = 0,$$
and so, by linear independence of the $u_i$, we conclude that $\alpha_i - \beta_i = 0$ for all i. That is, $\alpha_i = \beta_i$ for all i, and so uniqueness is established. □

Definition 1.1.16. Let $u_1, \dots, u_n$ be a basis for the vector space V, and let $u \in V$. The n-tuple of scalars $(\alpha_1, \dots, \alpha_n)$ with the property that $\sum_{i=1}^n \alpha_i u_i = u$ is called the coordinate n-tuple of the vector u with respect to the basis $u_1, \dots, u_n$. We call the $\alpha_i$ the coordinates of u with respect to the basis $u_1, \dots, u_n$.
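Example 1.1.7(1) asks for the coefficients writing (1, 0) as a combination of (1, 1) and (2, 1); this amounts to solving a 2 × 2 linear system. A quick Python sketch (the helper `solve2` is ours, using the standard 2 × 2 inverse formula):

```python
def solve2(a, b, c, d, e, f):
    # Solve [[a, b], [c, d]] @ [x, y] = [e, f] via the 2x2 inverse formula,
    # assuming the determinant a*d - b*c is nonzero.
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Example 1.1.7(1): find k1, k2 with k1*(1,1) + k2*(2,1) = (1,0).
# Columns of the coefficient matrix are (1,1) and (2,1).
k1, k2 = solve2(1, 2, 1, 1, 1, 0)
print(k1, k2)  # -1.0 1.0, i.e. (1,0) = -(1,1) + (2,1)
```

Lemma 1.1.15 guarantees these coefficients are unique, since (1, 1) and (2, 1) form a basis of R².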
Lemma 1.1.17. Let $u_1, \dots, u_n$ be a basis for the K-vector space V. Then the coordinate map
$$V \to K^n, \quad u \mapsto (\alpha_1, \dots, \alpha_n),$$
where the $\alpha_i$ are the coordinates of u with respect to the basis $u_1, \dots, u_n$, is an isomorphism of vector spaces (bijective, and respects addition and scalar multiplication).

Proof. Exercise. □

Theorem 1.1.18. Let V be a K-vector space. If $u_1, \dots, u_n$ is a linearly independent set of vectors in V, and if $v_1, \dots, v_m$ generates V, then n ≤ m. In other words, the cardinality of a linearly independent set is less than or equal to the cardinality of a generating set.

Proof. We know that $u_1 \neq 0$, since it is part of a linearly independent set (verify this!). Now, $v_1, \dots, v_m$ generates V, which implies that
$$u_1 = \alpha_1 v_1 + \cdots + \alpha_m v_m$$
for some scalars $\alpha_i$. Since $u_1 \neq 0$, we know that not all the $\alpha_i$ are zero. Suppose, by reordering the $v_i$ if necessary, that $\alpha_1 \neq 0$. Then we can solve for $v_1$ to get
$$v_1 = \frac{1}{\alpha_1} u_1 - \frac{\alpha_2}{\alpha_1} v_2 - \cdots - \frac{\alpha_m}{\alpha_1} v_m.$$
But this means that $u_1, v_2, \dots, v_m$ generates V. Again, $u_2 \in V$ and $u_1, v_2, \dots, v_m$ generates V, so there are scalars $\beta_1, \dots, \beta_m \in K$ such that
$$u_2 = \beta_1 u_1 + \beta_2 v_2 + \cdots + \beta_m v_m.$$
Note that $\beta_2, \dots, \beta_m$ cannot all be zero, since the set $u_1, u_2$ is linearly independent. We may assume, by relabeling if necessary, that $\beta_2 \neq 0$. As before, we can solve for $v_2$ and conclude that $u_1, u_2, v_3, \dots, v_m$ generates V. (Provide details!) Note that if n > m (the argument is by contradiction here), then we can proceed as above (prove the inductive step) to replace all the $v_i$'s by $u_i$'s, and get that $u_1, \dots, u_m$ generates V. But then we would have, under the assumption that n > m, scalars $\gamma_1, \dots, \gamma_m$ such that
$$u_{m+1} = \gamma_1 u_1 + \cdots + \gamma_m u_m.$$
But this contradicts the linear independence of $u_1, \dots, u_n$. □

Corollary 1.1.19. Let $u_1, \dots, u_n$ and $v_1, \dots, v_m$ be two bases for a vector space V. Then m = n.

Proof. By definition of a basis, we have that $u_1, \dots, u_n$ is linearly independent and that $v_1, \dots, v_m$ generates V. Thus Theorem 1.1.18 implies that n ≤ m. However, we also know that $v_1, \dots, v_m$ is linearly independent and that $u_1, \dots, u_n$ generates V. Now Theorem 1.1.18 gives us m ≤ n. Combining the two inequalities, we get n = m. □

Definition 1.1.20. A vector space is said to be finite dimensional if it has a finite basis. Otherwise it is said to be infinite dimensional. If the K-vector space V is finite dimensional, then the number of elements in any basis of V is called the dimension of V, and is denoted by $\dim_K V$.

Examples 1.1.21. Say which of the following vector spaces are finite dimensional. Compute the dimension in the cases where it is finite.

1. R as an R-vector space.
2. $\mathbb{Q}(\sqrt{2})$ as a Q-vector space.
3. C as an R-vector space.
4. R as a Q-vector space.
5. C as a C-vector space.
6. K[x] as a K-vector space.
7. The set of polynomials of degree at most n, as a K-vector space.
8. $K^{m \times n}$ as a K-vector space.
9. The space of n × n symmetric real matrices, as an R-vector space.
10. The space of n × n skew-symmetric real matrices, as an R-vector space.
11. C[0, 1] as an R-vector space.
12. The space of solutions to the equation $y'' + \lambda^2 y = 0$, where λ > 0 is real, as an R-vector space.
13. The space of solutions to the homogeneous system (remember Math 3333) $A_{m \times n} x_{n \times 1} = 0_{m \times 1}$.
14. We have seen in 1.1.5 that if K ⊂ L are fields, then L is a K-vector space. Suppose that K ⊂ L ⊂ M are fields, that L is a finite dimensional K-vector space, and that M is a finite dimensional L-vector space. Prove that M is a finite dimensional K-vector space, and that $\dim_K M = (\dim_K L)(\dim_L M)$.

Remark 1.1.22. The field of constructible numbers has infinite dimension over Q. However, the rational vector subspace of R which is spanned by a finite set of constructible numbers $\alpha_1, \dots, \alpha_n$ has finite dimension over Q. It can be shown (see a standard text on Abstract Algebra) that $\dim_{\mathbb{Q}} S(\alpha_1, \dots, \alpha_n)$ is always a power of 2. It can also be shown that $\dim_{\mathbb{Q}} S(1, \sqrt[3]{2}, \sqrt[3]{4}) = 3$. Now, if $\sqrt[3]{2}$ is constructible, it belongs to $S(\alpha_1, \dots, \alpha_n)$ for some set $\alpha_1, \dots, \alpha_n$ of constructible numbers. Therefore 1.1.21(14) implies that 3 should divide a power of 2, which is impossible. Thus $\sqrt[3]{2}$ is not constructible. This is one neat application of vector spaces in the resolution of a 2000-year-old problem from geometry: the problem of duplicating the cube.

The next result says that, at least in the finite dimensional case, we can always complete any linearly independent set to a basis, and we can always trim away at any generating set to obtain a basis.
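Two of the dimension counts above (symmetric and skew-symmetric n × n matrices) have closed forms worth recording: a symmetric matrix is determined by its upper triangle including the diagonal, a skew-symmetric one by its strict upper triangle. A small Python sketch (helper names ours, not from the notes):

```python
def dim_symmetric(n):
    # A symmetric matrix is determined by its upper triangle (with diagonal).
    return n * (n + 1) // 2

def dim_skew(n):
    # A skew-symmetric matrix has zero diagonal (characteristic != 2) and is
    # determined by its strict upper triangle.
    return n * (n - 1) // 2

for n in (2, 3, 4):
    # The two dimensions sum to n^2 = dim of the full matrix space.
    assert dim_symmetric(n) + dim_skew(n) == n * n
print(dim_symmetric(3), dim_skew(3))  # 6 3
```

The fact that the two dimensions add up to $n^2$ foreshadows the direct sum decomposition $K^{n \times n} = \mathrm{Symm} \oplus \mathrm{Skew}$ discussed in the next pages.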
Proposition 1.1.23. Let V be a finite dimensional vector space. Let Z be a generating set for V, and let X be a linearly independent subset of Z. Then there exists a basis Y for V such that X ⊂ Y ⊂ Z.

Proof. Let Y be a maximal linearly independent subset of Z which contains X. Complete the proof. Hint: to show generation, it suffices to show that Y generates all the elements of Z. Given $z \in Z \setminus Y$, then $z, y_1, \dots, y_m$ is linearly dependent (why?), and so there exist scalars $\lambda_0, \lambda_1, \dots, \lambda_m$ such that
$$\lambda_0 z + \lambda_1 y_1 + \cdots + \lambda_m y_m = 0.$$
Moreover, $\lambda_0 \neq 0$ by linear independence of Y (why?). Now finish the proof. □

Proposition 1.1.24. Let U be a finite dimensional K-vector space, and let V ⊂ U be a subspace. Then V is finite dimensional, and $\dim_K V \leq \dim_K U$.

Proof. Exercise. □

Definition 1.1.25. Let U and V be K-vector spaces. The external direct sum of U and V is denoted by U ⊕ V, and is defined to be the set U × V together with coordinate-wise addition and scalar multiplication:
$$(u_1, v_1) + (u_2, v_2) = (u_1 + u_2, v_1 + v_2) \quad \text{and} \quad k(u, v) = (ku, kv).$$
Now suppose that U and V are subspaces of the K-vector space W. We say that W is the internal direct sum of U and V if the following properties hold:

- U ∩ V = {0};
- U + V = W.

Examples 1.1.26. Here are some natural examples of direct sums.

1. R² = R ⊕ R.
2. $\mathbb{R}^{\mathbb{R}}$ = Even ⊕ Odd. (Think about the polynomial version of this.)
3. $K^{n \times n}$ = Symm ⊕ Skew.

Proposition 1.1.27. If U and V are finite dimensional K-vector spaces, and if W = U ⊕ V, then W is also finite dimensional, and $\dim_K W = \dim_K U + \dim_K V$.

Proof. Exercise. □

Proposition 1.1.28. Let U and V be subspaces of the K-vector space W. Then the following are equivalent:

1. W is the internal direct sum of U and V.
2. Every element w ∈ W can be written in a unique way as a sum w = u + v, where u ∈ U and v ∈ V.

In this case, W is isomorphic to the external direct sum of U and V. Conversely, if a vector space W is isomorphic to the external direct sum U′ ⊕ V′ of vector spaces U′ and V′, then W can be decomposed as the internal direct sum of subspaces U and V, with U isomorphic to U′ and V isomorphic to V′.

Proof. Exercise. □

Remark 1.1.29. Because of the isomorphism above, we denote any direct sum (internal or external) of U and V by U ⊕ V.

1.2 Linear Transformations and Coordinates

Definition 1.2.1. Let U and V be vector spaces over the field K. A linear transformation (or linear operator) from U to V is a map T : U → V such that

1. $T(u_1 + u_2) = T(u_1) + T(u_2)$ for all $u_1, u_2 \in U$;
2. $T(ku) = kT(u)$ for all $u \in U$ and all scalars $k \in K$.

Examples 1.2.2. Some examples of linear operators.

1. $I_U : U \to U$, $u \mapsto u$ for all $u \in U$.
2. $0 : U \to V$, $u \mapsto 0$ for all $u \in U$.
3. $kI_U : U \to U$, $u \mapsto ku$ for all $u \in U$. Here $k \in K$.
4. Differentiation of polynomials, $D : K[x] \to K[x]$.
5. $A_{m \times n} : K^{n \times 1} \to K^{m \times 1}$, $X \mapsto AX$.
6. $T : K^{m \times n} \to K^{m \times n}$, $A \mapsto P_{m \times m} A Q_{n \times n}$, for given $P \in K^{m \times m}$ and $Q \in K^{n \times n}$.
7. $\mathrm{Int} : C(\mathbb{R}) \to C^1(\mathbb{R})$, $f \mapsto \mathrm{Int}(f)$, where $\mathrm{Int}(f)(x) = \int_0^x f(t)\,dt$.
8. The coordinate map $V \to K^n$, which takes a vector to its coordinates with respect to a basis $u_1, \dots, u_n$ for V.
9. Rotations in R².
10. Reflections in $\mathbb{R}^n$ (start with n = 1).
11. Projections in $\mathbb{R}^n$.
12. The transpose map $K^{m \times n} \to K^{n \times m}$.

Properties 1.2.3. Here are some elementary properties which are satisfied by linear transformations.

1. T : U → V satisfies $T(-u) = -T(u)$ for all $u \in U$.
2. T : U → V satisfies $T(0) = 0$.
3. T preserves collinearity.
4. T preserves parallelograms.
5. T : R → R is linear if and only if its graph is a straight line through the origin.
6. $T\left(\sum_{i=1}^n k_i u_i\right) = \sum_{i=1}^n k_i T(u_i)$.
7. If S, T : U → V are linear, $u_1, \dots, u_n$ generates U, and $S(u_i) = T(u_i)$ for all i, then S = T.
8. Let $u_1, \dots, u_n$ be a basis for U, and let $w_i \in V$. Then there exists a unique linear mapping T : U → V such that $T(u_i) = w_i$ for all 1 ≤ i ≤ n.

Definition 1.2.4. Let T : U → V be a linear mapping. Then
$$\mathrm{Ker}(T) = \{u \in U \mid T(u) = 0\} \quad \text{and} \quad \mathrm{Im}(T) = \{T(u) \mid u \in U\}$$
are vector subspaces of U and V respectively (prove this). We define $\mathrm{Rank}(T) = \dim_K \mathrm{Im}(T)$ and $\mathrm{Nullity}(T) = \dim_K \mathrm{Ker}(T)$.

Properties 1.2.5. Note that (prove it!) the linear map T is injective if and only if Ker(T) = {0}.

Examples 1.2.6. We've seen many examples in Math 3333.

1. $\mathrm{Ker}(A_{m \times n})$ is the solution set to the homogeneous system $A_{m \times n} x_{n \times 1} = 0_{m \times 1}$.
2. The general solution to the linear system $A_{m \times n} x_{n \times 1} = B_{m \times 1}$ consists of a particular solution, plus the general solution to the homogeneous system $A_{m \times n} x_{n \times 1} = 0_{m \times 1}$.
3. The projection map of R³ onto R².
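The decomposition $\mathbb{R}^{\mathbb{R}} = \mathrm{Even} \oplus \mathrm{Odd}$ of Example 1.1.26(2) is constructive: $f_{\text{even}}(x) = \tfrac{1}{2}(f(x) + f(-x))$ and $f_{\text{odd}}(x) = \tfrac{1}{2}(f(x) - f(-x))$. A Python sketch (the helper `even_odd_parts` is ours):

```python
def even_odd_parts(f):
    # Decompose f uniquely as f = f_even + f_odd, the concrete witness for
    # the internal direct sum R^R = Even (+) Odd.
    fe = lambda x: (f(x) + f(-x)) / 2
    fo = lambda x: (f(x) - f(-x)) / 2
    return fe, fo

f = lambda x: x**3 + 2 * x**2 + 5        # an arbitrary test function
fe, fo = even_odd_parts(f)
print(fe(2), fo(2))                      # 13.0 8.0
print(fe(2) + fo(2) == f(2), fe(-2) == fe(2), fo(-2) == -fo(2))  # True True True
```

Uniqueness of the decomposition is exactly Proposition 1.1.28(2): Even ∩ Odd = {0}, since a function that is both even and odd is identically zero.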
Theorem 1.2.7. Let T : U → V be a linear map, and suppose that $u_1, \dots, u_k$ and $T(w_1), \dots, T(w_l)$ are bases for Ker(T) and Im(T) respectively. Then $u_1, \dots, u_k, w_1, \dots, w_l$ is a basis for U. In particular,
$$\dim_K U = \mathrm{Nullity}(T) + \mathrm{Rank}(T).$$

Proof. Exercise. There are two things to prove about the set of vectors $u_1, \dots, u_k, w_1, \dots, w_l$: it is a linearly independent set, and it generates U.

Linear independence: Suppose
$$\alpha_1 u_1 + \cdots + \alpha_k u_k + \beta_1 w_1 + \cdots + \beta_l w_l = 0.$$
We have to prove that $\alpha_1 = 0, \dots, \alpha_k = 0, \beta_1 = 0, \dots, \beta_l = 0$. First apply T to both sides of the equation above. Does this simplify at all? Why? What does the resulting equation tell you about the $\beta_j$? Why? Now plug this information about the $\beta_j$ back into the original equation above. What do you get now? What can you conclude about the $\alpha_i$? Why?

Generating set: Given $u \in U$, you have to find scalars $\alpha_i$ and $\beta_j$ so that
$$u = \alpha_1 u_1 + \cdots + \alpha_k u_k + \beta_1 w_1 + \cdots + \beta_l w_l.$$
First look at T(u). What do you know about T(u) and the $T(w_j)$? What does this tell you about u and a linear combination of the $w_j$ (careful here!)? How do the $u_i$ come into play here? Conclude the proof. □

Corollary 1.2.8. Suppose that T : U → V is a linear map of finite dimensional vector spaces such that $\dim_K U = \dim_K V$. Then T is injective if and only if T is surjective.

Proof. Use the result above, together with 1.2.5. □

Definition 1.2.9. Let U and V be K-vector spaces with bases $\mathcal{B}_1 = \{u_1, \dots, u_n\}$ and $\mathcal{B}_2 = \{v_1, \dots, v_m\}$ respectively. Let $\psi_1 : U \to K^n$ and $\psi_2 : V \to K^m$ be the corresponding coordinate isomorphisms. Suppose that T : U → V is a linear map. We have seen in 1.2.3(8) that T is uniquely determined by the vectors $T(u_j)$. Define a matrix $A_T \in K^{m \times n}$ by setting $\psi_2(T(u_j))$ to be its j-th column, for each 1 ≤ j ≤ n. In other words, $a_{ij}$ is defined to be the i-th coordinate, with respect to $\mathcal{B}_2$, of the vector $T(u_j)$. We can visualize this in terms of matrices: the j-th column of $A_T$ is the coordinate column $[T(u_j)]_{\mathcal{B}_2}$. Note that there is a commutative diagram: applying T and then the coordinate map $\psi_2$ agrees with applying the coordinate map $\psi_1$ and then multiplying by $A_T$. That is,
$$\psi_2(T(u)) = A_T\, \psi_1(u) \quad \text{for all } u \in U.$$
We call $A_T$ the matrix of the linear map T with respect to the bases $\mathcal{B}_1$ for U and $\mathcal{B}_2$ for V.

Examples 1.2.10. Compute the matrices of the following linear maps with respect to the given bases, or verify the claims that are made, as appropriate. Keep in mind that the ψ isomorphisms take us from the realm of abstract vector spaces into the concrete coordinate world of the $K^n$ spaces.

1. T = D is the derivative linear map from the space U of polynomials of degree at most n into itself, and $\mathcal{B}_1 = \mathcal{B}_2 = \{1, x, x^2, \dots, x^n\}$.
2. $U = V = \mathbb{R}^n$, T is the identity map, $\mathcal{B}_1 = \mathcal{B} = \{v_1, \dots, v_n\}$, and $\mathcal{B}_2$ is the standard basis $\{e_1, \dots, e_n\}$. This matrix is called the change of basis matrix from the basis $\mathcal{B}$ to the standard basis. We shall denote it by $P_{\mathcal{B}}$. The matrix which changes from the standard basis to $\mathcal{B}$ is just $P_{\mathcal{B}}^{-1}$.
3. Let $\mathcal{B}_1$ and $\mathcal{B}_2$ be two bases for $\mathbb{R}^n$. The change of basis matrix from $\mathcal{B}_1$ to $\mathcal{B}_2$ is given by the matrix $P = P_{\mathcal{B}_2}^{-1} P_{\mathcal{B}_1}$.
4. Let T : U → U be a linear transformation on a finite dimensional vector space U. Suppose that T has matrix A with respect to the basis $\mathcal{B}$ for U, and let $\mathcal{B}'$ be another basis for U. Then the matrix for T with respect to the basis $\mathcal{B}'$ is given by $PAP^{-1}$, where P is the change of basis matrix from $\mathcal{B}$ to $\mathcal{B}'$.
5. T : $\mathbb{R}^n \to \mathbb{R}^n$ is the linear map which is defined as a permutation of the standard basis of $\mathbb{R}^n$ and extended by linearity. The basis for $\mathbb{R}^n$ is the standard one. Do an explicit computation for the symmetric group $S_3$ of all permutations of a set of three elements, which acts by linear transformations on R³. See that these give isometries of R³ which preserve the 2-simplex with vertices at (1, 0, 0), (0, 1, 0), and (0, 0, 1). Generalize this to higher dimensions.
6. Let U, V, and W be K-vector spaces with chosen bases $\mathcal{B}_1$, $\mathcal{B}_2$, and $\mathcal{B}_3$ respectively. Let $T \in L(U, V)$ and $S \in L(V, W)$ have matrices A and B respectively with respect to the given bases. Then the composition ST (apply T first, then S) has matrix BA with respect to the bases $\mathcal{B}_1$ and $\mathcal{B}_3$.
7. Let $R_\theta$ denote the linear transformation of R² which consists of a standard rotation of θ radians about the origin. (Here standard means counterclockwise for positive θ.) Let $L_\theta$ denote the linear transformation of R² which consists of a reflection in the line $\ell_\theta$ which contains the origin and makes an angle of θ radians in standard position (that is, the x-axis is the initial edge, $\ell_\theta$ is the terminal edge, and positive angles are measured in the usual counterclockwise direction). We have
$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad L_\theta = \begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{pmatrix}.$$
Verify by matrix multiplication that $R_\theta R_\phi = R_{\theta + \phi}$ and that $L_\theta L_\phi = R_{2(\theta - \phi)}$. Interpret these results geometrically. Compute and interpret the following: $L_\theta L_\theta$, $L_\phi L_\theta$, $L_\theta R_\phi$, and $R_\phi L_\theta$.
8. Realize the group of isometries of the Euclidean plane as a group of linear transformations of R³ to itself. What is the special form of these linear transformations? Geometric interpretations?

Definition 1.2.11. Let U and V be K-vector spaces. We denote by L(U, V) the set of all linear mappings U → V, and by L(U) the set of all linear mappings U → U.

Theorem 1.2.12. Let U and V be K-vector spaces. Then we have:

1. L(U, V) is a K-vector space, with operations defined by
$$(S + T)(x) = S(x) + T(x) \quad \text{and} \quad (kT)(x) = k(T(x))$$
for all $S, T \in L(U, V)$, all $x \in U$, and all $k \in K$.
2. Let W be a K-vector space. Composition of maps gives a multiplication
$$L(V, W) \times L(U, V) \to L(U, W), \quad (S, T) \mapsto ST,$$
where $(ST)(u) = S(T(u))$ for all $u \in U$. This multiplication satisfies:
(a) $R(ST) = (RS)T$;
(b) $R(S + T) = RS + RT$;
(c) $(R + S)T = RT + ST$;
(d) $k(ST) = (kS)T = S(kT)$;
provided each is well defined.

Proof. Easy! Exercise. □

Corollary 1.2.13. Let U be a K-vector space. Then L(U) is:

1. a K-vector space;
2. a ring under S + T and ST;
3. and satisfies $k(ST) = (kS)T = S(kT)$ for all $k \in K$ and for all $S, T \in L(U)$.

Definition 1.2.14. Let K be a field of scalars. A set X on which there is an addition, a multiplication, and a scalar multiplication, all satisfying (1)-(3) of the corollary above, is called a K-algebra.

Proposition 1.2.15. Let U and V be K-vector spaces of dimension n and m respectively. Then L(U, V) is isomorphic to $K^{m \times n}$ as K-vector spaces. Furthermore, L(U) is isomorphic to $K^{n \times n}$ as K-algebras.

Proof. Exercise. □

Corollary 1.2.16. Let U and V be K-vector spaces of dimension n and m respectively. Then $\dim_K L(U, V) = mn$ and $\dim_K L(U) = n^2$.

Proof. Exercise. □

Remark 1.2.17. The Einstein summation convention requires one to sum over any repeated upper and lower indices in an expression involving tensors. For example, the expression $\sum_j a^i_j v^j$ becomes the simpler expression $a^i_j v^j$. We shall not make too much use of this tensor notation (upper and lower indices) in this course. Nevertheless, it's good to be aware of it.
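The rotation and reflection identities of Example 1.2.10(7) can be checked numerically. A Python sketch (helper names `R`, `L`, `mul`, `close` are ours) verifying $R_\theta R_\phi = R_{\theta+\phi}$ and $L_\theta L_\phi = R_{2(\theta-\phi)}$:

```python
import math

def R(t):  # rotation by t radians (counterclockwise)
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def L(t):  # reflection in the line through the origin at angle t
    return [[math.cos(2*t), math.sin(2*t)], [math.sin(2*t), -math.cos(2*t)]]

def mul(A, B):  # 2x2 matrix product, i.e. the composition of the maps
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, eps=1e-12):  # entrywise comparison up to rounding error
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

t, p = 0.7, 0.3
print(close(mul(R(t), R(p)), R(t + p)))        # True: R_t R_p = R_{t+p}
print(close(mul(L(t), L(p)), R(2 * (t - p))))  # True: L_t L_p = R_{2(t-p)}
```

Geometrically, the second identity says that composing two reflections gives a rotation by twice the angle between the two mirror lines.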
1.3 Linear Functionals and Duality

Definition 1.3.1. Let U be a K-vector space. A linear functional on U is a linear transformation U → K. The vector space L(U, K) of all linear functionals on U is called the dual space of U, and is denoted by $U^*$.

Examples 1.3.2. Examples of linear functionals include:

1. Let $\gamma_1, \dots, \gamma_n \in K$. Then
$$K^n \to K, \quad (k_1, \dots, k_n)^T \mapsto \gamma_1 k_1 + \cdots + \gamma_n k_n$$
is a linear functional on $K^n$.
2. The definite integral $f \mapsto \int_a^b f(t)\,dt$ is a linear functional on C[a, b].
3. The trace of a matrix defines a linear functional on $K^{n \times n}$.

Remark 1.3.3. By Corollary 1.2.16, if U is a finite dimensional K-vector space, then
$$\dim_K U^* = \dim_K L(U, K) = (\dim_K U)(\dim_K K) = \dim_K U,$$
so that U and $U^*$ are both isomorphic to a given $K^n$, and hence to each other. However, this is not a natural isomorphism, since it depends on a choice of bases for U and $U^*$.

Definition 1.3.4. Bracket notation for the linear functional f acting on the vector u: $\langle f, u \rangle = f(u)$.

Definition 1.3.5. The Kronecker delta is defined for $i, j \in \{1, \dots, n\}$ by
$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}$$

Proposition 1.3.6. Let U be a K-vector space with basis $u_1, \dots, u_n$. Define linear functionals $f_1, \dots, f_n$ on U by
$$\langle f_i, u_j \rangle = \delta_{ij}.$$
Then $f_1, \dots, f_n$ is a basis for $U^*$, which we call the dual basis to $u_1, \dots, u_n$.

Proof. Exercise. □

Remark 1.3.7. If V has basis $v_1, \dots, v_n$ and $f_1, \dots, f_n$ is the dual basis, then for $v = \sum_{j=1}^n c_j v_j$ we have
$$\langle f_i, v \rangle = \sum_{j=1}^n c_j \langle f_i, v_j \rangle = c_i,$$
so that $f_i$ picks out the i-th coordinate, with respect to the basis $v_1, \dots, v_n$, of a vector $v \in V$. If T : U → V is a linear transformation and $u_1, \dots, u_n$ is a basis for U, then the matrix $A = (a_{ij})$ of T with respect to these two bases is given by $a_{ij} = \langle f_i, T(u_j) \rangle$. The bracket notation makes it clear that U should act as a dual to $U^*$. This is the content of the next proposition.

Proposition 1.3.8. If U is a finite dimensional K-vector space, then U is naturally isomorphic to $U^{**}$.

Proof. Exercise. Given $v \in U$, let $L_v : U^* \to K$ be the evaluation map which sends a linear functional $f \in U^*$ to the scalar $f(v) \in K$. All you have to do is verify the following:

- $L_v$ is a linear functional on $U^*$;
- the map $v \mapsto L_v$ is linear;
- the map $v \mapsto L_v$ is bijective. □

Definition 1.3.9. Let S be a subset of a finite dimensional vector space V. The annihilator of S is denoted by $S^0$, and is defined as follows:
$$S^0 = \{f \in V^* \mid f(v) = 0 \text{ for all } v \in S\}.$$

Properties 1.3.10. The following should be intuitive properties. Give proofs of them all.

1. $S^0$ is a subspace of the dual space $V^*$.
2. If S = {0}, then $S^0 = V^*$.
3. If S = V, then $S^0 = \{0\} \subset V^*$.

Definition 1.3.11. Let V be an n-dimensional vector space. A hyperspace is a subspace of V which has dimension n − 1.

Remark 1.3.12. Hyperspaces in the finite dimensional vector space V are precisely the kernels of nonzero linear functionals on V.

Lemma 1.3.13. Let W be a subspace of the finite dimensional vector space V. Then
$$\dim W + \dim W^0 = \dim V.$$

Proof. Exercise. Let $v_1, \dots, v_k$ be a basis for W. Complete it to a basis $v_1, \dots, v_k, \dots, v_n$ for V. Let $f_1, \dots, f_n$ be the corresponding dual basis for $V^*$. Prove that $f_{k+1}, \dots, f_n$ is a basis for $W^0$. □

Corollary 1.3.14. Let V be an n-dimensional vector space. Then each k-dimensional subspace (here k ≤ n) of V is the intersection of n − k hyperspaces of V.

Corollary 1.3.15. If $W_1$ and $W_2$ are subspaces of a finite dimensional vector space V, then $W_1 = W_2$ if and only if $W_1^0 = W_2^0$.

Definition 1.3.16. Let T : U → V be a linear transformation of finite dimensional K-vector spaces. Then an earlier proposition (in section 1.2) ensures that there is a unique linear transformation $T^t : V^* \to U^*$ satisfying
$$\langle T^t f, u \rangle = \langle f, T(u) \rangle \quad \text{for all } u \in U \text{ and all } f \in V^*.$$
$T^t$ is called the adjoint of the operator T.

Remark 1.3.17. Note that for $f \in V^*$ we have just defined $T^t f$ to be the composition $f \circ T$. The adjoint construction gives rise to a homomorphism (verify this):
$$\mathrm{Ad} : L(U, V) \to L(V^*, U^*), \quad T \mapsto \mathrm{Ad}(T) = T^t.$$

Proposition 1.3.18. Let T : U → V above have matrix A with respect to bases $\mathcal{B}_1$ for U and $\mathcal{B}_2$ for V. Then $T^t$ has matrix $A^T$ (transpose) with respect to the respective dual bases for $V^*$ and $U^*$.

Proof. Exercise. □

Proposition 1.3.19. Let T : U → V be a linear transformation of the vector spaces U and V. Then the kernel of the adjoint $T^t$ of T is the annihilator of the image of T. In particular, if U and V are finite dimensional, we have:

1. $\mathrm{rank}(T^t) = \mathrm{rank}(T)$;
2. the image of $T^t$ is the annihilator of the kernel of T.

Corollary 1.3.20. For n × n matrices A, we have rank(A) = rowrank(A) = columnrank(A).
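Corollary 1.3.20 can be tested computationally: row-reduce a matrix and its transpose and compare the counts of nonzero rows. A Python sketch (the helper `rank` is ours, using exact rational arithmetic to avoid rounding issues):

```python
from fractions import Fraction

def rank(A):
    """Rank via Gaussian elimination over the rationals (exact arithmetic)."""
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0  # number of pivots found so far
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]       # swap pivot row into place
        for i in range(rows):
            if i != r and M[i][c] != 0:   # eliminate the column elsewhere
                f = M[i][c] / M[r][c]
                M[i] = [M[i][j] - f * M[r][j] for j in range(cols)]
        r += 1
    return r

A = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]    # second row = 2 * first row
At = [list(col) for col in zip(*A)]       # transpose
print(rank(A), rank(At))                  # 2 2
```

Row rank equals the rank of the transpose's columns, so the equal outputs illustrate rank(A) = rank(Aᵀ), which is the matrix form of rank(Tᵗ) = rank(T).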
1.4 Determinants

Definition 1.4.1. Let $A \in M_{2 \times 2}(K)$. Then the determinant of A is denoted by det(A) or by |A|, and is the element of the field K defined by
$$\det(A) = a_{11} a_{22} - a_{12} a_{21}.$$

Remark 1.4.2. For $A \in M_{2 \times 2}(K)$, A is invertible if and only if |A| ≠ 0, and in such a case we have
$$A^{-1} = \frac{1}{|A|} \begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}.$$

Definition 1.4.3. Here's an inductive definition of determinants of n × n matrices. Throughout this definition we shall assume that $A \in M_{n \times n}(K)$ for some field K.

- For $i, j \in \{1, \dots, n\}$, let $A_{ij}$ denote the element of $M_{(n-1) \times (n-1)}(K)$ which is obtained by deleting the i-th row and the j-th column from A.
- If n = 1, we define $|A| = a_{11}$; otherwise we define |A| inductively by cofactor expansion along its first row, as follows:
$$|A| = \sum_{i=1}^n (-1)^{1+i} a_{1i} |A_{1i}|.$$
- The terms $(-1)^{i+j} |A_{ij}|$ are called the (i, j) cofactors of the matrix A. They shall appear below in a more general formula for expansion of determinants by any row or column.

Examples 1.4.4. Here are some determinants.

1. $\det(L_\theta) = -1$.
2. $\det(R_\theta) = 1$.
3. The determinant of an upper (or lower) triangular matrix is the product of its diagonal entries.
4. Vandermonde determinant:
$$\det \begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & & & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{pmatrix} = \prod_{i < j} (x_j - x_i).$$

Properties 1.4.5. Next we list some properties of determinants which can be deduced from the definition above. You should verify that these properties also hold if we use instead the definition of the determinant by cofactor expansion along the i-th row, namely
$$\det{}_i(A) = \sum_{j=1}^n (-1)^{i+j} a_{ij} |A_{ij}|,$$
so that |A| (or det(A)) above is actually $\det_1(A)$. In time we shall deduce that all these expansions give the same result, and that you can expand by columns too.

1. The determinant is a linear function of each column, when the other columns are kept fixed. That is, letting $A^i$ denote the i-th column of A, letting $A'$ denote a column vector, and $k, k' \in K$, we have
$$\det(A^1, \dots, kA^i + k'A', \dots, A^n) = k \det(A^1, \dots, A^i, \dots, A^n) + k' \det(A^1, \dots, A', \dots, A^n).$$
2. If A has two adjacent columns which are equal, then |A| = 0.
3. Let $I_n$ denote the n × n identity matrix. Then $|I_n| = 1$.
4. If adjacent columns of A are interchanged, then |A| changes sign.
5. If two columns of A are equal, then |A| = 0.
6. If one adds a scalar multiple of one column of A to another, then |A| does not change.

Remark 1.4.6. There are two neat things to note here.

1. Properties 4, 5, and 6 (as well as any deductions from them below) all follow from properties 1-3. So any function of n column vectors which satisfies 1-3 will have to be the determinant function, by the uniqueness result in 1.4.13 below.
2. Property 6 is the starting point for speedy computations of determinants (recall the hateful exercises in Math 3333).

The following theorem is a classical tool used in solving systems of linear equations, called Cramer's Rule.

Theorem 1.4.7 (Cramer's Rule). Let $A^1, \dots, A^n, B \in \mathbb{R}^n$ be column vectors, and suppose that $\det(A^1, \dots, A^n) \neq 0$. Then we can solve the linear system
$$x_1 A^1 + \cdots + x_n A^n = B$$
as follows:
$$x_j = \frac{\det(A^1, \dots, A^{j-1}, B, A^{j+1}, \dots, A^n)}{\det(A^1, \dots, A^j, \dots, A^n)}.$$

Theorem 1.4.8. Let $A^1, \dots, A^n \in \mathbb{R}^n$ be column vectors. If $\det(A^1, \dots, A^n) \neq 0$, then $\{A^1, \dots, A^n\}$ is a linearly independent set.

Corollary 1.4.9. If the column vectors $A^1, \dots, A^n \in \mathbb{R}^n$ satisfy $\det(A^1, \dots, A^n) \neq 0$, then the linear system $x_1 A^1 + \cdots + x_n A^n = B$ has a solution, which can be found by Cramer's rule, for any column vector $B \in \mathbb{R}^n$.

Definition 1.4.10. A permutation of the set $J_n = \{1, \dots, n\}$ is just a bijection of this set to itself. A transposition is a permutation which just interchanges two elements of the set. Cyclic notation for permutations. The symmetric group $S_n$.

Lemma 1.4.11. A permutation of $J_n$ is a product of transpositions.

Proposition 1.4.12. To each permutation $\sigma \in S_n$ we can associate a number $\epsilon(\sigma) \in \{\pm 1\}$ such that:

1. $\epsilon(\tau) = -1$ for any transposition τ;
2. $\epsilon(\sigma_1 \sigma_2) = \epsilon(\sigma_1)\epsilon(\sigma_2)$ for all permutations $\sigma_1, \sigma_2 \in S_n$.

In particular, if σ can be expressed as a product of transpositions, $\sigma = \tau_1 \cdots \tau_m$, then m is odd (resp. even) according as $\epsilon(\sigma) = -1$ (resp. 1).

This next result establishes the uniqueness of determinants. It will be useful in proving some fundamental results about determinants, such as the fact that the determinant of a transpose is the same as the determinant of the original matrix, which in turn leads to the familiar row or column cofactor expansion formula. It is also used to establish a geometric interpretation of determinants in terms of areas and volumes.
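The cofactor expansion of Definition 1.4.3 translates directly into a recursive program. A Python sketch (the helper `det` is ours), which we use to check the Vandermonde formula of Example 1.4.4(4) on a small instance:

```python
def det(A):
    # Determinant by cofactor expansion along the first row, exactly as in
    # the inductive definition: |A| = sum_j (-1)^(1+j) a_{1j} |A_{1j}|.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # delete row 1, col j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

# Vandermonde check: det = product of (x_j - x_i) over i < j.
xs = [1, 2, 4]
V = [[x ** k for k in range(len(xs))] for x in xs]
prod = 1
for i in range(len(xs)):
    for j in range(i + 1, len(xs)):
        prod *= xs[j] - xs[i]
print(det(V), prod)  # 6 6
```

Recursive cofactor expansion costs O(n!) operations, which is why Property 6 (column operations, leading to Gaussian elimination) is the practical route for large matrices.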
U17 Un E K be column uectors and let new uectors A17 A E K be de ned by A7 041le anjUn for scalars 041739 E K Then detA1 7A Z E039Ola1104522 045nndetU1 7U 765 In particular the determinant of a matrix A aij E Kmm is giuen by the formula 157414 Z 60aa11a522aen 765 There is another way of stating this We say that a function of n vector variables which are each n dimensional column vectors is n linear if it is linear in each variable keeping other variables xed Way that the function is alternating if its output is zero whenever two input variables are equal and if its output changes sign whenever two input variables are interchanged The theorem becomes let D be an alternating n linear function whose ualue on the input e17 7en is 1 then D det Theorem 1414 Let A be an n X n matrix and let AT denote its transpose Then detA detAT Corollary 1415 The determinant of the nx n matrix A can be eualuated by cofactor expansion uia any row or any column That is 157414 Z1ijaul iil Z1ijaul iil i1 391 Theorem 1416 Let A and B be n X n matrices Then detAB detAdetB 22 Corollary 1417 Let A be an invertible n X n matrix Then 1 detA 1 m A 1 HWMUDT Remark 1418 Mention volumes functions in R and their relationship with determinants Chapter 2 Structure of Linear Operators In the next few sections we shall develop a structure theory for linear transformations on a nite dimensional vector space The basic problem that we are faced with is this given a linear trans formation T on a nite dimensional K vector space V choose a basis for V with respect to which T is very easy to understand For example the matrix of T with respect to our basis has a very simple form What is the simplest form we should hope for Well diagonal matrices are very easy to work with So we start off in section 21 by discussing eigenvectors and diagonalization You may recall from Math 3333 that the basic goal is to nd a basis for V composed of eigenvectors of T There are a number of problems that may occur with our attempts 
at diagonalization. For instance, we may not be able to solve the characteristic equation for T over the field K, so we can't find any eigenvalues for T in K. Or we may find that the dimensions of the eigenspaces do not sum up to give the dimension of V, so we can't find a basis for V consisting of eigenvectors of T. We are led to consider more general T-invariant subspaces, and eventually to the Primary Decomposition Theorem (Section 2.2) and the Jordan (Section 2.3) and rational (Section 2.3) canonical forms. This theory involves a beautiful interplay between T-invariant subspaces and polynomial combinations of the operator T.

2.1 Eigenvalues and Eigenvectors; Diagonalization

Definition 2.1.1
• Let T : V → V be a linear operator of a K-vector space V to itself. An element λ ∈ K is called an eigenvalue of T if there exists a nonzero vector v ∈ V such that T(v) = λv.
• Suppose λ ∈ K is an eigenvalue of the linear operator T. Then the collection {v ∈ V : T(v) = λv} is a subspace of V called the λ-eigenspace of T. Its elements are called λ-eigenvectors (or just eigenvectors, if the context is clear) of T.
• Note that λ ∈ K is an eigenvalue of T if and only if T − λI is singular (has nontrivial kernel), and this is true if and only if det(T − λI) = 0. In this case Ker(T − λI) is precisely the λ-eigenspace of T.

Examples 2.1.2
1. There is a basis for R² consisting of eigenvectors of the linear operator T given by matrix multiplication by [2 1; 1 1]. We can use this to transform this matrix into a diagonal matrix, which in turn can be used to compute high powers of our matrix. (See Fibonacci sequence applications in class.)
2. The rotation matrix [cos θ −sin θ; sin θ cos θ] has eigenvectors and eigenvalues over R only when θ is a multiple of π. In these cases the original matrix is already diagonal and the eigenvalues are clearly ±1. In all other cases the matrix does not have any eigenvalues over R. However, when viewed as a 2 by 2 complex matrix, it has eigenvalues e^{±iθ}. (See earlier homework exercise.)
3. The matrix [1 0; 1 1] has eigenvalue equal to 1, but R² does not have a basis of
eigenvectors.

Definition 2.1.3 The characteristic polynomial of the n by n matrix A is defined to be the following polynomial in λ: det(A − λI). Note that the eigenvalues of the linear operator of Kⁿ given by matrix multiplication by A are precisely the roots of this characteristic polynomial.

Lemma 2.1.4 Similar matrices have the same characteristic polynomials.

Corollary 2.1.5 A linear transformation T : V → V of a finite dimensional vector space has a well defined characteristic polynomial. The eigenvalues of T are the roots of this polynomial.

Definition 2.1.6 A linear transformation T : V → V is said to be diagonalizable if there exists a basis for V comprised entirely of eigenvectors of T.

Lemma 2.1.7 Non-zero eigenvectors of a linear operator T which correspond to distinct eigenvalues of T are linearly independent.

Proof. Suppose vi is a non-zero eigenvector of the linear operator T with corresponding eigenvalue λi for 1 ≤ i ≤ m, and suppose that the λi are all distinct. We have to prove that Σi αi vi = 0 implies αi = 0 for all i. We do this by induction on m. This is clearly true for m = 1, since we are considering non-zero eigenvectors. Applying T to Σi αi vi = 0 gives Σi αi λi vi = 0, since T(vi) = λi vi for all i. On the other hand, multiplying Σi αi vi = 0 across by λj gives Σi αi λj vi = 0. Subtracting these two equations gives Σi αi (λj − λi) vi = 0. Note that this sum has really got m − 1 terms (the i = j term vanishes), and so the inductive hypothesis tells us that the vi with i ≠ j are already linearly independent. Thus αi (λj − λi) = 0 for each i ≠ j. Since the λi are all distinct, we conclude that αi = 0 for all i ≠ j. Putting this back into the original equation gives αj vj = 0, and vj ≠ 0 gives us that the remaining αj must be 0. Done. □

Theorem 2.1.8 Let V be a finite dimensional vector space, T : V → V a linear transformation, {λ1, ..., λm} the set of all distinct eigenvalues of T, and Wi = Ker(T − λiI) for 1 ≤ i ≤ m. Then the following are equivalent.
1. T is diagonalizable.
2. The characteristic polynomial of T is of the form ±(x − λ1)^{d1} ··· (x − λm)^{dm}, and dim_K(Wi) = di for 1 ≤ i ≤ m.
3. dim_K(V) = dim_K(W1) + ··· + dim_K(Wm).
4. V = W1 ⊕ ··· ⊕ Wm.

Proof. Depending on how you try to answer this, there are at least four implications to prove. Here we'll give the minimum of four.

(1) ⇒ (2). By definition, T diagonalizable implies that V has a basis v1, ..., vn of eigenvectors of T. The matrix representation of T with respect to this basis is simply the diagonal matrix with the eigenvalues on the diagonal. Thus the characteristic polynomial of T is of the form ±(x − λ1)^{d1} ··· (x − λm)^{dm}, where di is the number of times λi appears on the diagonal; that is, di is the number of λi-eigenvectors present in the basis v1, ..., vn. We just have to verify that each λi appears, and that it appears precisely dim_K(Wi) times.
• First we verify that each λi appears. Suppose some λi does not appear. This means that we have a basis for V of eigenvectors of T without ever having to use an eigenvector with eigenvalue λi. But such an eigenvector is an element of V, and so can be expressed as a linear combination of the basis elements. That is, a non-zero eigenvector can be expressed as a linear combination of eigenvectors with different eigenvalues. But this contradicts the previous lemma.
• Now we have to show that the subspace S spanned by all those basis vectors in v1, ..., vn which correspond to a given eigenvalue, say λi, is equal to the eigenspace Wi. Clearly (exercise) this subspace is contained in Wi. To prove the reverse inclusion, suppose that v ∈ V is a λi-eigenvector of T. Since v1, ..., vn is a basis, v can be expressed as a linear combination of the vj. We have to show that this combination only involves those vj which are λi-eigenvectors. Well, if not, then we get a nontrivial linear dependence relation between eigenvectors with distinct eigenvalues. Again, this contradicts the previous lemma.

(2) ⇒ (3). This is just a dimension count, and it's trivial! We know from the definition that the characteristic polynomial has degree n = dim_K(V). Property 2 tells us that n = d1 + ··· + dm, and it also tells us that di = dim_K(Wi). So we're done!

(3) ⇒ (4). First we show that the sum W1 + ··· + Wm is direct. To do this, it suffices (remember Midterm I) to prove
that Wj ∩ Σ_{i≠j} Wi = 0 for each j. But if this intersection contained a nonzero vector, then it could be expressed in two ways, as follows: wj = Σ_{i≠j} wi. Since wj ≠ 0, at least one of the wi must also be non-zero, and so we obtain another relation of linear dependence among eigenvectors with distinct eigenvalues, thus contradicting the previous lemma. Now that we know this sum is direct, we can say that W1 ⊕ ··· ⊕ Wm is isomorphic to ΣWi, which is a subspace of V of dimension dim_K(W1) + ··· + dim_K(Wm). But property 3 says that this is just dim_K(V). Thus ΣWi is a subspace of V of the same dimension as V. Thus ΣWi = V, and we're done.

(4) ⇒ (1). If V is a direct sum of eigenspaces, then we can combine bases for these eigenspaces together to obtain a basis for V. Thus T is diagonalizable by definition. □

Remark 2.1.9 We will establish a neat algorithm for checking if a linear operator is diagonalizable as a corollary of the Primary Decomposition Theorem in the next section.

2.2 Annihilating Polynomials, Hamilton–Cayley Theorem, Primary Decomposition Theorem

The main theme of this section is that one can understand a linear operator acting on a finite dimensional vector space by analyzing the polynomials which annihilate it. A beautiful, practical existence result for annihilating polynomials is the Hamilton–Cayley Theorem. The main result which relates the structure of a linear operator on a finite dimensional vector space to the algebra of one of its annihilating polynomials is the Primary Decomposition Theorem. With this tool in hand, it will be a very short step to the neat classification of diagonalizable operators which was promised in the previous section, and also to the Jordan Normal Form Theorem. First we define what we mean by an annihilating polynomial.

Definition 2.2.1 Let V be a K-vector space and let T ∈ L(V). An annihilating polynomial for T is a polynomial p ∈ K[x] such that p(T) = 0.

Examples 2.2.2 For example, the simplest annihilating polynomial for the identity operator is just p(x) = x − 1, for then p(T) = 0 means T − I = 0, which is true since T = I. Here is
an existence theorem for annihilating polynomials. Its proof is easy: just remember that K^{n×n} is n²-dimensional, so that the n² + 1 matrices I, A, A², ..., A^{n²} are linearly dependent.

Lemma 2.2.3 Let A ∈ K^{n×n}. Then there is a polynomial p ∈ K[x] such that p(A) = 0. In fact, we can choose p to have degree at most n². The same works for linear transformations of an n-dimensional K-vector space.

Examples 2.2.4 For example, we know from the lemma above that A = [2 1; 1 1] must satisfy a polynomial equation of degree at most 4. In fact, it satisfies A² − 3A + I = 0. You may also recall from the previous section that x² − 3x + 1 is the characteristic polynomial of A. That this is not just a coincidence is the subject of the Hamilton–Cayley Theorem. Before stating and proving it, we develop some intuition about matrix polynomials.

Definition 2.2.5 A matrix polynomial over the field K is a matrix whose entries are polynomials in x, say, with coefficients in the field K. It may be written either entrywise, P(x) = (p_{ij}(x)), or as P(x) = P0 + xP1 + x²P2 + ··· + x^d Pd, where Pj ∈ K^{m×n}.

Examples 2.2.6 Here is an example:

[1 + x²  x − 1; 2x + 1  1 − x²] = [1 −1; 1 1] + x[0 1; 2 0] + x²[1 0; 0 −1].

Definition 2.2.7 If P(x) is an m by m matrix polynomial written as P(x) = P0 + xP1 + x²P2 + ··· + x^d Pd, and A ∈ K^{m×m}, then we may define P(A) to be the m by m matrix P(A) = P0 + AP1 + A²P2 + ··· + A^d Pd. In this case you have to be careful about multiplication. Note that (P + Q)(A) = P(A) + Q(A), but that (PQ)(A) need not be equal to P(A)Q(A). For example, if P(x) = xB and Q(x) = xC, then (PQ)(x) = x²BC, and so we have P(A)Q(A) = ABAC while (PQ)(A) = A²BC. These are not necessarily equal if A and B do not commute.

Lemma 2.2.8 Let P(x) be a matrix polynomial of size n × n over the field K, and let A ∈ K^{n×n}. Then P(A) = 0 if and only if there exists a matrix polynomial Q(x) of size n × n over K such that P(x) = (xI − A)Q(x).

Proof. This seems completely intuitive. The thing to be careful about is the fact that these matrix polynomials do not have a commutative multiplication. We just have to remember the definition of evaluation of a matrix polynomial at an n × n matrix A given above. Suppose that P(A) = 0.
Then, writing P(x) out as P(x) = P0 + xP1 + ··· + x^d Pd, we get

P(x) = P(x) − 0 = P(x) − P(A)
     = (P0 + xP1 + ··· + x^d Pd) − (P0 + AP1 + ··· + A^d Pd)
     = (xI − A)P1 + (x²I − A²)P2 + ··· + (x^d I − A^d)Pd
     = (xI − A)Q(x),

since each (x^j I − A^j) term can be written as (xI − A)(x^{j−1} I + x^{j−2} A + ··· + x A^{j−2} + A^{j−1}), and so the (xI − A) can be completely factored out on the left. Conversely, suppose that P(x) = (xI − A)Q(x) for some matrix polynomial Q(x) = Q0 + xQ1 + ··· + x^e Qe. Thus

P(x) = xQ0 + x²Q1 + ··· + x^{e+1} Qe − AQ0 − xAQ1 − ··· − x^e AQe,

and so we get

P(A) = AQ0 + A²Q1 + ··· + A^{e+1} Qe − AQ0 − A(AQ1) − ··· − A^e(AQe) = 0,

as required. □

Theorem 2.2.9 (Hamilton–Cayley) Let T be a linear transformation on a finite dimensional vector space V. If f is the characteristic polynomial of T, then f(T) = 0.

Proof. Let f ∈ K[x] be the characteristic polynomial of T. We have to show that f(T) = 0, or equivalently that f(A) = 0, where A ∈ K^{n×n} is the matrix of T with respect to some basis for V. Here n = dim_K(V). To do this, let P(x) = diag(f(x), ..., f(x)) be the n × n matrix polynomial consisting of f(x)'s along the diagonal and zeros elsewhere. Note that P(x) = f(x)I, so that P(A) = f(A), and so P(A) is zero precisely when f(A) is zero. By the previous result, it suffices to find a matrix polynomial Q(x) such that P(x) = (xI − A)Q(x). But we already know this from the section on determinants and inverses: namely, the (i, j) entry of Q(x) is simply (−1)^{i+j} det((xI − A)(j|i)). (Note the (i, j)–(j, i) switch accounts for the transpose in computing inverses.) Note that the entries of the (n − 1) × (n − 1) matrix (xI − A)(j|i) are all polynomials in x of degree at most 1, and so the determinant is a polynomial in x of degree at most n − 1. Thus Q(x) is clearly a matrix polynomial. □

So we have seen that linear transformations in L(V) (and so square matrices) satisfy polynomial equations. In particular, they are roots of their characteristic polynomials. Now that we have found a good source of annihilating polynomials, we wish to develop a structure theorem for linear transformations based on properties of their annihilating polynomials. To do this we need some definitions and results from polynomial algebra.

Definition 2.2.10 A polynomial p ∈ K[x] is reducible if there exists a factorization p = p1p2, where p1, p2 ∈ K[x] are polynomials of strictly smaller degree than p. If
no such factorization exists, then p is said to be irreducible. Note that irreducibility depends on the base field K; e.g., x² + 1 is irreducible in R[x] but is reducible in C[x]. A greatest common divisor (gcd) of polynomials p1, ..., pn ∈ K[x] is a polynomial p ∈ K[x] of maximal degree which divides evenly into all of the pi. If the gcd of the polynomials p1, ..., pn is 1 (or a scalar), then we say that the polynomials are relatively prime.

Theorem 2.2.11 Every polynomial p ∈ K[x] has a decomposition into irreducible factors p = p1 ··· pn. The number n and the factors are uniquely determined up to ordering and multiplication by non-zero scalars. If the gcd of polynomials p1, ..., pm is 1, then there exist polynomials q1, ..., qm ∈ K[x] such that q1p1 + ··· + qmpm = 1.

Proof. We refer the reader to an abstract algebra book for the first part, and give a proof of the second part here. Let

S = {q1p1 + ··· + qmpm : qi ∈ K[x]}

be the set of all linear combinations, with polynomial coefficients, of the pi. Let d ∈ S have minimal degree. Note that we can write d = q1p1 + ··· + qmpm. We claim that d divides evenly into all the pi. If not, then we can divide some pi by d to get a nonzero remainder r, which necessarily has smaller degree than d; say pi = qd + r. But we can rearrange this to get r = pi − qd, and so r ∈ S. But this contradicts the minimality of the degree of d. So we have seen that d is a common divisor of all the pi. Since the gcd of all the pi is 1, d must have degree 0; that is, 0 ≠ d ∈ K. Replacing all the qi above by qi/d gives the desired expression for the constant polynomial 1 as an element of S. □

Examples 2.2.12 Find polynomials qi, i = 1, 2, 3, such that

q1(x − 1)(x − 2) + q2(x − 2)(x − 3) + q3(x − 3)(x − 1) = 1.

(Hint: thinking about Lagrange polynomials from Midterm II will help.)

Now we are ready to state and prove the Primary Decomposition Theorem.

Definition 2.2.13 Let V be a K-vector space and let T ∈ L(V). A subspace U ⊆ V is said to be a T-invariant subspace if T(U) ⊆ U.

Theorem 2.2.14 (Primary Decomposition) Let V be a K-vector space and let T ∈ L(V). Suppose that p ∈ K[x] is an annihilating polynomial for T which has a decomposition as
p = p1 ··· pk, where the pi are relatively prime. Then we have:
1. V = ker p1(T) ⊕ ··· ⊕ ker pk(T), and each of these is a T-invariant subspace of V;
2. the projection πi : V → ker pi(T) is a polynomial in T;
3. if U ⊆ V is T-invariant, then U = ⊕_{i=1}^{k} (U ∩ ker pi(T)).

Proof. We begin with a few definitions and some notation. Define

p̂i = ∏_{j ≠ i} pj.

Note that since the pi are relatively prime, the p̂i are relatively prime. Thus there exist polynomials qi ∈ K[x] such that q1p̂1 + ··· + qk p̂k = 1. Now we are ready to establish the points of the theorem.
• The ker pi(T) are T-invariant, since if v ∈ ker pi(T), then pi(T)(T(v)) = T(pi(T)(v)) = T(0) = 0, and so T(v) ∈ ker pi(T) too.
• V is a sum of the ker pi(T). Note that q1(T)p̂1(T) + ··· + qk(T)p̂k(T) = I. Thus, given any v ∈ V, we can write

v = I(v) = Σ_{i=1}^{k} qi(T)p̂i(T)(v).

All we have to do now is to verify that qi(T)p̂i(T)(v) ∈ ker pi(T). Well,

pi(T)qi(T)p̂i(T)(v) = qi(T)pi(T)p̂i(T)(v) = qi(T)p(T)(v) = qi(T)(0) = 0,

and we have shown V = Σ ker pi(T).
• Now we have to show that the sum above is a direct sum. This involves showing (recall Midterm I) that the only element common to each ker pi(T) and the sum of the remaining ker pj(T)'s is 0. Equivalently (verify this!), one only has to see that v1 + ··· + vk = 0 with vi ∈ ker pi(T) implies that vi = 0 for all i. We see this by the following pretty argument. Apply Σj qj(T)p̂j(T) = I to vi to get qi(T)p̂i(T)(vi) = vi; the other qj(T)p̂j(T)(vi) terms on the left side vanish, since vi ∈ ker pi(T) and pi is a factor of p̂j when j ≠ i. Now use the equation v1 + ··· + vk = 0 to substitute in for vi as follows:

vi = qi(T)p̂i(T)(vi) = qi(T)p̂i(T)(− Σ_{j ≠ i} vj).

But this gives

vi = − Σ_{j ≠ i} qi(T)p̂i(T)(vj) = − Σ_{j ≠ i} qi(T)(0) = 0,

since, as above, p̂i(T)(vj) = 0 whenever j ≠ i. Therefore we have shown vi = 0, and so the sum is direct. At this stage we have established point 1.
• Point 2 will follow once we convince ourselves that Im(qi(T)p̂i(T)) is the same as ker pi(T), since qi(T)p̂i(T) is a polynomial in T. We have clearly seen above that Im(qi(T)p̂i(T)) ⊆ ker pi(T). We have also seen the reverse inclusion ("where?") implicitly, but let's make it explicit here. If v ∈ ker pi(T), then v = I(v) = Σj qj(T)p̂j(T)(v). But we remember that if j ≠ i then p̂j(T)(v) = 0, and so the sum on the right hand side reduces down to qi(T)p̂i(T)(v). That is, v = qi(T)p̂i(T)(v), and so v ∈
Im(qi(T)p̂i(T)).
• Finally, for point 3, suppose that U is T-invariant. This means that if u ∈ U then T(u) ∈ U and, more generally, f(T)(u) ∈ U for any polynomial f ∈ K[x]. By part 1, each vector u ∈ U ⊆ V can be expressed as a sum u = u1 + ··· + uk, where each ui ∈ ker pi(T). By part 2, ui can be expressed as a polynomial in T applied to u, and so ui also belongs to U by T-invariance. Done! □

Examples 2.2.15 Let's look at our motivating examples (class notes) again. These are the projection operators. They satisfy T² = T, or in other words T(T − I) = 0. The Primary Decomposition Theorem tells us that the finite dimensional vector space V on which T acts decomposes as a sum

V = V0 ⊕ V1 = ker(T) ⊕ ker(T − I)

of 0- and 1-eigenspaces of T. Note that since 1 = 1·x + (−1)·(x − 1), we have that T is the projection onto the 1-eigenspace and that −(T − I) = I − T is the projection onto the 0-eigenspace. Thus the 1-eigenspace is the image of T and is also the kernel of T − I, while the 0-eigenspace is the kernel of T and is the image of I − T.

Here is another example. Suppose that T² = I. Then (T − I)(T + I) = 0, and so Primary Decomposition tells us that V is a sum of the 1- and the −1-eigenspaces of T. There are three cases. Case 1: the 1-eigenspace is all of V; then T = I. Case 2: the −1-eigenspace is all of V; then T = −I is a central symmetry through the origin. Case 3: both the 1- and the −1-eigenspaces are nonzero; then T is a reflection in the 1-eigenspace.

In view of the Primary Decomposition Theorem, it makes good sense to look for the simplest possible polynomials (e.g., of lowest degree) which annihilate T. This motivates the following.

Definition 2.2.16 Let V be a finite dimensional K-vector space and let T ∈ L(V). The minimal polynomial of T is the unique monic (leading coefficient 1) polynomial of minimal degree which annihilates T.

The next result says that this concept is indeed well defined, and it establishes a nice relationship between the minimal polynomial and the characteristic polynomial.

Lemma 2.2.17 Let m be an annihilating polynomial of T which has minimal degree. Then:
1. m divides evenly
into every other annihilating polynomial of T. In particular, m divides evenly into the characteristic polynomial of T. Also, the notion of minimal polynomial is well defined.
2. The minimal polynomial of T and the characteristic polynomial of T have the same roots.

Proof. Let f be an annihilating polynomial for T. If m does not divide evenly into f, we can find a polynomial q and a nonzero polynomial r such that f = mq + r. Moreover, the degree of r is strictly less than that of m. But r(T) = f(T) − m(T)q(T) = 0, and so r is an annihilating polynomial of T which has strictly smaller degree than m. This contradicts the minimality of the degree of m.

This has two neat consequences. The first is that m divides the characteristic polynomial of T, since the characteristic polynomial annihilates T by Hamilton–Cayley. The second consequence is the uniqueness of the minimal polynomial. If m and m′ are two annihilating polynomials of T with minimal degree, then m divides m′ and m′ divides m by the argument above. Thus m and m′ can only differ by at most a scalar multiple. Therefore we can uniquely define the minimal polynomial by deciding how to choose a scalar multiple. We do this by requiring that the leading coefficient of m should be 1 (m is called monic).

Now for part two. We've seen in part one that the minimal poly divides the char poly. Thus every root of the minimal poly is automatically a root of the char poly. So we have only to prove the reverse implication. Suppose λ ∈ K is a root of the char poly. Thus λ is an eigenvalue of T; that is, there exists a nonzero vector v ∈ V such that T(v) = λv. Thus T^j(v) = λ^j v and, more generally, m(T)(v) = m(λ)v, where m is the minimal poly. But m(T) = 0 and v ≠ 0. Thus we must have m(λ) = 0, and so λ is a root of m. □

Remark 2.2.18 Here is a direct proof of the fact that m(λ) = 0 implies that λ is an eigenvalue of T (and hence is a root of the char poly). Since λ is a root of m, we can write m(x) = (x − λ)q(x), where q has degree strictly less than the degree of m. By definition of minimal polynomial, this means that q cannot annihilate T. Thus there
exists a nonzero vector w ∈ V such that q(T)(w) ≠ 0. But (T − λI)q(T)(w) = m(T)(w) = 0, and so λ is indeed an eigenvalue of T, with eigenvector q(T)(w). □

Here is the characterization of diagonalizable operators, as promised earlier.

Theorem 2.2.19 (Characterization of diagonalizable) Let V be a finite dimensional K-vector space and let T ∈ L(V). Then T is diagonalizable if and only if the minimal polynomial of T is a product of distinct linear factors.

Proof. Suppose T is diagonalizable. This means that there is a basis for V with respect to which the matrix A of T is diagonal, with repeated eigenvalues along the diagonal. Thus the char poly is of the form ∏_j (x − λj)^{dj}, where the index j runs over the set of distinct eigenvalues of T, and dj is the number of times λj appears on the diagonal of A (which is the same as the dimension of the λj-eigenspace of T). It is clear (do the matrix multiplication) that A satisfies the polynomial ∏_j (x − λj), where the index j is as above, but that it won't satisfy a polynomial of this form which omits one of the indices j. This must be the minimal poly of T, since the minimal poly is the monic poly of minimal degree which divides the char poly.

Conversely, suppose the minimal poly of T is a product ∏_j (x − λj), where the λj are all distinct. The Primary Decomposition Theorem tells us that V is a direct sum of the ker(T − λjI). But each of these is a λj-eigenspace of T. Picking bases for each of these direct summands gives a basis of eigenvectors of T for V. Thus T is diagonalizable. □

Finally, here's a result about simultaneous diagonalization.

Theorem 2.2.20 (Simultaneous diagonalization) Let V be a finite dimensional K-vector space and let S, T ∈ L(V) be diagonalizable. Then S and T are simultaneously diagonalizable if and only if ST = TS.

Proof. Suppose S and T are simultaneously diagonalizable. This means there exists a basis β for V with respect to which S has matrix A = diag(λ1, ..., λn) and T has matrix B = diag(μ1, ..., μn). Now

AB = diag(λ1μ1, ..., λnμn) = BA,

and so ST = TS. On the other hand, suppose that S and T are
diagonalizable, and that ST = TS. Since S is diagonalizable, we may write V = V1 ⊕ ··· ⊕ Vk, where the Vi are eigenspaces of S. Since ST = TS, each of the Vi is T-invariant. Here's the proof: v ∈ Vi implies S(T(v)) = T(S(v)) = T(λi v) = λi T(v), and so T(v) ∈ Vi. Now T diagonalizable implies that V = W1 ⊕ ··· ⊕ Wl, where each Wj is an eigenspace of T, and this decomposition corresponds to a decomposition of the minimal polynomial of T into linear factors, as shown: m(x) = (x − μ1) ··· (x − μl). Now, since each Vi is T-invariant, the last part of the Primary Decomposition Theorem states that

Vi = ⊕_{j=1}^{l} (Vi ∩ Wj).

Thus

V = ⊕_{i=1}^{k} Vi = ⊕_{i=1}^{k} ⊕_{j=1}^{l} (Vi ∩ Wj)

is a direct sum of intersections of eigenspaces of S and T. Choosing bases for each of the Vi ∩ Wj and combining these together yields a basis for V which consists of simultaneous eigenvectors of S and of T. □

2.3 Jordan and Rational Canonical Forms

In this section we address two problems that may prevent a linear operator from being diagonalizable. First, even though the characteristic polynomial may factor into a product of linear terms, the minimal polynomial may have some repeated roots. Thus the operator is not diagonalizable, and so does not have a diagonal matrix representative. The next best thing to a diagonal matrix is the so-called Jordan form matrix. This is a lower triangular matrix consisting of eigenvalues on the main diagonal, 1's and 0's just below the diagonal, and 0's elsewhere. Secondly, the characteristic polynomial may not even factor into linear terms over the field of scalars K, and the minimal polynomial may have some high degree irreducible factors. In this case we can obtain a canonical matrix representation for the operator called the rational form. Throughout this section, K is a field, V is a finite dimensional K-vector space, and T ∈ L(V). We begin with the case where the characteristic polynomial of T factors into linear terms but T is not diagonalizable. Before stating the existence result for the Jordan canonical form, we need a definition.

Definition 2.3.1 Let λ ∈ K be a scalar. A Jordan block Jλ of size m is an m × m matrix in K^{m×m}
of the form

Jλ = [λ 0 ··· 0; 1 λ ··· 0; 0 1 ··· 0; ...; 0 0 ··· 1 λ],

with λ's on the diagonal, 1's just below the diagonal, and 0's elsewhere.

Examples 2.3.2 Here are some examples.
1. A Jordan block Jλ of size 1 is just (λ).
2. Here is a Jordan block J3 of size 2: [3 0; 1 3].
3. Here is a Jordan block J5 of size 3: [5 0 0; 1 5 0; 0 1 5].

Remark 2.3.3 Note that if Jλ is a Jordan block of size m, then Jλ − λI_m is a nilpotent operator whose m-th power is 0 but whose (m − 1)-st power is non-zero.

Theorem 2.3.4 (Jordan form: existence) Let K be a field and let V be a finite dimensional K-vector space. Let T ∈ L(V). Then the following are equivalent.
1. The characteristic polynomial p of T factors into linear terms.
2. V decomposes as a direct sum V = V(λ1) ⊕ ··· ⊕ V(λr), where V(λi) = ker((T − λiI)^{ki}) and ki is the multiplicity of λi as a root of the characteristic polynomial.
3. There is a basis for V with respect to which T has a block diagonal matrix representation diag(J1, ..., Js), where for each eigenvalue λi there may be several Jordan blocks Jλi of various sizes along the diagonal.
4. There is a basis of V with respect to which T has a lower triangular matrix.

The matrix representation in part 3 above is called the Jordan canonical form of T. We abbreviate this to JCF.

Examples 2.3.5 Here are some examples of JCF matrices.
1. The form [3 0 0; 0 3 0; 0 1 3] has one Jordan block of size 1 and one of size 2. The char poly is (x − 3)³ and the minimal poly is (x − 3)². The dimension of the 3-eigenspace is 2. This is not diagonalizable.
2. The form [3 0 0; 0 3 0; 0 0 3] has 3 Jordan blocks of size 1. The char poly is (x − 3)³ and the minimal poly is (x − 3). The dimension of the 3-eigenspace is 3. This is diagonalizable.
3. The form [3 0 0; 1 3 0; 0 1 3] has 1 Jordan block of size 3. The char poly is (x − 3)³ and the minimal poly is (x − 3)³. The dimension of the 3-eigenspace is 1. This is not diagonalizable.
4. The form [3 0 0 0; 1 3 0 0; 0 0 7 0; 0 0 1 7] has 1 J3 block of size 2 and 1 J7 block of size 2. The char poly is (x − 3)²(x − 7)² and the minimal poly is (x − 3)²(x − 7)². Each of the 7- and 3-eigenspaces is 1-dimensional. This is not diagonalizable.
5. The form [3 0 0 0; 1 3 0 0; 0 0 7 0; 0 0 0 7] has 1 J3 block of size 2
and 2 J7 blocks of size 1. The char poly is (x − 3)²(x − 7)² and the minimal poly is (x − 3)²(x − 7). The 3-eigenspace is 1-dimensional, while the 7-eigenspace is 2-dimensional. This is not diagonalizable.
6. The form [3 0 0 0; 0 3 0 0; 0 0 7 0; 0 0 0 7] has 2 J3 blocks of size 1 and 2 J7 blocks of size 1. The char poly is (x − 3)²(x − 7)² and the minimal poly is (x − 3)(x − 7). Each of the 7- and 3-eigenspaces is 2-dimensional. This is diagonalizable.

Here's a corollary of the JCF theorem.

Definition 2.3.6 Say that an operator T ∈ L(V) is triangulable if there exists a basis for V with respect to which the matrix of T is lower triangular.

Corollary 2.3.7 (Triangulable operators) Let V be a finite dimensional K-vector space. An operator T ∈ L(V) is triangulable if and only if its characteristic polynomial factors as a product of linear terms. In particular, if the field K is algebraically closed, then every T ∈ L(V) is triangulable.

The JCF matrix is uniquely determined by the operator T, at least up to ordering of the Jλ along the diagonal, as the next result states.

Theorem 2.3.8 (Jordan form: uniqueness) Suppose K is a field and V is a finite dimensional K-vector space. Suppose also that T ∈ L(V) has a Jordan form matrix representative. Then the number and size of the Jordan blocks in this representative are determined by T. Specifically, we have:
1. The types Jλ of the Jordan blocks are determined by the characteristic polynomial of T; the λ are precisely the roots of this polynomial.
2. The number of Jordan blocks of type Jλ and of size m is equal to

rank((T − λI)^{m+1}) + rank((T − λI)^{m−1}) − 2 rank((T − λI)^{m}).

So the operator T uniquely determines the number, type and size of its Jordan blocks. That is, T uniquely determines its JCF, up to the order in which the Jordan blocks appear along the diagonal.

The next result records some obvious connections between the Jordan canonical form of T and properties of T.

Theorem 2.3.9 Let V be a finite dimensional K-vector space and let T ∈ L(V) have a characteristic polynomial which factors into linear terms over K. Then the following
are true for an eigenvalue λ of T.
1. The number of Jordan blocks of type Jλ equals the dimension of the λ-eigenspace of T.
2. The size of the largest Jordan block of type Jλ equals the multiplicity of λ as a root of the minimal polynomial.
3. The total number of occurrences of λ in the JCF of T equals the multiplicity of λ as a root of the characteristic polynomial of T.

If the characteristic polynomial of an operator T ∈ L(V) does not factor over the field K into a product of linear terms (for example, x² + 1 over the field of real numbers), then we can still obtain a useful canonical matrix form for T, called the rational canonical form. The idea is to use the polynomial to determine the matrix. [Talk about this at a later date.]

Chapter 3. Inner Product Spaces

Recall from Calculus III (or from Math 3333) that you can compute the angle between two vectors u = (u1, ..., un) and v = (v1, ..., vn) in Rⁿ using the law of cosines, as follows. Start by observing that u, v and u − v are the sides of a triangle in Rⁿ. The law of cosines tells us that

‖u − v‖² = ‖u‖² + ‖v‖² − 2‖u‖‖v‖ cos θ,

where θ is the angle between u and v. We use the Pythagorean formula for the length of a vector in Rⁿ and so get

(u1 − v1)² + ··· + (un − vn)² = (u1² + ··· + un²) + (v1² + ··· + vn²) − 2‖u‖‖v‖ cos θ.

Squaring out the terms on the LHS and simplifying gives us

−2(u1v1 + ··· + unvn) = −2‖u‖‖v‖ cos θ,

and so

cos θ = (u1v1 + ··· + unvn) / (‖u‖‖v‖).

So we see that the term in the numerator is very useful, because:
• it is easy to compute;
• it has a cool geometric interpretation: it equals ‖u‖‖v‖ cos θ.
It is called the dot product of the vectors u and v, and is often denoted by u · v. Some cool properties that it enjoys include:
• u · v = v · u for all vectors u and v;
• (ku) · v = k(u · v) for all vectors u and v and all real numbers k;
• u · (v + w) = u · v + u · w for all vectors u, v and w;
• u · u ≥ 0, and equals 0 if and only if u = 0.
We take this as our starting point for defining a real inner product on a real vector space and, by analogy, a hermitian product on a complex vector space.
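The dot-product formula for the angle, cos θ = (u1v1 + ··· + unvn)/(‖u‖‖v‖), can be sketched numerically (a minimal demonstration assuming the standard `numpy` library; the two sample vectors are invented for illustration):

```python
import numpy as np

def angle(u, v):
    """Angle between nonzero u, v in R^n: cos(theta) = (u . v) / (||u|| ||v||)."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards against rounding error

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
print(np.degrees(angle(u, v)))   # approximately 45 degrees
```

The `np.clip` call is a practical safeguard: floating-point rounding can push the cosine slightly outside [−1, 1], where `arccos` would return NaN.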
product on V is a function lt7gtCVgtltVgtRC U7 LUgt gtlt U7wgt which satis es i 74712 127 u for all 741 6 V ii kuvgt kuvgt for all 741 6 V and all k E R iii u 1110 u7 w 1210 for all u mu 6 V i lt1 Ugt 2 O7 and lt1 Ugt 0 if and only ifv 07 for all u E V De nition 312 Let V be a complex vector space7 and let 5 denote the complex conjugate of z E C A hermitian product on V is a function lt7gtCVgtltVgtCC U7 LUgt gtlt U7wgt which satis es 1 my W for all 7M 6 V ii kuvgt kuvgt for all 741 6 V and all k E C iii u 1110 u7 w 1210 for all u mu 6 V iv lt1 Ugt 2 O7 and lt1 Ugt 0 if and only if v 07 for all u E V Examples 313 Here are some examples of real and complex inner product spaces Verify that they are indeed so 1 The usual dot product on R 2 The hermitian product on C de ned by 21 72nw1 71 i1 3 Let Ca7 bC respectively Ca7 bR denote the complex respectively real vector space of continuous complex valued respectively real valued functions on the interval 11 Then 17 ltggt mama is a hermitian product respectively real inner product 4 734 2397 0117 0 ma 5gb 320 7 2zb ya is a real inner product on R3 De nition 314 Let V be a real or complex inner product space We say that uu E V are orthogonal if u 11gt 0 Lemma 315 If U1 1 are non zero mutually orthogonal uectors in a real or complex inner product space V then they are linearly independent Proof Exercise D The following de nitions are direct generalizations of the Calc lll de nitions De nition 316 De ne the length of norm of u E V a real or complex inner product space to be Hull vltv7vgt De nition 317 De ne the angle between two vectors 1 w E V a real inner product space to be c050 M HUHHUH De nition 318 Let u and u be vectors in a real or complex inner product space V The projection of u on u is denoted by projvu and is de ned as W vgt pro tu ltU7UgtU Lemma 319 Let uu E V be uectors in a real or complex inner product space then 1 and u 7 projv are orthogonal Proof Exercise D Theorem 3110 CauchySchwarz Inequality Let V7 lt7 be 
a real or complex inner product space. For all u, v ∈ V we have

|⟨u, v⟩|² ≤ ‖u‖² ‖v‖².

In the real case, the equality ⟨u, v⟩ = ‖u‖‖v‖ holds if and only if u = kv for some k ≥ 0.

Proof. Note that the equality holds if v = 0. So suppose that v ≠ 0. Then the vector proj_v(u) is well defined, and (using Lemma 3.1.9, since proj_v(u) is a multiple of v) we can say

0 ≤ ⟨u − proj_v(u), u − proj_v(u)⟩ = ⟨u − proj_v(u), u⟩ = ⟨u, u⟩ − (⟨u, v⟩/⟨v, v⟩)⟨v, u⟩ = ‖u‖² − ⟨u, v⟩⟨v, u⟩/‖v‖².

Rearranging, and remembering that ⟨v, u⟩ is the conjugate of ⟨u, v⟩ and that a ā = |a|², gives the desired result.

In the real case, if u = kv for some k ≥ 0, then we have ⟨u, v⟩ = ⟨kv, v⟩ = k⟨v, v⟩ = k‖v‖‖v‖ = ‖u‖‖v‖, and so equality holds. Conversely, if equality holds, then we see from the proof that u − proj_v(u) = 0, and so u = proj_v(u) is indeed a multiple of v. If u = kv where k < 0, then ⟨u, v⟩ = ⟨kv, v⟩ = k‖v‖² < 0 ≤ ‖u‖‖v‖, and so equality would not hold. Therefore u must be a non-negative multiple of v. □

Remark 3.1.11 Note that the C–S inequality is not trivial to prove in the special cases of Rⁿ, Cⁿ and C([a, b], C) with the standard inner products defined above. So you should definitely appreciate the generality, beauty and simplicity of the proof given above. Here are the three versions of C–S:

(Σ_{i=1}^{n} xi yi)² ≤ (Σ_{i=1}^{n} xi²)(Σ_{i=1}^{n} yi²) for xi, yi ∈ R;

|Σ_{i=1}^{n} zi w̄i|² ≤ (Σ_{i=1}^{n} |zi|²)(Σ_{i=1}^{n} |wi|²) for zi, wi ∈ C;

|∫_a^b f(x) ḡ(x) dx|² ≤ (∫_a^b |f(x)|² dx)(∫_a^b |g(x)|² dx).

Theorem 3.1.12 Let V, ⟨·,·⟩ be a real or complex inner product space. Then ‖·‖ : V → R, v ↦ ‖v‖, is a norm on V. That is, it satisfies the following properties:
1. ‖v‖ ≥ 0 for all v ∈ V, and equality holds if and only if v = 0;
2. ‖kv‖ = |k| ‖v‖ for all k ∈ R (or C) and all v ∈ V;
3. ‖v + w‖ ≤ ‖v‖ + ‖w‖ for all v, w ∈ V.

Proof. Clearly ‖v‖ ≥ 0. Now ‖v‖ = 0 if and only if ⟨v, v⟩ = 0, and this is true if and only if v = 0, by definition of inner product (positive definiteness). For k ∈ C and v ∈ V we have

‖kv‖ = √⟨kv, kv⟩ = √(k k̄ ⟨v, v⟩) = √(|k|² ‖v‖²) = |k| ‖v‖.

Finally, for v, w ∈ V we have

‖v + w‖² = ⟨v + w, v + w⟩ = ⟨v, v⟩ + ⟨v, w⟩ + ⟨w, v⟩ + ⟨w, w⟩ = ‖v‖² + 2 Re⟨v, w⟩ + ‖w‖² ≤ ‖v‖² + 2|⟨v, w⟩| + ‖w‖² ≤ ‖v‖² + 2‖v‖‖w‖ + ‖w‖² = (‖v‖ + ‖w‖)²,

where the last inequality follows from Cauchy–Schwarz. This proves the triangle inequality, and the theorem. □

Definition 3.1.13 Let V be an inner product space. A basis v1, ..., vn for V is said to
Examples 3.1.14 The standard basis of ℝⁿ is orthonormal.

Theorem 3.1.15 (Gram–Schmidt) A finite dimensional inner product space has an orthonormal basis.

Proof. See Math 3333 for the usual G–S orthonormalization process. Start from an arbitrary basis v1, …, vn and define u1 = v1/‖v1‖ and, inductively,

uⱼ = (vⱼ − Σᵢ₌₁^{j−1} proj_{uᵢ}(vⱼ)) / ‖vⱼ − Σᵢ₌₁^{j−1} proj_{uᵢ}(vⱼ)‖. □

Examples 3.1.16 There are many instances of this in the literature.

1. Let V be the subspace of C([−1, 1], ℂ) spanned by the polynomials 1, x, x², …, xⁿ, and equipped with the inner product

⟨f, g⟩ = ∫₋₁¹ f(x) g(x)̄ dx.

Then the G–S process applied to the ordered basis 1, x, x², …, xⁿ produces an orthonormal basis of polynomials, called Legendre polynomials. Compute them!

2. Let W be the subspace of C([−π, π], ℂ) spanned by {e^{ikx} | −n ≤ k ≤ n}, and equipped with the inner product

⟨f, g⟩ = (1/2π) ∫₋π^π f(x) g(x)̄ dx.

Then {e^{ikx} | −n ≤ k ≤ n} is an orthonormal basis for W. If f ∈ C([−π, π], ℂ), then the orthogonal projection of f onto W is given by

Σ_{k=−n}^{n} cₖ e^{ikx},

where the coefficients cₖ = (1/2π) ∫₋π^π f(x) e^{−ikx} dx are called the Fourier coefficients of f.

The second example above generalizes. First we give a definition.

De nition 3.1.17 Let V be an inner product space and let S ⊆ V. The orthogonal complement of S in V is denoted by S⊥ and is defined as

S⊥ = {v ∈ V | ⟨v, s⟩ = 0 for all s ∈ S}.

If W ⊆ V is a finite dimensional subspace, then the orthogonal projection Pr_W is the unique linear operator on V such that Pr_W(w) = w for all w ∈ W and Pr_W(u) = 0 for all u ∈ W⊥.

Lemma 3.1.18 Let V and W be as in the definition above. Then there exists a unique projection operator as asserted in the definition. Moreover, if u1, …, uk is an orthonormal basis for W, then Pr_W is given by

Pr_W(v) = ⟨v, u1⟩ u1 + ⋯ + ⟨v, uk⟩ uk.

Proof. Pr_W as defined above is clearly linear, and clearly acts as the identity on W and as the zero transformation on W⊥. So we see that projection operators exist. Now for uniqueness. Let T ∈ L(V) be such that T|_W = Id_W and T|_{W⊥} = 0. Given any v ∈ V, we can write v = w + (v − w) ∈ W + W⊥, where w = Pr_W(v). Thus, by linearity of T, we get

T(v) = T(w) + T(v − w) = w + 0 = w = Pr_W(v),

and we're done. □
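The Gram–Schmidt process of Theorem 3.1.15 and the projection formula of Lemma 3.1.18 can be sketched numerically. The following Python/numpy snippet (illustrative only; the function name gram_schmidt and the sample vectors are our own) orthonormalizes a basis of ℝ³ and checks that the residual v − Pr_W(v) is orthogonal to W:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (given as rows)."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= np.dot(w, u) * u          # subtract proj_u(w) for each earlier u
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(V)

# Rows of Q are orthonormal: Q Q^T = I
orthonormal = np.allclose(Q @ Q.T, np.eye(3))

# Pr_W(v) = sum_i <v, u_i> u_i for an orthonormal basis u_1, ..., u_k of W
u1, u2 = Q[0], Q[1]
v = np.array([3.0, -1.0, 2.0])
proj = np.dot(v, u1) * u1 + np.dot(v, u2) * u2

# The residual v - Pr_W(v) is orthogonal to W
residual_orthogonal = np.allclose([np.dot(v - proj, u1), np.dot(v - proj, u2)], 0.0)
print(orthonormal, residual_orthogonal)
```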
Remark 3.1.19 The Least Squares Approximation technique of Math 3333 may be neatly phrased in terms of orthogonal complements and projection operators. Recall the setup. Suppose that the system Ax = b, where A ∈ ℝ^{m×n}, x ∈ ℝⁿ and b ∈ ℝᵐ, does not have a solution. This means that b ∉ Im(A) = Col(A). So we find the nearest point, namely the orthogonal projection of b onto Im(A), and solve for that. A vector u with Au = Pr_{Im(A)}(b) is called a least squares solution of the system Ax = b. It's not actually a solution, but it's the next best thing.

Here's a trick for finding the least squares solution. It relies on the following observation: Im(A)⊥ = ker(Aᵀ). Now u is a least squares solution if and only if Au = Pr_{Im(A)}(b), that is, if and only if Au − b ∈ Im(A)⊥. Now Au − b ∈ Im(A)⊥ if and only if Aᵀ(Au − b) = 0, and this is true if and only if u is a solution of the consistent system AᵀAx = Aᵀb. So that's it: simply multiply your inconsistent (no solutions) equation Ax = b across by Aᵀ on the left, and solve the resulting consistent system.

3.2 Diagonalization and Spectral Theorem

De nition 3.2.1 There are special names given to the change of basis matrices between orthonormal bases in inner product spaces.

1. A ∈ ℝ^{n×n} is said to be orthogonal if AᵀA = Iₙ. That is, A is invertible and A⁻¹ = Aᵀ.
2. A ∈ ℂ^{n×n} is said to be unitary if ĀᵀA = Iₙ. That is, A is invertible and A⁻¹ = Āᵀ. We usually denote Āᵀ by A*.

It is easy to see that A is orthogonal (resp. unitary) if and only if its rows (and likewise its columns) form an orthonormal basis for ℝⁿ with the usual dot product (resp. ℂⁿ with the usual hermitian product).

De nition 3.2.2 Let V and W be inner product spaces, either both real or both complex. We say that T : V → W is an isometry if

- T is an isomorphism of vector spaces, and
- ⟨T(u), T(v)⟩ = ⟨u, v⟩ for all u, v ∈ V.

Lemma 3.2.3 Let T : V → W be a linear transformation of finite dimensional inner product spaces. Then the following are equivalent.

1. T is an isometry.
2. For any orthonormal basis u1, …, un for V, the set {T(u1), …, T(un)} is an orthonormal basis for W.
3. There exists an orthonormal basis u1, …, un for V such that the set {T(u1), …, T(un)} is an orthonormal basis for W.
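Numerically, the recipe of Remark 3.1.19 above is a one-liner. This Python/numpy sketch (illustrative only; the sample system is our own) forms the normal equations for an inconsistent system and checks that the residual lands in ker(Aᵀ):

```python
import numpy as np

# An inconsistent overdetermined system Ax = b (three equations, two unknowns)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

# Remark 3.1.19: multiply across by A^T and solve the consistent
# normal equations A^T A x = A^T b.
u = np.linalg.solve(A.T @ A, A.T @ b)

# The residual b - Au lies in Im(A)^perp = ker(A^T)
residual_in_kernel = np.allclose(A.T @ (b - A @ u), 0.0)

# Agrees with numpy's built-in least squares solver
u_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
agrees = np.allclose(u, u_lstsq)
print(u, residual_in_kernel, agrees)
```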
Proof. (1) ⇒ (2) is immediate, as is (2) ⇒ (3): just use G–S to obtain an orthonormal basis for V, and then apply (2). The work is in proving (3) ⇒ (1). Given any u, v ∈ V, we can write

u = α1 u1 + ⋯ + αn un,  v = β1 u1 + ⋯ + βn un,

where u1, …, un is our given orthonormal basis for V with the property that T(u1), …, T(un) is an orthonormal basis for W. Then we have

⟨T(u), T(v)⟩ = ⟨T(α1 u1 + ⋯ + αn un), T(β1 u1 + ⋯ + βn un)⟩
= ⟨Σᵢ αᵢ T(uᵢ), Σⱼ βⱼ T(uⱼ)⟩
= Σ_{i,j} αᵢ β̄ⱼ ⟨T(uᵢ), T(uⱼ)⟩
= Σ_{i,j} αᵢ β̄ⱼ ⟨uᵢ, uⱼ⟩
= ⟨Σᵢ αᵢ uᵢ, Σⱼ βⱼ uⱼ⟩ = ⟨u, v⟩,

and so we see that T is indeed an isometry. □

Corollary 3.2.4 These are all immediate corollaries.

1. T ∈ L(ℝⁿ) is an isometry with respect to the usual dot product if and only if its standard basis matrix representation [T]_st is orthogonal.
2. T ∈ L(ℂⁿ) is an isometry with respect to the usual hermitian product if and only if its standard basis matrix representation [T]_st is unitary.
3. Two finite dimensional real inner product spaces are isometric if and only if they have the same dimension.
4. Two finite dimensional complex inner product spaces are isometric if and only if they have the same dimension.

As promised, we shall go after a diagonalization theorem. First of all, we say what it means for an operator T ∈ L(V) on an inner product space to interact nicely with the inner product. We shall do this by first talking about the adjoint of an operator T ∈ L(V) on an inner product space.

De nition 3.2.5 Let T ∈ L(V) be an operator on an inner product space. The adjoint of T is an operator on V, denoted by T*, which is defined by

⟨T(u), v⟩ = ⟨u, T*(v)⟩ for all u, v ∈ V.

Lemma 3.2.6 The notion given above of the adjoint of an operator T on an inner product space V is well defined. Moreover, if T has matrix representative A with respect to some orthonormal basis for V, then T* has the matrix representative Āᵀ with respect to the same basis.

Proof. Let's show linearity of T*. If v1, v2 ∈ V, then we have

⟨u, T*(v1 + v2)⟩ = ⟨T(u), v1 + v2⟩ = ⟨T(u), v1⟩ + ⟨T(u), v2⟩ = ⟨u, T*(v1)⟩ + ⟨u, T*(v2)⟩ = ⟨u, T*(v1) + T*(v2)⟩,

holding for all u ∈ V. In particular, this holds for all u in some orthonormal basis β for V. This means that the
coefficients of T*(v1 + v2) with respect to β all agree with the coefficients of T*(v1) + T*(v2) with respect to β, since we've just shown that the complex conjugates of these coefficients agree. Thus T*(v1 + v2) = T*(v1) + T*(v2), and so T* respects addition. Likewise (exercise!) you can show that T* respects scalar multiplication. Thus T* is linear.

Finally, let β = u1, …, un be an orthonormal basis for V. Suppose that the matrix of T with respect to β is A. This means Aᵢⱼ = ⟨T(uⱼ), uᵢ⟩ (why?), and so if B is the matrix for T* we have

Bᵢⱼ = ⟨T*(uⱼ), uᵢ⟩ = ⟨uᵢ, T*(uⱼ)⟩̄ = ⟨T(uᵢ), uⱼ⟩̄ = Āⱼᵢ,

and so B = Āᵀ as required. □

De nition 3.2.7

- Say that a linear operator T : V → V on a real or complex inner product space is self-adjoint if ⟨T(u), v⟩ = ⟨u, T(v)⟩ for all u, v ∈ V. By the previous lemma/definition, this is just the same as saying that T is equal to its own adjoint: T = T*.
- Say that a matrix A ∈ K^{n×n} (K = ℝ or ℂ) is self-adjoint if A = A*. In the real case this becomes Aᵀ = A, and we call the matrix symmetric; in the complex case this is still Āᵀ = A, and we call the matrix hermitian.

Remark 3.2.8 Note that an operator T is self-adjoint if and only if its matrix with respect to an orthonormal basis is a self-adjoint matrix.

Here's our first cool theorem.

Theorem 3.2.9 Every self-adjoint operator T ∈ L(V) on a finite dimensional inner product space V has a real eigenvalue. In fact, all eigenvalues of T are real. Moreover, eigenspaces with distinct eigenvalues are orthogonal.

Proof. Since finite dimensional inner product spaces of the same dimension are isometric, it suffices to consider a self-adjoint operator on some ℂⁿ or ℝⁿ. Given a self-adjoint operator T on ℝⁿ, we can consider it as a self-adjoint operator on ℂⁿ. In this case we know that the characteristic polynomial of T factors as a product of linear terms. Thus there are eigenvalues. We just have to show that they all must be real. Let λ ∈ ℂ be an eigenvalue of T with nonzero eigenvector v. Then we have

λ⟨v, v⟩ = ⟨λv, v⟩ = ⟨T(v), v⟩ = ⟨v, T(v)⟩ = ⟨v, λv⟩ = λ̄⟨v, v⟩.

Now v ≠ 0 implies ⟨v, v⟩ ≠ 0 (positive definiteness), and so λ = λ̄.
Thus λ ∈ ℝ, and we're done. Finally, if u and v are eigenvectors of T corresponding to distinct eigenvalues λ and μ respectively, then we have

λ⟨u, v⟩ = ⟨λu, v⟩ = ⟨T(u), v⟩ = ⟨u, T(v)⟩ = ⟨u, μv⟩ = μ⟨u, v⟩.

Thus (λ − μ)⟨u, v⟩ = 0, and since λ ≠ μ we get ⟨u, v⟩ = 0. Done. □

Examples 3.2.10 Note that the assumption of finite dimensionality is crucial here. For example, the multiplication by x operator

M_x : C([a, b], ℂ) → C([a, b], ℂ),  f ↦ xf,  where (xf)(x) = x f(x) for all x ∈ [a, b],

is clearly self-adjoint with respect to the usual inner product

⟨f, g⟩ = ∫ₐᵇ f(x) g(x)̄ dx,

but does not have any eigenvalues.

Now we're ready for our main diagonalization theorem. It is one form of the spectral theorem.

Theorem 3.2.11 (Spectral Theorem I) Let T ∈ L(V) be a self-adjoint operator on a finite dimensional inner product space. Then V has an orthonormal basis of eigenvectors of T, with real eigenvalues.

Corollary 3.2.12 If A ∈ ℝ^{n×n} is symmetric, then there exists an orthogonal matrix P such that PAP⁻¹ = PAPᵀ is diagonal. If A ∈ ℂ^{n×n} is hermitian, then there exists a unitary matrix U such that UAU⁻¹ = UAU* is diagonal with real entries.

Proof of Theorem. The proof is by induction on n = dim_K(V). The case n = 1 is trivial. Suppose the theorem holds for self-adjoint operators on inner product spaces of dimension n − 1, and let V have dimension n. Now we know T has a nonzero eigenvector u with real eigenvalue λ. Let W be the one dimensional space spanned by u, and let W⊥ be its (n − 1) dimensional orthogonal complement. If v ∈ W⊥, then we have

⟨u, T(v)⟩ = ⟨T(u), v⟩ = ⟨λu, v⟩ = λ⟨u, v⟩ = 0,

and so T(v) ∈ W⊥. Thus the operator T restricts to W⊥ to give a self-adjoint operator on an (n − 1) dimensional inner product space. By the inductive hypothesis, we know that W⊥ has an orthonormal basis of eigenvectors of T|_{W⊥} with real eigenvalues. Adding in the vector u/‖u‖ gives an orthonormal basis for V which is comprised of eigenvectors of T with real eigenvalues. Done. □

Here's the more standard statement of the Spectral Theorem for self-adjoint operators.

Theorem 3.2.13 (Spectral Theorem) Let T ∈ L(V) be a self-adjoint operator on a finite dimensional inner product space V. Then there exist mutually orthogonal subspaces W1, …, Wk of V, together with real numbers λ1, …, λk, such that

T = Σᵢ₌₁ᵏ λᵢ Pr_{Wᵢ}  and  I = Σᵢ₌₁ᵏ Pr_{Wᵢ}.

Proof. Let the λᵢ be the eigenvalues of T, and let the Wᵢ be the corresponding eigenspaces. □
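The finite dimensional Spectral Theorem can be checked numerically: numpy's eigh routine diagonalizes a real symmetric (or complex hermitian) matrix, returning real eigenvalues and an orthonormal eigenbasis, from which we can rebuild A as Σᵢ λᵢ Pr_{Wᵢ}. A minimal sketch (Python/numpy, illustrative only; the matrix is our own choice):

```python
import numpy as np

# A real symmetric (self-adjoint) matrix
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# eigh is numpy's eigensolver for symmetric/hermitian matrices; it returns
# real eigenvalues and an orthonormal basis of eigenvectors (the columns of Q).
eigvals, Q = np.linalg.eigh(A)

# Rebuild A as sum_i lambda_i P_i and I as sum_i P_i,
# where P_i = outer(q_i, q_i) projects onto the line spanned by q_i.
recon = sum(lam * np.outer(q, q) for lam, q in zip(eigvals, Q.T))
ident = sum(np.outer(q, q) for q in Q.T)

spectral_ok = np.allclose(recon, A) and np.allclose(ident, np.eye(3))
eigvals_real = np.isrealobj(eigvals)
print(spectral_ok, eigvals_real)
```

Here each projector is rank one; grouping the qᵢ with equal eigenvalue recovers the eigenspace projections Pr_{Wᵢ} of the theorem.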
Remark 3.2.14 If you are just interested in diagonalization of operators on inner product spaces, and do not require that the eigenvalues be real, then we see that a necessary condition for diagonalization is that T should commute with its adjoint T*. In fact, this is also a sufficient condition for diagonalization of T. We call an operator T which satisfies this condition (TT* = T*T) a normal operator. The more general form of the Spectral Theorem then reads as follows.

Theorem 3.2.15 Let T be a normal operator on a finite dimensional complex inner product space, or a self-adjoint operator on a finite dimensional real inner product space. Then V has an orthonormal basis of eigenvectors of T.

Chapter 4

Miscellaneous Topics

4.1 Introduction to Linear Groups and Geometry

De nition 4.1.1 Let K be a field. The general linear group GL(n, K) of n × n matrices over K consists of the set of all invertible n × n matrices with entries in K, under multiplication. The special linear group SL(n, K) is the subgroup of GL(n, K) consisting of all matrices with determinant 1.

De nition 4.1.2 The projective special linear groups PSL(n, K) are defined by projectivization as follows: GIVE DEFINITION. What has this all got to do with projective geometry?

De nition 4.1.3 A linear group is a subgroup of the general linear group GL(n, K).

De nition 4.1.4 The classical groups consist of the orthogonal, unitary, and symplectic groups. These groups are defined as stabilizers of various elements of K^{n×n} under various actions of GL(n, K) or SL(n, K). First we consider the change of basis in a bilinear form action, which is defined by

GL(n, K) × K^{n×n} → K^{n×n},  (P, A) ↦ PᵀAP.

De nition 4.1.5 The orthogonal group O(n, K) is defined to be the stabilizer of the identity matrix I ∈ K^{n×n}:

O(n, K) = Stab(I) = {P ∈ GL(n, K) | PᵀP = I}.

The special orthogonal group SO(n, K) is just defined as SO(n, K) = SL(n, K) ∩ O(n, K).
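As a quick numerical aside (Python/numpy, not part of the notes; the helper names and sample matrices are our own), membership in O(n, ℝ) and SO(n, ℝ) can be tested directly from the definitions PᵀP = I and det P = 1; a plane rotation lies in SO(2), while a reflection is orthogonal with determinant −1:

```python
import numpy as np

def in_O(P, tol=1e-10):
    """Numerical membership test for O(n, R): P^T P = I."""
    return np.allclose(P.T @ P, np.eye(P.shape[0]), atol=tol)

def in_SO(P, tol=1e-10):
    """SO(n, R) = SL(n, R) ∩ O(n, R): orthogonal with determinant 1."""
    return in_O(P, tol) and np.isclose(np.linalg.det(P), 1.0, atol=tol)

theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])   # a plane rotation
refl = np.array([[1.0, 0.0],
                 [0.0, -1.0]])                      # a reflection: orthogonal, det -1

print(in_SO(rot), in_O(refl), in_SO(refl))
```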
Important cases are when K = ℝ or ℂ.

De nition 4.1.6 O(p, q, K) is the stabilizer of the signature (p, q) form I_{p,q} (the diagonal matrix with p entries equal to 1 followed by q entries equal to −1):

O(p, q, K) = Stab(I_{p,q}) = {P ∈ GL(p + q, K) | Pᵀ I_{p,q} P = I_{p,q}},

and SO(p, q, K) = O(p, q, K) ∩ SL(p + q, K). In the case K = ℝ, p = n and q = 1, we get the Lorentz groups, denoted by O(n, 1) for short.

De nition 4.1.7 The symplectic group Sp(2n, K) (usually K = ℝ or ℂ) is defined as

Sp(2n, K) = Stab(J) = {P ∈ GL(2n, K) | PᵀJP = J},  where  J = [ 0  Iₙ ; −Iₙ  0 ].

It is an exercise to see that symplectic matrices already have determinant 1, so we don't get anything new by intersecting with SL(2n, K).

For the next class of groups we shall restrict to the case K = ℂ and consider the action of GL(n, ℂ) on ℂ^{n×n} by the change of basis for hermitian forms:

GL(n, ℂ) × ℂ^{n×n} → ℂ^{n×n},  (P, A) ↦ P*AP,

where P* denotes the conjugate transpose of P.

De nition 4.1.8 The unitary groups U(n) are defined as

U(n) = Stab(I) = {P ∈ GL(n, ℂ) | P*P = I},

and the special unitary groups are defined as one would expect: SU(n) = U(n) ∩ SL(n, ℂ).

We shall discover some beautiful relationships between SU(2), SO(3), the 3-sphere S³ and the quaternions, and between PSL(2, ℝ), PSL(2, ℂ), SO(n, 1) and hyperbolic geometry, in later sections.
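To close, here is a small numerical sketch (Python/numpy, illustrative only; the sample matrices are our own choices) of the defining conditions for the unitary and symplectic groups. The unitary example has the form [a, b; −b̄, ā] with |a|² + |b|² = 1, which puts it in SU(2) and hints at the quaternion connection mentioned above:

```python
import numpy as np

# A unitary matrix: U* U = I. This one has determinant 1, so it lies in SU(2).
a, b = (1 + 2j) / 3, 2 / 3
U = np.array([[a, b],
              [-np.conj(b), np.conj(a)]])

unitary = np.allclose(U.conj().T @ U, np.eye(2))
special = np.isclose(np.linalg.det(U), 1.0)

# A symplectic matrix: P^T J P = J, where J = [[0, I], [-I, 0]]
n = 1
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
P = np.array([[2.0, 0.0],
              [0.0, 0.5]])          # diag(t, 1/t) is symplectic for 2n = 2
symplectic = np.allclose(P.T @ J @ P, J)
det_one = np.isclose(np.linalg.det(P), 1.0)   # symplectic matrices have det 1
print(unitary, special, symplectic, det_one)
```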