Linear Algebra (MATH 115A)

Class notes by Kaylin Wehner

About this Document

This 87-page set of class notes was uploaded by Kaylin Wehner on Friday, September 4, 2015. The notes belong to MATH 115A at the University of California - Los Angeles, taught by Staff in Fall. Since upload, they have received 28 views. For similar materials see /class/177816/math-115a-university-of-california-los-angeles in Mathematics (M) at the University of California - Los Angeles.


Date Created: 09/04/15
Determinant of triangular matrices. Prop: if T is upper triangular or lower triangular, then det T is the product of the diagonal entries of T: det T = t_11 t_22 ... t_nn. (In the permutation expansion, the identity is the only permutation that can contribute a nonzero term.)

Computing det A: for larger matrices one can do Gaussian elimination, keeping track of the row operations used, and reduce to a triangular matrix; each row swap flips the sign of the determinant.

Thm: det(AB) = det(A) det(B) for n x n matrices. Proof idea: fix B; the function A -> det(AB) satisfies the defining properties of the determinant, up to the constant factor det B, so by uniqueness it equals det(A) det(B).

If two rows of A are the same, then det A = 0. More generally, if the rows of A are linearly dependent (one row is a combination of the others), multilinearity plus the repeated-row rule give det A = 0. So det A != 0 iff rank A = n iff A is invertible. If A is invertible, then det(A^-1) det(A) = det(I) = 1, so det(A^-1) = 1/det(A).

Cor: if A = LU, with L lower triangular with 1s on the diagonal and U upper triangular, then det A = product of the diagonal entries of U (since det L = 1 by the formula for the determinant of a triangular matrix).

det(A^T) = det(A), so the column versions of these properties also hold.

Prop: for an orthogonal matrix A, det A = +-1. Pf: A^T A = I, so det(A^T) det(A) = (det A)^2 = det I = 1. An orthonormal basis q_1, ..., q_n is properly oriented (right-handed) if det(q_1 ... q_n) = +1; otherwise it is left-handed. Comment: who knew that bases of n-dimensional space had right and left hands?

Geometric interpretation: if v_1, ..., v_n are the rows of A, then |det A| is the volume of the parallelepiped spanned by v_1, ..., v_n. Proof idea: write A = QR with Q orthogonal (volume-preserving) and R triangular, and argue inductively on base times altitude.

Minors: a k x k minor of A is the determinant of a submatrix of A, gotten by choosing k rows and k columns of A.
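The elimination recipe above (reduce to triangular form, flip the sign on each row swap, then multiply the diagonal) can be sketched in Python. This is an illustrative sketch; `det_gauss` is a name of my own choosing, and exact arithmetic via `Fraction` is used so the tests are not muddied by rounding.

```python
from fractions import Fraction

def det_gauss(rows):
    """Determinant by Gaussian elimination: reduce to upper triangular
    form, flipping the sign once per row swap, then multiply the
    diagonal entries (exact arithmetic via Fraction)."""
    a = [[Fraction(x) for x in row] for row in rows]
    n = len(a)
    sign = 1
    for col in range(n):
        # find a pivot row; an all-zero column means det = 0
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign          # each row swap flips the sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]         # det of a triangular matrix
    return result
```

For a triangular input the loop does no work and the function just multiplies the diagonal, matching the proposition above.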
Minors and rank. Theorem: let A be an m x n matrix of rank r. Then (1) some r x r minor of A is nonzero, and (2) all larger minors of A are zero; i.e. rank = size of the maximal nonzero minor. For example, for a rank-3 matrix, any 4 x 4 submatrix has a row dependence, hence determinant 0. Switching two rows or columns does not change the size of the maximal nonzero minor; replacing a row r_i by r_i + c r_j (or the analogous column operation) does not change it; multiplying a row or column by a nonzero constant does not affect it. These operations carry any A to a reduced form, which proves the theorem.

Cofactor expansion. From the permutation formula for the determinant: for any row i, det A = sum over j of a_ij C_ij, where C_ij = (-1)^(i+j) det(A_ij) is the (i, j) cofactor and A_ij is the submatrix of A obtained by deleting row i and column j. In fact, if k != i, then the mixed sum over j of a_kj C_ij is 0. Example: a 3 x 3 determinant can be expanded along the first row, or along a column.

Example: for a 2 x 2 matrix A = (a_11 a_12; a_21 a_22), the cofactors give the inverse formula A^-1 = 1/(a_11 a_22 - a_12 a_21) * (a_22 -a_12; -a_21 a_11).

Cramer's rule. To solve Ax = b when det A != 0: x_i = det(B_i)/det(A), where B_i is A with the i-th column replaced by b. Note: this is a cool formula, but not a good way to solve a large system of linear equations.

Det for linear transformations. Let L: V -> V be a linear transformation, A the matrix of L for one basis and A' the matrix of L for another; then A' = B^-1 A B for a change-of-basis matrix B, so det A' = det A. So define det L := det A, for any matrix A of L.

Two-person zero-sum games, e.g. rock, paper, scissors: rock beats scissors, scissors beats paper, paper beats rock; winning scores one point, losing scores minus one point, from each player's point of view. Each player picks a strategy simultaneously and then the plays are revealed. We believe the best course of action is
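Cramer's rule as stated above is easy to code for a small system. This is a sketch with hypothetical helper names (`det3`, `cramer_solve`), hard-coded to the 3 x 3 case via the first-row cofactor expansion from the notes.

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cramer_solve(a, b):
    """Solve a 3x3 system Ax = b via Cramer's rule: x_i = det(B_i)/det(A),
    where B_i is A with column i replaced by b.  Fine for tiny systems,
    but (as the notes warn) not a good method for large ones."""
    d = det3(a)
    if d == 0:
        raise ValueError("det A = 0: Cramer's rule does not apply")
    xs = []
    for i in range(3):
        bi = [row[:] for row in a]
        for r in range(3):
            bi[r][i] = b[r]       # replace column i by the right-hand side
        xs.append(det3(bi) / d)
    return xs
```

For large systems Gaussian elimination is far cheaper, since Cramer's rule needs n + 1 determinants.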
to play a strategy at random, with all probabilities equal.

Rows and columns of an orthogonal matrix. Let Q be an n x n matrix with columns q_1, ..., q_n. The columns of Q are orthonormal iff Q^T Q = I iff Q is orthogonal. For an orthogonal matrix, Q^-1 = Q^T, and then Q Q^T = I as well, so the rows of Q are also orthonormal. Example: the rotation matrix Q = (cos t, -sin t; sin t, cos t).

Bookkeeping fact about rows and columns of a product AB: column j of AB is A times (column j of B); row i of AB is (row i of A) times B; and entry (i, j) of AB is the product of row i of A with column j of B, i.e. (AB)_ij = sum over k of a_ik b_kj.

QR factorization of a square matrix. Let A be an n x n invertible matrix. Theorem: we can write A = QR, with Q orthogonal and R upper triangular with positive diagonal entries, and this can be done uniquely. (One can still compute a QR factorization when A is not invertible, but then the proof is more subtle.) Proof: this is really just a matrix way of saying Gram-Schmidt works. Let v_1, ..., v_n be the columns of A; A is invertible, so they form a basis. Do Gram-Schmidt to get an orthonormal basis u_1, ..., u_n with span(u_1, ..., u_j) = span(v_1, ..., v_j) for each j. Let Q := (u_1 ... u_n), with the u_j as columns. Then R := Q^T A has entries r_ij = u_i . v_j, which is 0 for i > j and positive for i = j, so R is upper triangular with positive diagonal entries, and QR = Q Q^T A = A.

Uniqueness of the QR decomposition. Suppose Q_1 R_1 = A = Q_2 R_2 with the diagonal entries of the R's strictly positive. Then Q_2^T Q_1 = R_2 R_1^-1; the left side is orthogonal, and the right side is upper triangular with positive diagonal. Working down the diagonal, an orthogonal upper triangular matrix with positive diagonal must be the identity, so Q_1 = Q_2 and R_1 = R_2.

Geometry of upper triangular matrices. For upper triangular R with columns r_1, ..., r_n, the parallelepiped spanned by the columns has base spanned by the first k-1 columns and altitude r_kk at each stage, so its volume is the product of the diagonal entries of R. Geometric interpretation of the QR decomposition: A = QR, and Q preserves lengths and angles, hence volumes; so the volume of the parallelepiped spanned by the columns of A equals the volume for R, which is the product of the diagonal entries of R, i.e. |det A|.
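The Gram-Schmidt construction in the proof above can be sketched directly in code. This is a minimal sketch, assuming the input matrix is invertible (columns linearly independent); `qr_gram_schmidt` is an illustrative name, and columns are passed as plain Python lists.

```python
import math

def qr_gram_schmidt(cols):
    """QR via classical Gram-Schmidt, following the proof sketch:
    orthonormalize the columns v_1..v_n of A to get the columns u_1..u_n
    of Q; then R = Q^T A is upper triangular with positive diagonal.
    Assumes A is invertible."""
    def dot(x, y):
        return sum(a*b for a, b in zip(x, y))
    us = []
    for v in cols:
        w = list(v)
        for u in us:              # subtract projections onto earlier u's
            c = dot(u, v)
            w = [wi - c*ui for wi, ui in zip(w, u)]
        norm = math.sqrt(dot(w, w))
        us.append([wi / norm for wi in w])
    n = len(cols)
    # r[i][j] = u_i . v_j : zero below the diagonal, positive on it
    r = [[dot(us[i], cols[j]) if j >= i else 0.0 for j in range(n)]
         for i in range(n)]
    return us, r
```

In floating point the classical version can lose orthogonality; production code uses the modified Gram-Schmidt or Householder variants instead.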
Determinants. Let A be an n x n matrix. What condition on A is equivalent to A being invertible? Try 2 x 2 by Gaussian elimination: A = (a_11 a_12; a_21 a_22). If a_11 != 0, eliminating gives (a_11, a_12; 0, a_22 - a_21 a_12 / a_11), which is invertible iff a_22 - a_21 a_12 / a_11 != 0, i.e. iff a_11 a_22 - a_12 a_21 != 0; if a_11 = 0, the same condition works out. So for a 2 x 2 matrix define det A := a_11 a_22 - a_12 a_21.

Determinant of an n x n matrix: we want to do the same for general n. Alternate interpretation: |det A| should be the area (n = 2) or volume (n = 3) of the parallelogram or parallelepiped spanned by the rows A_1, ..., A_n; this suggests |det A| <= ||A_1|| ... ||A_n||, with equality when the rows are orthogonal. For 3 x 3: det A = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 - a_13 a_22 a_31 - a_11 a_23 a_32 - a_12 a_21 a_33.

Theorem: there is one and only one function det from n x n matrices to R satisfying the following properties: (1) det I = 1; (2) if A' is obtained from A by switching two rows, then det A' = -det A; (3) det is linear in the first row, keeping the other rows constant (and, keeping the other rows fixed, separately linear in each row); (4) if two rows of A are the same, then det A = 0. (It will take a while to prove this.) Example, the 2 x 2 case: expanding each row in the standard basis and using linearity gives four terms, two of which vanish by (4), leaving a_11 a_22 det(e_1; e_2) + a_12 a_21 det(e_2; e_1) = a_11 a_22 - a_12 a_21.

Notation: if A has rows v_1, ..., v_n, write det A = det(v_1, ..., v_n). Expanding each row in the standard basis, v_i = sum over j of a_ij e_j, multilinearity gives det A = sum over all choices (j_1, ..., j_n) of a_1j_1 ... a_nj_n det(e_j_1, ..., e_j_n). If j_1, ..., j_n are not all different, there is a repeated row, so that term is 0. So the sum runs over (j_1, ..., j_n) all distinct, i.e. over permutations s of {1, ..., n}: det A = sum over s of a_1s(1) ... a_ns(n) det(e_s(1), ..., e_s(n)).

We are now almost done: we can move (e_s(1), ..., e_s(n)) to (e_1, ..., e_n) by switching rows a certain number of times, say k switches; then det(e_s(1), ..., e_s(n)) = (-1)^k det I = (-1)^k. Call this sign sgn(s) := (-1)^k. Problem: maybe there is one way of moving s to the identity that takes an even number of switches, and another that takes an odd number; we must show sgn is well defined. Granting that, det A = sum over permutations s of sgn(s) a_1s(1) ... a_ns(n).
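The permutation expansion just derived can be run directly, though it costs n! terms and so is only instructive for small n. The names `sgn` and `det_perm` are my own; counting inversions is used here as one concrete way to compute the sign of a permutation.

```python
from itertools import permutations

def sgn(p):
    """Sign of a permutation (a tuple of 0..n-1), via inversion parity:
    each inversion can be undone by one adjacent switch."""
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_perm(a):
    """det A = sum over permutations p of sgn(p) * a[0][p(0)] * ... *
    a[n-1][p(n-1)], exactly the expansion derived above."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = sgn(p)
        for i in range(n):
            term *= a[i][p[i]]
        total += term
    return total
```

For n = 2 this reproduces a_11 a_22 - a_12 a_21, and for a triangular matrix only the identity permutation survives.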
An even permutation can be reached from the identity by an even number of switches, and an odd permutation by an odd number; granted that no permutation is both, sgn is well defined, and any function det satisfying (1)-(4) must have the form above. So now we need to know more about permutations.

Permutations. A permutation of {1, ..., n} is a rearrangement of {1, ..., n}: a map s from {1, ..., n} to {1, ..., n} under which each element i is moved to s(i), with s a bijection, so that (s(1), ..., s(n)) is just (1, ..., n) in some order. The composition st of two permutations, (st)(i) := s(t(i)), is again a permutation, as is the inverse s^-1; the identity permutation id has id(i) = i for all i, and s s^-1 = s^-1 s = id.

Orbit of an element. If s is a permutation of {1, ..., n}, the orbit of i under s is the set {i, s(i), s^2(i), ...}. Example: if a permutation sends 1 -> 3 -> 5 -> 1 and 2 -> 4 -> 2, then the orbit of 1 (and of 3, and of 5) is {1, 3, 5}, and the orbit of 2 (and of 4) is {2, 4}.

Lemma. For any permutation s of {1, ..., n}: (1) if O_i is the orbit of i and O_j is the orbit of j, then either O_i = O_j or O_i and O_j are disjoint, so the orbits partition {1, ..., n}; (2) if the orbit of i contains exactly k elements, then O_i = {i, s(i), ..., s^(k-1)(i)} and s^k(i) = i, i.e. s cycles around the orbit; (3) s is the composition of the disjoint cycles determined by its orbits. Proof sketch: since {1, ..., n} is finite, we must have s^p(i) = s^q(i) for some p > q >= 0; applying s^-1 q times gives s^(p-q)(i) = i, so the orbit is finite and s returns to its starting point; the remaining claims follow by a similar argument.

Math 115A - Week 1

Textbook sections: 1.1-1.6

Topics covered:
What is Linear algebra?
Overview of course
What is a vector? What is a vector space?
Examples of vector spaces
Vector subspaces
Span, linear dependence, linear independence
Systems of linear equations
Bases

*****

What is Linear algebra?

This course is an introduction to Linear algebra. Linear algebra is the study of linear transformations and their algebraic properties.

A transformation is any operation that transforms an input to an output. A transformation is linear if (a) every amplification of the input causes a corresponding amplification of the output (e.g. doubling of the input causes a doubling of the output), and (b) adding inputs together leads to adding of their respective outputs. (We'll be more precise about this much later in the course.)

A simple example of a linear transformation is the map y := 3x, where the input x is a real number, and the output y is also a real number. Thus, for instance, in this example an input of 5 units causes an output of 15 units. Note that a doubling of the input causes a doubling of the output, and if one adds two inputs together (e.g. add a 3-unit input with a 5-unit input to form an 8-unit input) then the respective outputs (9-unit and 15-unit outputs in this example) also add together (to form a 24-unit output). Note also that the graph of this linear transformation is a straight line, which is where the term "linear" comes from. (Footnote: I use the symbol := to mean "is defined as", as opposed to the symbol =, which means "is equal to". It's similar to the distinction between the symbols = and == in computer languages such as C, or the distinction between causation and correlation. In many texts one does not make this distinction, and uses the symbol = to denote both. In practice, the distinction is too fine to be really important, so you can ignore the colons and read := as = if you want.)

An example of a non-linear transformation is the map y := x^2; note now that doubling the input leads to quadrupling the output. Also, if one adds two inputs together, their outputs do not add: e.g. a 3-unit input has a 9-unit output, and a 5-unit input has a 25-unit output, but a combined 3+5-unit input does not have a 9+25 = 34-unit output, but rather a 64-unit output! Note that the graph of this transformation is very much non-linear.

In real life, most transformations are non-linear; however, they can often be approximated accurately by a linear transformation. Indeed, this is the whole point of differential calculus: one takes a non-linear function and approximates it by a tangent line, which is a
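The two linearity tests described in the text (amplification carries over; outputs add) can be spot-checked numerically. This is a sketch with a name of my own choosing; passing the check on sample inputs does not prove linearity, but failing it disproves it, as the map y = x^2 does below.

```python
def is_linear_on_samples(f, inputs):
    """Spot-check the two linearity properties on sample inputs:
    scaling (here, doubling) and additivity."""
    for x in inputs:
        if f(2 * x) != 2 * f(x):          # amplification must carry over
            return False
    for x in inputs:
        for y in inputs:
            if f(x + y) != f(x) + f(y):   # outputs must add
                return False
    return True
```

With the text's numbers: for f(x) = 3x, the 3-unit and 5-unit inputs give 9- and 15-unit outputs that add to the 24-unit output of the 8-unit input; for f(x) = x^2, the combined input gives 64, not 34.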
linear function. This is advantageous because linear transformations are much easier to study than non-linear transformations.

In the examples given above, both the input and output were scalar quantities: they were described by a single number. However, in many situations, the input or the output (or both) is not described by a single number, but rather by several numbers; in this case the input (or output) is not a scalar, but instead a vector. (This is a slight oversimplification: more exotic examples of input and output are also possible when the transformation is non-linear.)

A simple example of a vector-valued linear transformation is given by Newton's second law F = ma, or equivalently a = F/m. One can view this law as a statement that a force F applied to an object of mass m causes an acceleration a, equal to a := F/m; thus F can be viewed as an input and a as an output. Both F and a are vectors: if, for instance, F is equal to 15 Newtons in the East direction plus 6 Newtons in the North direction (i.e. F = (15, 6) N), and the object has mass m = 3 kg, then the resulting acceleration is the vector a = (5, 2) m/s^2 (i.e. 5 m/s^2 in the East direction plus 2 m/s^2 in the North direction). Observe that even though the input and outputs are now vectors in this example, this transformation is still linear (as long as the mass stays constant): doubling the input force still causes a doubling of the output acceleration, and adding two forces together results in adding the two respective accelerations together.

One can write Newton's second law in co-ordinates. If we are in three dimensions, so that F = (F_x, F_y, F_z) and a = (a_x, a_y, a_z), then the law can be written as

F_x = m a_x + 0 a_y + 0 a_z
F_y = 0 a_x + m a_y + 0 a_z
F_z = 0 a_x + 0 a_y + m a_z.

This linear transformation is associated to the matrix

( m 0 0 )
( 0 m 0 )
( 0 0 m ).

Here is another example of a linear transformation with vector inputs and vector outputs:

y_1 = 3 x_1 + 5 x_2 + 7 x_3
y_2 = 2 x_1 + 4 x_2 + 6 x_3;

this linear transformation corresponds to the matrix

( 3 5 7 )
( 2 4 6 ).

As it turns out, every linear transformation corresponds to a matrix, although if one
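Applying a matrix to a vector, y_i = sum over j of m_ij x_j, is a one-liner; the sketch below (with an illustrative name) reproduces both the 2 x 3 example and the Newton's-law example from the text.

```python
def apply_matrix(m, v):
    """y_i = sum_j m[i][j] * v[j]: apply a matrix, stored as a list of
    rows, to a vector stored as a list."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]
```

With m = 3 kg, feeding the acceleration (5, 2, 0) m/s^2 through the diagonal mass matrix returns the force (15, 6, 0) N, matching the worked example.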
wants to split hairs, the two concepts are not quite the same thing: linear transformations are to matrices as concepts are to words; different languages can encode the same concept using different words. We'll discuss linear transformations and matrices much later in the course.

Linear algebra is the study of the algebraic properties of linear transformations and matrices. Algebra is concerned with how to manipulate symbolic combinations of objects, and how to equate one such combination with another, e.g. how to simplify an expression such as (x - 3)(x + 5). In linear algebra we shall manipulate not just scalars, but also vectors, vector spaces, matrices, and linear transformations. These manipulations will include familiar operations such as addition, multiplication, and reciprocal (multiplicative inverse), but also new operations such as span, dimension, transpose, determinant, trace, eigenvalue, eigenvector, and characteristic polynomial. (Algebra is distinct from other branches of mathematics such as combinatorics, which is more concerned with counting objects than equating them, or analysis, which is more concerned with estimating and approximating objects and obtaining qualitative rather than quantitative properties.)

*****

Overview of course

Linear transformations and matrices are the focus of this course. However, before we study them, we first must study the more basic concepts of vectors and vector spaces; this is what the first two weeks will cover. You will have had some exposure to vectors in 32AB and 33A, but we will need to review this material in more depth; in particular we concentrate much more on concepts, theory, and proofs than on computation. One of our main goals here is to understand how a small set of vectors (called a basis) can be used to describe all other vectors in a vector space, thus giving rise to a co-ordinate system for that vector space.

In weeks 3-5, we will study linear transformations and their co-ordinate representation in terms of matrices. We will study how to
multiply two transformations (or matrices), as well as the more difficult question of how to invert a transformation (or matrix). The material from weeks 1-5 will then be tested in the midterm for the course.

After the midterm, we will focus on matrices. A general matrix or linear transformation is difficult to visualize directly; however, one can understand them much better if they can be diagonalized. This will force us to understand various statistics associated with a matrix, such as determinant, trace, characteristic polynomial, eigenvalues, and eigenvectors; this will occupy weeks 6-8.

In the last three weeks we will study inner product spaces, which are a fancier version of vector spaces. (Vector spaces allow you to add and scalar-multiply vectors; inner product spaces also allow you to compute lengths, angles, and inner products.) We then review the earlier material on bases using inner products, and begin the study of how linear transformations behave on inner product spaces. (This study will be continued in 115B.)

Much of the early material may seem familiar to you from previous courses, but I definitely recommend that you still review it carefully, as this will make the more difficult later material much easier to handle.

*****

What is a vector? What is a vector space?

We now review what a vector is, and what a vector space is. First, let us recall what a scalar is.

Informally, a scalar is any quantity which can be described by a single number. An example is mass: an object has a mass of m kg for some real number m. Other examples of scalar quantities from physics include charge, density, speed, length, time, energy, temperature, volume, and pressure. In finance, scalars would include money, interest rates, prices, and volume. (You can think up examples of scalars in chemistry, EE, mathematical biology, or many other fields.)

The set of all scalars is referred to as the field of scalars; it is usually just R (the field of real numbers), but occasionally one likes to work with other fields such as C (the field of
complex numbers), or Q (the field of rational numbers). However, in this course the field of scalars will almost always be R. (In the textbook the scalar field is often denoted F, just to keep aside the possibility that it might not be the reals R, but I will not bother trying to make this distinction.)

Any two scalars can be added, subtracted, or multiplied together to form another scalar. Scalars obey various rules of algebra; for instance, x + y is always equal to y + x, and x * (y + z) is equal to x*y + x*z.

Now we turn to vectors and vector spaces. Informally, a vector is any member of a vector space; a vector space is any class of objects which can be added together, or multiplied with scalars. (A more popular, but less mathematically accurate, definition of a vector is "any quantity with both direction and magnitude". This is true for some common kinds of vectors, most notably physical vectors, but is misleading or false for other kinds.) As with scalars, vectors must obey certain rules of algebra. Before we give the formal definition, let us first recall some familiar examples.

The vector space R^2 is the space of all vectors of the form (x, y), where x and y are real numbers. In other words, R^2 := {(x, y) : x, y in R}. For instance, (-4, 3.5) is a vector in R^2. One can add two vectors in R^2 by adding their components separately; thus for instance (1, 2) + (3, 4) = (4, 6). One can multiply a vector in R^2 by a scalar by multiplying each component separately; thus for instance 3 * (1, 2) = (3, 6). Among all the vectors in R^2 is the zero vector (0, 0). Vectors in R^2 are used for many physical quantities in two dimensions; they can be represented graphically by arrows in a plane, with addition represented by the parallelogram law and scalar multiplication by dilation.

The vector space R^3 is the space of all vectors of the form (x, y, z), where x, y, z are real numbers: R^3 := {(x, y, z) : x, y, z in R}. Addition and scalar multiplication proceed similarly to R^2: (1, 2, 3) + (4, 5, 6) = (5, 7, 9), and 4 * (1, 2, 3) = (4, 8, 12). However, addition of a vector in R^2 to a vector in R^3 is undefined; (1, 2) + (3, 4, 5) doesn't make sense. Among all the vectors in R^3 is the
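The componentwise rules just stated can be sketched as code. A minimal sketch, with names of my own choosing; note how the dimension check makes R^2 + R^3 addition fail, as in the text.

```python
def vec_add(v, w):
    """Componentwise addition; only defined when the tuples have the
    same length (a vector in R^2 plus a vector in R^3 is undefined)."""
    if len(v) != len(w):
        raise ValueError("cannot add vectors of different dimensions")
    return tuple(vi + wi for vi, wi in zip(v, w))

def scal_mul(c, v):
    """Scalar multiplication, component by component."""
    return tuple(c * vi for vi in v)
```

The same two functions work unchanged for R^2, R^3, or any R^n, which is the point of the n-tuple definition coming up below.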
zero vector (0, 0, 0). Vectors in R^3 are used for many physical quantities in three dimensions, such as velocity, momentum, current, electric and magnetic fields, force, acceleration, and displacement; they can be represented by arrows in space.

One can similarly define the vector spaces R^4, R^5, etc. Vectors in these spaces are not often used to represent physical quantities, and are more difficult to represent graphically, but are useful for describing populations in biology, portfolios in finance, or many other types of quantities which need several numbers to describe them completely.

*****

Definition of a vector space

Definition. A vector space is any collection V of objects (called vectors) for which two operations can be performed:

Vector addition, which takes two vectors v and w in V and returns another vector v + w in V. (Thus V must be closed under addition.)

Scalar multiplication, which takes a scalar c in R and a vector v in V, and returns another vector cv in V. (Thus V must be closed under scalar multiplication.)

Furthermore, for V to be a vector space, the following properties must be satisfied:

I. (Addition is commutative) For all v, w in V, v + w = w + v.
II. (Addition is associative) For all u, v, w in V, u + (v + w) = (u + v) + w.
III. (Additive identity) There is a vector 0 in V, called the zero vector, such that 0 + v = v for all v in V.
IV. (Additive inverse) For each vector v in V, there is a vector -v in V, called the additive inverse of v, such that -v + v = 0.
V. (Multiplicative identity) The scalar 1 has the property that 1v = v for all v in V.
VI. (Multiplication is associative) For any scalars a, b in R and any vector v in V, we have a(bv) = (ab)v.
VII. (Multiplication is linear) For any scalar a in R and any vectors v, w in V, we have a(v + w) = av + aw.
VIII. (Multiplication distributes over addition) For any scalars a, b in R and any vector v in V, we have (a + b)v = av + bv.

*****

(Not very important) Remarks

The number of properties listed is long, but they can be summarized briefly as: the laws of algebra work. They are all eminently reasonable: one would not want
to work with vectors for which v + w != w + v, for instance. Verifying all the vector space axioms seems rather tedious, but later we will see that in most cases we don't need to verify all of them.

Because addition is associative (axiom II), we will often write expressions such as u + v + w without worrying about which order the vectors are added in. Similarly, from axiom VI we can write things like abv. We also write v - w as shorthand for v + (-w).

A philosophical point: we never say exactly what vectors are, only what vectors do. This is an example of abstraction, which appears everywhere in mathematics, but especially in algebra: the exact substance of an object is not important, only its properties and functions. (For instance, when using the number "three" in mathematics, it is unimportant whether we refer to three rocks, three sheep, or whatever; what is important is how to add, multiply, and otherwise manipulate these numbers, and what properties these operations have.) This is tremendously powerful: it means that we can use a single theory (linear algebra) to deal with many very different subjects (physical vectors, population vectors in biology, portfolio vectors in finance, probability distributions in probability, functions in analysis, etc.). A similar philosophy underlies "object-oriented programming" in computer science. Of course, even though vector spaces can be abstract, it is often very helpful to keep concrete examples of vector spaces such as R^2 and R^3 handy, as they are of course much easier to visualize. For instance, even when dealing with an abstract vector space we shall often still just draw arrows in R^2 or R^3, mainly because our blackboards don't have all that many dimensions.

Because we chose our field of scalars to be the field of real numbers R, these vector spaces are known as real vector spaces, or vector spaces over R. Occasionally people use other fields, such as complex numbers C, to define the scalars, thus creating complex vector spaces (or vector spaces over C), etc. Another interesting choice is to use
functions instead of numbers as scalars; for instance, one could have an indeterminate x, and let things like 4x^3 + 2x^2 + 5 be scalars, and (4x^3 + 2x^2 + 5, x^4 - 4) be vectors. We will stick almost exclusively with the real scalar field in this course, but because of the abstract nature of this theory, almost everything we say in this course works equally well for other scalar fields.

A pedantic point: the zero vector is often denoted 0, but technically it is not the same as the zero scalar 0. But in practice there is no harm in confusing the two objects: zero of one thing is pretty much the same as zero of any other thing.

*****

Examples of vector spaces

n-tuples as vectors. For any integer n >= 1, the vector space R^n is defined to be the space of all n-tuples of reals (x_1, x_2, ..., x_n). These are ordered n-tuples, so for instance (3, 4) is not the same as (4, 3); two vectors (x_1, x_2, ..., x_n) and (y_1, y_2, ..., y_n) are only equal if x_1 = y_1, x_2 = y_2, ..., and x_n = y_n. Addition of vectors is defined by (x_1, x_2, ..., x_n) + (y_1, y_2, ..., y_n) := (x_1 + y_1, x_2 + y_2, ..., x_n + y_n), and scalar multiplication by c(x_1, x_2, ..., x_n) := (cx_1, cx_2, ..., cx_n). The zero vector is (0, 0, ..., 0), and the additive inverse is given by -(x_1, x_2, ..., x_n) := (-x_1, -x_2, ..., -x_n).

A typical use of such a vector is to count several types of objects. For instance, a simple ecosystem consisting of X units of plankton, Y units of fish, and Z whales might be represented by the vector (X, Y, Z). Combining two ecosystems together would then correspond to adding the two vectors; natural population growth might correspond to multiplying the vector by some scalar corresponding to the growth rate. (More complicated operations, dealing with how one species impacts another, would probably be dealt with via matrix operations, which we will come to later.) As one can see, there is no reason for n to be restricted to two or three dimensions.

The vector space axioms can be verified for R^n, but it is tedious to do so. We shall just verify one axiom here, axiom VIII: (a + b)v = av + bv. We can write the vector v in the form v := (x_1, x_2, ..., x_n). The left-hand side is then (a + b)v = ((a + b)x_1, (a + b)x_2, ..., (a + b)x_n), while the right-hand
side is av + bv = a(x_1, x_2, ..., x_n) + b(x_1, x_2, ..., x_n) = (ax_1, ax_2, ..., ax_n) + (bx_1, bx_2, ..., bx_n) = (ax_1 + bx_1, ax_2 + bx_2, ..., ax_n + bx_n), and the two sides match since (a + b)x_j = ax_j + bx_j for each j = 1, 2, ..., n. There are, of course, other things we can do with R^n, such as taking dot products, lengths, angles, etc., but those operations are not common to all vector spaces, and so we do not discuss them here.

Scalars as vectors. The scalar field R can itself be thought of as a vector space; after all, it has addition and scalar multiplication. It is essentially the same space as R^1. However, this is a rather boring vector space, and it is often confusing (though technically correct) to refer to scalars as a type of vector. Just as R^2 represents vectors in a plane, and R^3 represents vectors in space, R^1 represents vectors in a line.

The zero vector space. Actually, there is an even more boring vector space than R^1: the zero vector space R^0 (also called {0}), consisting solely of a single vector 0, the zero vector, which is also sometimes denoted () in this context. Addition and multiplication are trivial: 0 + 0 = 0 and c0 = 0. The space R^0 represents vectors in a point. Although this space is utterly uninteresting, it is necessary to include it in the pantheon of vector spaces, just as the number zero is required to complete the set of integers.

Complex numbers as vectors. The space C of complex numbers can be viewed as a vector space over the reals; one can certainly add two complex numbers together, or multiply a complex number by a real scalar, with all the laws of arithmetic holding. Thus, for instance, 3 + 2i would be a vector, and an example of scalar multiplication would be 5(3 + 2i) = 15 + 10i. This space is very similar to R^2, although complex numbers enjoy certain operations, such as complex multiplication and complex conjugation, which are not available to vectors in R^2.

Polynomials as vectors I. For any n >= 0, let P_n(R) denote the vector space of all polynomials of one indeterminate variable x whose degree is at most n. Thus, for instance, P_3(R) contains the vectors x^3 + 2x^2 + 4, x^2 - 4, -1.5x^3 + 2.5x + pi, sqrt(3), 0, but not x^4 + x + 1, sqrt(x), sin(x), or e^x. Addition, scalar multiplication, and additive inverse are defined in the standard manner; thus for instance (x^3 + 2x^2 + 4) + (-x^3 + x^2 + 4) = 3x^2 + 8, and 3(x^3 + 2x^2 + 4) = 3x^3 + 6x^2 + 12. The zero vector is just 0. Notice that in this example it does not really matter what x is. The space P_n(R) is very similar to the vector space R^(n+1); indeed, one can match one to the other by the pairing a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 <-> (a_n, a_(n-1), ..., a_1, a_0); thus for instance, in P_3(R), the polynomial x^3 + 2x^2 + 4 would be associated with the 4-tuple (1, 2, 0, 4). (The more precise statement here is that P_n(R) and R^(n+1) are isomorphic vector spaces; more on this later.) However, the two spaces are still different; for instance, we can do certain operations in P_n(R), such as differentiate with respect to x, which do not make much sense for R^(n+1). Notice that we allow the polynomials to have degree less than n; if we only allowed polynomials of degree exactly n, then we would not have a vector space, because the sum of two vectors would not necessarily be a vector (the sum computed above of two degree-3 polynomials had degree only 2). In other words, such a space would not be closed under addition.

Polynomials as vectors II. Let P(R) denote the vector space of all polynomials of one indeterminate variable x, regardless of degree. (In other words, P(R) is the union of all the P_n(R) over n >= 0.) Thus this space in particular contains the monomials 1, x, x^2, x^3, x^4, ..., though of course it contains many other vectors as well. This space is much larger than any of the P_n(R), and is not isomorphic to any of the standard vector spaces R^n. Indeed, it is an infinite-dimensional space: there are infinitely many "independent" vectors in this space. (More on this later.)

Functions as vectors I. Why stick to polynomials? Let C(R) denote the vector space of all continuous functions of one real variable x; thus this space includes as vectors such objects as x^4 + x + 1, sin(x), e^x, x^3 - 3 + sin(x), |x|. One still has addition and scalar multiplication: (sin(x) + e^x + 3) + (pi - sin(x) + x^3) = x^3 + e^x + 3 + pi, and 5(sin(x) + e^x) = 5 sin(x) + 5 e^x, and all the laws of vector spaces still hold. This space is substantially larger than P(R), and is another example of an infinite-dimensional vector space.

Functions as vectors II. In the previous example, the real variable x could range over all the real line R. However, we could instead restrict the real variable to some smaller set, such as the interval [0, 1], and just consider the vector space C([0, 1]) of continuous functions on [0, 1]. This would include such vectors as x^4 + x + 1, sin(x), and e^x. This looks very similar to C(R), but this space is a bit smaller, because more functions are equal: for instance, the functions x and |x| are the same vector in C([0, 1]), even though they are different vectors in C(R).

Functions as vectors III. Why stick to continuous functions? Let f(R -> R) denote the space of all functions of one real variable, regardless of whether they are continuous or not. In addition to all the vectors in C(R), the space f(R -> R) contains many strange objects, such as the function f defined by f(x) := 1 if x is in Q, and f(x) := 0 if x is not in Q. This space is much, much larger than C(R); it is also infinite-dimensional, but it is in some sense "more infinite" than C(R). (More precisely, the dimension of C(R) is countably infinite, but the dimension of f(R -> R) is uncountably infinite. Further discussion is beyond the scope of this course, but see Math 112.)

Functions as vectors IV. Just as the vector space C(R) of continuous functions can be restricted to smaller sets, the space f(R -> R) can also be restricted. For any subset S of the real line, let f(S -> R) denote
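The pairing between P_n(R) and R^(n+1) described above can be made concrete by storing a polynomial as its coefficient tuple (a_n, ..., a_1, a_0). A minimal sketch, with names of my own choosing; `poly_eval` illustrates an operation the polynomial space has that a bare tuple does not come with.

```python
def poly_add(p, q):
    """Add two polynomials in P_n(R) stored as coefficient tuples
    (a_n, ..., a_1, a_0), mirroring addition in R^(n+1)."""
    return tuple(a + b for a, b in zip(p, q))

def poly_scale(c, p):
    """Scalar multiplication, coefficient by coefficient."""
    return tuple(c * a for a in p)

def poly_eval(p, x):
    """Evaluate the polynomial at x via Horner's scheme."""
    value = 0
    for coeff in p:
        value = value * x + coeff
    return value
```

The tests reproduce the text's examples: (x^3 + 2x^2 + 4) + (-x^3 + x^2 + 4) = 3x^2 + 8, and 3(x^3 + 2x^2 + 4) = 3x^3 + 6x^2 + 12; note the first sum drops to degree 2, which is why "degree exactly n" would not be closed under addition.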
Let R^∞ denote the vector space of all infinite sequences. These sequences are added together by the rule

(a_1, a_2, ...) + (b_1, b_2, ...) = (a_1 + b_1, a_2 + b_2, ...)

and scalar multiplied by the rule

c(a_1, a_2, ...) = (ca_1, ca_2, ...).

This vector space is very much like the finite-dimensional vector spaces R^2, R^3, ..., except that these sequences do not terminate.

- Matrices as vectors. Given any integers m, n >= 1, we let M_{m x n}(R) be the space of all m x n matrices (i.e., m rows and n columns) with real entries; thus for instance M_{2x3}(R) contains such vectors as

[1 2 3]    [ 0 -1 -2]
[4 5 6],   [-3 -4 -5].

Two matrices are equal if and only if all of their individual components match up; rearranging the entries of a matrix will produce a different matrix. Matrix addition and scalar multiplication are defined similarly to vectors:

[1 2 3]   [ 0 -1 -2]   [1 1 1]
[4 5 6] + [-3 -4 -5] = [1 1 1];

   [1 2 3]   [10 20 30]
10 [4 5 6] = [40 50 60].

Matrices are useful for many things, notably for solving linear equations and for encoding linear transformations; more on these later in the course.

- As you can see, there are infinitely many examples of vector spaces, some of which look very different from the familiar examples of R^2 and R^3. Nevertheless, much of the theory we do here will cover all of these examples simultaneously. When we depict these vector spaces on the blackboard, we will draw them as if they were R^2 or R^3, but they are often much larger, and each point we draw in the vector space (which represents a vector) could in reality stand for a very complicated object, such as a polynomial, matrix, or function. So some of the pictures we draw should be interpreted more as analogies or metaphors than as a literal depiction of the situation.

* * * * *

Non-vector spaces

- Now for some examples of things which are not vector spaces.

- Latitude and longitude. The location of any point on the earth can be described by two numbers, e.g., Los Angeles is (34° N, 118° W). This may look a lot like a two-dimensional vector in R^2, but the space of all latitude-longitude pairs is not a vector space, because there is no reasonable way of adding or scalar multiplying such pairs. For instance, how could you multiply Los Angeles by 10? (340° N, 1180° W) does not make sense.

- Unit vectors. In R^3, a unit vector is any vector with unit length; for instance, (0, 0, 1), (0, -1, 0), and (3/5, 0, 4/5) are all unit vectors. However, the space of all unit vectors (sometimes denoted S^2, for "two-dimensional sphere") is not a vector space, as it is not closed under addition (or under scalar multiplication).

- The positive real axis. The space R^+ of positive real numbers is closed under addition, and obeys most of the rules of vector spaces, but is not a vector space, because one cannot multiply by negative scalars. (Also, it does not contain a zero vector.)

- Monomials. The space {1, x, x^2, x^3, ...} of monomials does not form a vector space: it is not closed under addition or scalar multiplication.

* * * * *

Vector arithmetic

- The vector space axioms (I)-(VIII) can be used to deduce all the other familiar laws of vector arithmetic. For instance, we have the

- Vector cancellation law. If u, v, w are vectors such that u + v = u + w, then v = w.

- Proof. Since u is a vector, we have an additive inverse -u such that (-u) + u = 0, by axiom (IV). Now we add -u to both sides of u + v = u + w:

(-u) + (u + v) = (-u) + (u + w).

Now use axiom (II):

((-u) + u) + v = ((-u) + u) + w,

then axiom (IV):

0 + v = 0 + w,

then axiom (III):

v = w.  □

- As you can see, these algebraic manipulations are rather trivial. After the first week we usually won't do these computations in such painful detail.

- Some other simple algebraic facts, which you can amuse yourself with by deriving them from the axioms:

0v = 0;  (-1)v = -v;  -(v + w) = (-v) + (-w);  a0 = 0;  (-a)v = a(-v) = -(av).

* * * * *

Vector subspaces

- Many vector spaces are subspaces of another. A vector space W is a subspace of a vector space V if W ⊆ V (i.e., every vector in W is also a vector in V), and the laws of vector addition and scalar multiplication are consistent (i.e., if v_1 and v_2 are in W, and hence in V, the rule that W gives for adding v_1 and v_2 gives the same answer as the rule that V gives for adding v_1 and v_2). For instance, the space P_2(R), the vector space of polynomials
of degree at most 2, is a subspace of P_3(R). Both are subspaces of P(R), the vector space of polynomials of arbitrary degree. C([0, 1]), the space of continuous functions on [0, 1], is a subspace of F([0, 1], R). And so forth.

- Technically, R^2 is not a subspace of R^3, because a two-dimensional vector is not a three-dimensional vector. However, R^3 does contain subspaces which are almost identical to R^2. More on this later.

- If V is a vector space, and W is a subset of V (i.e., W ⊆ V), then of course we can add and scalar multiply vectors in W, since they are automatically vectors in V. On the other hand, W is not necessarily a subspace, because it may not be a vector space. For instance, the set S^2 of unit vectors in R^3 is a subset of R^3, but is not a subspace. However, it is easy to check when a subset is a subspace:

- Lemma. Let V be a vector space, and let W be a subset of V. Then W is a subspace of V if and only if the following two properties are satisfied:

(W is closed under addition) If w_1 and w_2 are in W, then w_1 + w_2 is also in W.

(W is closed under scalar multiplication) If w is in W and c is a scalar, then cw is also in W.

- Proof. First suppose that W is a subspace of V. Then W will be closed under addition and multiplication, directly from the definition of vector space. This proves the "only if" part.

Now we prove the harder "if" part. In other words, we assume that W is a subset of V which is closed under addition and scalar multiplication, and we have to prove that W is a vector space; in other words, we have to verify the axioms (I)-(VIII). Most of these axioms follow immediately, because W is a subset of V and V already obeys the axioms (I)-(VIII). For instance, since vectors v_1, v_2 in V obey the commutativity property v_1 + v_2 = v_2 + v_1, it automatically follows that vectors in W also obey the property w_1 + w_2 = w_2 + w_1, since all vectors in W are also vectors in V. This reasoning easily gives us axioms (I), (II), (V), (VI), (VII), (VIII). There is a potential problem with (III), though, because the zero vector 0 of V might not lie in W. Similarly with (IV): there is a potential problem that if w lies in W, then -w might not lie in W. But both problems cannot occur, because 0 = 0w and -w = (-1)w (exercise: prove this from the axioms!), and W is closed under scalar multiplication. □

- This Lemma makes it quite easy to generate a large number of vector spaces, simply by taking a big vector space and passing to a subset which is closed under addition and scalar multiplication. Some examples:

- Horizontal vectors. Recall that R^3 is the vector space of all vectors (x, y, z) with x, y, z real. Let V be the subset of R^3 consisting of all vectors with zero z co-ordinate, i.e., V = {(x, y, 0) : x, y ∈ R}. This is a subset of R^3, but moreover it is also a subspace of R^3. To see this, we use the Lemma: it suffices to show that V is closed under vector addition and scalar multiplication. Let's check the vector addition. If we have two vectors in V, say (x_1, y_1, 0) and (x_2, y_2, 0), we need to verify that the sum of these two vectors is still in V. But the sum is just (x_1 + x_2, y_1 + y_2, 0), and this is in V because the z co-ordinate is zero. Thus V is closed under vector addition. A similar argument shows that V is closed under scalar multiplication, and so V is indeed a subspace of R^3. (Indeed, V is very similar to, though technically not the same thing as, R^2.) Note that if we considered instead the space of all vectors with z co-ordinate 1, i.e., {(x, y, 1) : x, y ∈ R}, then this would be a subset but not a subspace, because it is not closed under vector addition (or under scalar multiplication, for that matter).

- Another example of a subspace of R^3 is the plane {(x, y, z) ∈ R^3 : x + 2y + 3z = 0}. A third example of a subspace of R^3 is the line {(t, 2t, 3t) : t ∈ R}. (Exercise: verify that these are indeed subspaces.) Notice how subspaces tend to be very "flat" objects which go through the origin; this is consistent with them being closed under vector addition and scalar multiplication. In R^3, the only subspaces are lines through the origin, planes through the origin, the whole space R^3, and the zero vector space {0}. In R^2, the only subspaces are lines through the origin, the whole space R^2, and the zero vector space {0}.
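The Lemma's two closure conditions can be illustrated computationally (a sketch, not part of the notes; a finite check like this only illustrates closure on sample points, while the algebraic argument in the text is the actual proof):

```python
# Sketch (illustration, not from the notes): the Lemma's two closure
# conditions for the plane W = {(x, y, z) : x + 2y + 3z = 0}.

def in_plane(v):
    """Membership test for W."""
    x, y, z = v
    return x + 2 * y + 3 * z == 0

def add(v, w):
    return tuple(a + b for a, b in zip(v, w))

def scale(c, v):
    return tuple(c * a for a in v)

samples = [(3, 0, -1), (1, 1, -1), (-2, 1, 0)]   # all satisfy x + 2y + 3z = 0
assert all(in_plane(v) for v in samples)

# Closed under addition and under scalar multiplication on these samples:
assert all(in_plane(add(v, w)) for v in samples for w in samples)
assert all(in_plane(scale(c, v)) for c in (-2, 0, 5) for v in samples)

# By contrast, {(x, y, 1)} is not closed: (1, 0, 1) + (0, 1, 1) = (1, 1, 2).
print("closure checks passed")
```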
This is another clue as to why this subject is called linear algebra.

- Even polynomials. Recall that P(R) is the vector space of all polynomials. Call a polynomial f even if f(x) = f(-x); for instance, f(x) = x^4 + 2x^2 + 3 is even, but f(x) = x^3 + 1 is not. Let P_even(R) denote the set of all even polynomials; thus P_even(R) is a subset of P(R). Now we show that P_even(R) is not just a subset, it is a subspace of P(R). Again, it suffices to show that P_even(R) is closed under vector addition and scalar multiplication. Let's show it's closed under vector addition, i.e., if f and g are even polynomials, we have to show that f + g is also even. In other words, we have to show that (f + g)(-x) = (f + g)(x). But this is clear, since f(-x) = f(x) and g(-x) = g(x). A similar argument shows why even polynomials are closed under scalar multiplication.

- Diagonal matrices. Let n >= 1 be an integer. Recall that M_{n x n}(R) is the vector space of n x n real matrices. Call a matrix diagonal if all the entries away from the main diagonal (from top left to bottom right) are zero; thus for instance

[1 0 0]
[0 2 0]
[0 0 3]

is a diagonal matrix. Let D_n(R) denote the space of all diagonal n x n matrices. This is a subset of M_{n x n}(R), and is also a subspace, because the sum of any two diagonal matrices is again a diagonal matrix, and the scalar product of a diagonal matrix and a scalar is still a diagonal matrix. The notion of a diagonal matrix will become very useful much later in the course.

- Trace zero matrices. Let n >= 1 be an integer. If A is an n x n matrix, we define the trace of that matrix, denoted tr(A), to be the sum of all the entries on the diagonal. For instance, if

    [1 2 3]
A = [4 5 6]
    [7 8 9]

then tr(A) = 1 + 5 + 9 = 15. Let M^0_{n x n}(R) denote the set of all n x n matrices whose trace is zero:

M^0_{n x n}(R) = {A ∈ M_{n x n}(R) : tr(A) = 0}.

One can easily check that this space is a subspace of M_{n x n}(R). We will return to traces much later in this course.

- Technically speaking, every vector space V is considered a subspace of itself (since V is already closed under addition and scalar multiplication). Also, the zero vector space {0} is a subspace of every vector space, for a similar reason. But these are rather uninteresting examples of subspaces. We sometimes use the term proper subspace of V to denote a subspace W of V which is not the whole space V or the zero vector space {0}, but instead is something in between.

- The intersection of two subspaces is again a subspace (why?). For instance, since the diagonal matrices D_n(R) and the trace zero matrices M^0_{n x n}(R) are both subspaces of M_{n x n}(R), their intersection D_n(R) ∩ M^0_{n x n}(R) is also a subspace of M_{n x n}(R). On the other hand, the union of two subspaces is usually not a subspace. For instance, the x-axis {(x, 0) : x ∈ R} and the y-axis {(0, y) : y ∈ R} are both subspaces of R^2, but their union {(x, 0) : x ∈ R} ∪ {(0, y) : y ∈ R} is not (why?). See Assignment 1 for more details.

- In some texts one uses the notation W <= V to denote the statement "W is a subspace of V". I'll avoid this, as it may be a little confusing at first. However, the notation is suggestive. For instance, it is true that if U <= W and W <= V, then U <= V; i.e., if U is a subspace of W, and W is a subspace of V, then U is a subspace of V. (Why?)

* * * * *

Linear combinations

- Let's look at the standard vector space R^3 and try to build some subspaces of this space. To get started, let's pick a random vector in R^3, say v = (1, 2, 3), and ask how to make a subspace V of R^3 which would contain this vector (1, 2, 3). Of course, this is easy to accomplish by setting V equal to all of R^3; this would certainly contain our single vector v, but that is overkill. Let's try to find a smaller subspace of R^3 which contains v.

- We could start by trying to make V just consist of the single point (1, 2, 3): V = {(1, 2, 3)}. But this doesn't work, because this space is not a vector space: it is not closed under scalar multiplication. For instance, 10(1, 2, 3) = (10, 20, 30) is not in the space. To make V a vector space, we cannot just put (1, 2, 3) into V; we must also put in all the scalar multiples of (1, 2, 3): (2, 4, 6), (3, 6, 9), (-1, -2, -3), (0, 0, 0), etc. In other words, V ⊇ {a(1, 2, 3) : a ∈ R}. Conversely, the space {a(1, 2, 3) : a ∈ R} is indeed a subspace of R^3 which contains (1, 2, 3). (Exercise!) This
space is the one-dimensional space which consists of the line going through the origin and (1, 2, 3).

- To summarize what we've seen so far: if one wants to find a subspace V which contains a specified vector v, then it is not enough to contain v; one must also contain the vectors av for all scalars a. As we shall see later, the set {av : a ∈ R} will be called the span of v, and is denoted span({v}).

- Now let's suppose we have two vectors, v = (1, 2, 3) and w = (0, 0, 1), and we want to construct a vector space V in R^3 which contains both v and w. Again, setting V equal to all of R^3 will work, but let's try to get away with as small a space V as we can. We know that, at a bare minimum, V has to contain not just v and w, but also the scalar multiples av and bw of v and w, where a and b are scalars. But V must also be closed under vector addition, so it must also contain vectors such as av + bw. For instance, V must contain such vectors as

3v + 5w = 3(1, 2, 3) + 5(0, 0, 1) = (3, 6, 9) + (0, 0, 5) = (3, 6, 14).

We call a vector of the form av + bw a linear combination of v and w; thus (3, 6, 14) is a linear combination of (1, 2, 3) and (0, 0, 1). The space {av + bw : a, b ∈ R} of all linear combinations of v and w is called the span of v and w, and is denoted span({v, w}). It is also a subspace of R^3; it turns out to be the plane through the origin that contains both v and w.

- More generally, we define the notions of linear combination and span as follows.

- Definition. Let S be a collection of vectors in a vector space V, either finite or infinite. A linear combination of S is defined to be any vector in V of the form

a_1 v_1 + a_2 v_2 + ... + a_n v_n,

where a_1, ..., a_n are scalars (possibly zero or negative) and v_1, ..., v_n are some elements in S. The span of S, denoted span(S), is defined to be the space of all linear combinations of S:

span(S) = {a_1 v_1 + a_2 v_2 + ... + a_n v_n : a_1, ..., a_n ∈ R; v_1, ..., v_n ∈ S}.

- Usually we deal with the case when the set S is just a finite collection S = {v_1, ..., v_n} of vectors. In that case the span is just

span({v_1, ..., v_n}) = {a_1 v_1 + a_2 v_2 + ... + a_n v_n : a_1, ..., a_n ∈ R}.

(Why?)

- Occasionally we will need to deal with the case when S is empty. In this case we set the span of the empty set to just be {0}, the zero vector space. Thus 0 is the only vector which is a linear combination of an empty set of vectors. This is part of a larger mathematical convention, which states that any summation over an empty set should be zero, and every product over an empty set should be 1.

- Here are some basic properties of span.

- Theorem. Let S be a subset of a vector space V. Then span(S) is a subspace of V which contains S as a subset. Moreover, any subspace of V which contains S as a subset must in fact contain all of span(S).

- We shall prove this particular theorem in detail, to illustrate how to go about giving a proof of a theorem such as this. In later theorems we will skim over the proofs more quickly.

- Proof. If S is empty, then this theorem is trivial (in fact, it is rather vacuous: it says that the space {0} contains all the elements of an empty set of vectors, and that any subspace of V which contains the elements of an empty set of vectors must also contain {0}), so we shall assume that S is non-empty. We now break up the theorem into its various components.

(a) First we check that span(S) is a subspace of V. To do this, we need to check three things: that span(S) is contained in V; that it is closed under addition; and that it is closed under scalar multiplication.

(a1) To check that span(S) is contained in V, we need to take a typical element of the span, say a_1 v_1 + ... + a_n v_n, where a_1, ..., a_n are scalars and v_1, ..., v_n ∈ S, and verify that it is in V. But this is clear, since v_1, ..., v_n were already in V and V is closed under addition and scalar multiplication.

(a2) To check that the space span(S) is closed under vector addition, we take two typical elements of this space, say a_1 v_1 + ... + a_n v_n and b_1 v_1 + ... + b_n v_n (where the a_j and b_j are scalars and v_j ∈ S for j = 1, ..., n), and verify that their sum is also in span(S). But the sum is

(a_1 v_1 + ... + a_n v_n) + (b_1 v_1 + ... + b_n v_n),

which can be rearranged as

(a_1 + b_1) v_1 + ... + (a_n + b_n) v_n

(exercise: which of the vector space axioms (I)-(VIII) were needed in order to do this?). But since a_1 + b_1, ..., a_n + b_n are all scalars, we see that this is indeed in span(S).

(a3) To check that the
space span(S) is closed under scalar multiplication, we take a typical element of this space, say a_1 v_1 + ... + a_n v_n, and a typical scalar c. We want to verify that the scalar product

c(a_1 v_1 + ... + a_n v_n)

is also in span(S). But this can be rearranged as

(c a_1) v_1 + ... + (c a_n) v_n

(which axioms were used here?). Since c a_1, ..., c a_n are scalars, we see that this is in span(S), as desired.

(b) Now we check that span(S) contains S. It will suffice, of course, to show that span(S) contains v for each v ∈ S. But each v is clearly a linear combination of elements in S: in fact, v = 1v, and v ∈ S. Thus each v lies in span(S), as desired.

(c) Now we check that every subspace of V which contains S also contains span(S). In order to stop from always referring to "that subspace", let us use W to denote a typical subspace of V which contains S. Our goal is to show that W contains span(S). This is the same as saying that every element of span(S) lies in W. So let v = a_1 v_1 + ... + a_n v_n be a typical element of span(S), where the a_j are scalars and v_j ∈ S for j = 1, ..., n. Our goal is to show that v lies in W. Since v_1 lies in W, and W is closed under scalar multiplication, we see that a_1 v_1 lies in W. Similarly, a_2 v_2, ..., a_n v_n lie in W. But W is closed under vector addition; thus a_1 v_1 + ... + a_n v_n lies in W, as desired. This concludes the proof of the Theorem. □

- We remark that the span of a set of vectors does not depend on what order we list the set S; for instance, span({u, v, w}) is the same as span({w, v, u}). (Why is this?)

- The span of a set of vectors comes up often in applications, when one has a certain number of "moves" available in a system and one wants to see what options are available by combining these moves. We give an example from a simple economic model, as follows.

- Suppose you run a car company, which uses some basic raw materials (let's say money, labor, and metal, for the sake of argument) to produce some cars. At any given point in time, your resources might consist of x units of money, y units of labor (measured, say, in man-hours), z units of metal, and w units of cars, which we represent by a vector (x, y, z, w). Now you can make various decisions to alter your balance of resources. For instance, suppose you could purchase a unit of metal for two units of money; this amounts to adding (-2, 0, 1, 0) to your resource vector. You could do this repeatedly, thus adding a(-2, 0, 1, 0) to your resource vector for any positive a. If you could also sell a unit of metal for two units of money, then a could also be negative. (Of course, a can always be zero, simply by refusing to buy or sell any metal.) Similarly, one might be able to purchase a unit of labor for three units of money, thus adding a multiple of (-3, 1, 0, 0) to your resource vector. Finally, to produce a car requires 4 units of labor and 5 units of metal, thus adding a multiple of (0, -4, -5, 1) to your resource vector. This is of course an extremely oversimplified model, but it will serve to illustrate the point.

- Now we ask the question of how much money it will cost to create a car; in other words, for what price x can we add (-x, 0, 0, 1) to our resource vector? The answer is 22, because

(-22, 0, 0, 1) = 5(-2, 0, 1, 0) + 4(-3, 1, 0, 0) + 1(0, -4, -5, 1),

and so one can convert 22 units of money to one car, by buying 5 units of metal and 4 units of labor and producing one car. On the other hand, it is not possible to obtain a car for a smaller amount of money using the moves available (why?). In other words, (-22, 0, 0, 1) is the unique vector of the form (-x, 0, 0, 1) which lies in the span of the vectors (-2, 0, 1, 0), (-3, 1, 0, 0), and (0, -4, -5, 1).

- Of course, the above example was so simple that we could have worked out the price of a car directly. But in more complicated situations (where there aren't so many zeroes in the vector entries) one really has to start computing the span of various vectors.

- Actually, things get more complicated than this, because in real life there are often other constraints. For instance, one may be able to buy labor for money, but one cannot sell labor to get the money back; so the scalar in front of (-3, 1, 0, 0) can be positive but not negative. Or storage constraints might limit how much metal can be purchased at a time, etc. This passes us from linear algebra to the more complicated theory of linear programming, which is beyond the scope of this course. Also, due to such things as the law of diminishing returns and the law of economies of scale, in real life situations are not quite as linear as presented in this simple model. This leads us eventually to non-linear optimization and control theory, which is again beyond the scope of this course.
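The pricing computation in the car-company example can be checked numerically (a sketch for illustration; the names `combine` and `moves` are mine, not from the notes):

```python
# Sketch (illustration, not from the notes): verify that buying 5 units of
# metal and 4 units of labor, then producing one car, converts exactly 22
# units of money into one car, i.e. the resource vectors satisfy
# 5*(-2,0,1,0) + 4*(-3,1,0,0) + 1*(0,-4,-5,1) = (-22, 0, 0, 1).

def combine(coeffs, vectors):
    """Form the linear combination sum of c * v over the given moves."""
    n = len(vectors[0])
    total = [0] * n
    for c, v in zip(coeffs, vectors):
        for i in range(n):
            total[i] += c * v[i]
    return tuple(total)

moves = [(-2, 0, 1, 0),    # buy one unit of metal for 2 units of money
         (-3, 1, 0, 0),    # buy one unit of labor for 3 units of money
         (0, -4, -5, 1)]   # consume 4 labor + 5 metal, produce 1 car

print(combine((5, 4, 1), moves))   # (-22, 0, 0, 1)
```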
This leads us to ask the following question: how can we tell when one given vector v is in the span of some other vectors v_1, v_2, ..., v_n? For instance, is the vector (0, 1, 2) in the span of (1, 1, 1), (3, 2, 1), (1, 0, 1)? This is the same as asking for scalars a_1, a_2, a_3 such that

(0, 1, 2) = a_1(1, 1, 1) + a_2(3, 2, 1) + a_3(1, 0, 1).

We can multiply out the right-hand side as

(a_1 + 3a_2 + a_3, a_1 + 2a_2, a_1 + a_2 + a_3),

and so we are asking to find a_1, a_2, a_3 that solve the equations

a_1 + 3a_2 + a_3 = 0
a_1 + 2a_2       = 1
a_1 +  a_2 + a_3 = 2.

This is a linear system of equations ("system" because it consists of more than one equation, and "linear" because the variables a_1, a_2, a_3 only appear as linear factors, as opposed to quadratic factors such as a_1^2 or a_2 a_3, or more non-linear factors such as sin(a_1)). Such a system can also be written in matrix form:

[1 3 1] [a_1]   [0]
[1 2 0] [a_2] = [1]
[1 1 1] [a_3]   [2]

or schematically as

[1 3 1 | 0]
[1 2 0 | 1]
[1 1 1 | 2].

- To actually solve this system of equations and find a_1, a_2, a_3, one of the best methods is to use Gaussian elimination. The idea of Gaussian elimination is to try to make as many as possible of the numbers in the matrix equal to zero, as this will make the linear system easier to solve. There are three basic moves:

- (Swap two rows) Since it does not matter in which order we display the equations of a system, we are free to swap any two rows of the system. This is mostly a cosmetic move, useful in making the system look prettier.

- (Multiply a row by a constant) We can multiply (or divide) both sides of an equation by any constant (although we want to avoid multiplying a row by 0, as that reduces the equation to the trivial 0 = 0 and the operation cannot be reversed, since division by 0 is illegal). This is again a mostly cosmetic move, useful for setting one of the coefficients in the matrix to 1.

- (Subtract a multiple of one row from another) This is the main move. One can take any row, multiply it by any scalar, and subtract (or add) the resulting object from a second row; the original row remains unchanged. The main purpose of this is to set one or more of the matrix entries of the second row to zero.

- We illustrate these moves with the above system. (We could use the matrix form or the schematic form, but we shall stick with the linear system form for now.)

a_1 + 3a_2 + a_3 = 0
a_1 + 2a_2       = 1
a_1 +  a_2 + a_3 = 2.

We now start zeroing the a_1 entries, by subtracting the first row from the second:

a_1 + 3a_2 + a_3 = 0
    -  a_2 - a_3 = 1
a_1 +  a_2 + a_3 = 2,

and also subtracting the first row from the third:

a_1 + 3a_2 + a_3 = 0
    -  a_2 - a_3 = 1
    - 2a_2       = 2.

The third row looks simplifiable, so we swap it up:

a_1 + 3a_2 + a_3 = 0
    - 2a_2       = 2
    -  a_2 - a_3 = 1,

and then divide the second row by -2:

a_1 + 3a_2 + a_3 = 0
         a_2     = -1
    -  a_2 - a_3 = 1.

Then we can zero the a_2 entries, by subtracting 3 copies of the second row from the first, and adding one copy of the second row to the third:

a_1 + a_3 = 3
a_2       = -1
    - a_3 = 0.

If we then multiply the third row by -1 and subtract it from the first, we obtain

a_1 = 3
a_2 = -1
a_3 = 0,

and so we have found the solution, namely a_1 = 3, a_2 = -1, a_3 = 0.

- Getting back to our original problem, we have indeed found that (0, 1, 2) is in the span of (1, 1, 1), (3, 2, 1), (1, 0, 1):

(0, 1, 2) = 3(1, 1, 1) - 1(3, 2, 1) + 0(1, 0, 1).

- In the above case we found that there was only one solution for a_1, a_2, a_3: they were exactly determined by the linear system.
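The elimination procedure just carried out by hand can be sketched in code (an illustration, not part of the notes; exact rational arithmetic via `fractions.Fraction` avoids floating-point round-off):

```python
# Sketch (illustration, not from the notes): Gaussian elimination with exact
# arithmetic, applied to the augmented system solved in the text.
from fractions import Fraction

def solve(aug):
    """Solve a square system with a unique solution, given as [A | b]."""
    m = [[Fraction(x) for x in row] for row in aug]
    n = len(m)
    for col in range(n):
        # Swap up a row with a nonzero entry in this column.
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # Divide the pivot row so the pivot entry becomes 1.
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        # Subtract multiples of the pivot row from the other rows.
        for r in range(n):
            if r != col and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [x - factor * y for x, y in zip(m[r], m[col])]
    return [row[-1] for row in m]

# a1 + 3a2 + a3 = 0,  a1 + 2a2 = 1,  a1 + a2 + a3 = 2:
print(solve([[1, 3, 1, 0],
             [1, 2, 0, 1],
             [1, 1, 1, 2]]))   # [Fraction(3, 1), Fraction(-1, 1), Fraction(0, 1)]
```

The three row moves in the text (swap, rescale, subtract a multiple) appear as the three commented steps inside the loop.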
- Sometimes there can be more than one solution to a linear system, in which case we say that the system is under-determined: there are not enough equations to pin down all the variables exactly. This usually happens when the number of unknowns exceeds the number of equations. For instance, suppose we wanted to show that (0, 1, 2) is in the span of the four vectors (1, 1, 1), (3, 2, 1), (1, 0, 1), (0, 0, 1):

(0, 1, 2) = a_1(1, 1, 1) + a_2(3, 2, 1) + a_3(1, 0, 1) + a_4(0, 0, 1).

This is the system

a_1 + 3a_2 + a_3 = 0
a_1 + 2a_2 = 1
a_1 + a_2 + a_3 + a_4 = 2.

Now we do Gaussian elimination again. Subtracting the first row from the second and third:

a_1 + 3a_2 + a_3 = 0
- a_2 - a_3 = 1
- 2a_2 + a_4 = 2.

Multiplying the second row by -1, then eliminating a_2 from the first and third rows:

a_1 - 2a_3 = 3
a_2 + a_3 = -1
2a_3 + a_4 = 0.

- At this stage the system is in reduced normal form, which means that, starting from the bottom row and moving upwards, each equation introduces at least one new variable (ignoring any rows which have collapsed to something trivial like 0 = 0). Once one is in reduced normal form, there isn't much more simplification one can do. In this case there is no unique solution: one can set a_4 to be arbitrary. The third equation then allows us to write a_3 in terms of a_4:

a_3 = -a_4/2,

while the second equation then allows us to write a_2 in terms of a_3, and thus of a_4:

a_2 = -1 - a_3 = -1 + a_4/2.

Similarly, we can write a_1 in terms of a_4:

a_1 = 3 + 2a_3 = 3 - a_4.

Thus the general way to write (0, 1, 2) as a linear combination of (1, 1, 1), (3, 2, 1), (1, 0, 1), (0, 0, 1) is

(0, 1, 2) = (3 - a_4)(1, 1, 1) + (-1 + a_4/2)(3, 2, 1) + (-a_4/2)(1, 0, 1) + a_4(0, 0, 1);

for instance, setting a_4 = 4, we have

(0, 1, 2) = -1(1, 1, 1) + 1(3, 2, 1) - 2(1, 0, 1) + 4(0, 0, 1),

while if we set a_4 = 0, then we have

(0, 1, 2) = 3(1, 1, 1) - 1(3, 2, 1) + 0(1, 0, 1) + 0(0, 0, 1),

as before. Thus not only is (0, 1, 2) in the span of (1, 1, 1), (3, 2, 1), (1, 0, 1), and (0, 0, 1), it can be written as a linear combination of such vectors in many ways. This is because some of the vectors in this set are redundant: as we already saw, we only needed the first three vectors (1, 1, 1), (3, 2, 1), and (1, 0, 1) to generate (0, 1, 2); the fourth vector (0, 0, 1) was not necessary. As we shall see, this is because the four vectors (1, 1, 1), (3, 2, 1), (1, 0, 1), and (0, 0, 1) are linearly dependent. More on this later.

- Of course, sometimes a vector will not be in the span of other vectors at all. For instance, (0, 1, 2) is not in the span of (3, 2, 1) and (1, 0, 1). If one were to try to solve the system

(0, 1, 2) = a_1(3, 2, 1) + a_2(1, 0, 1),

one would be solving the system

3a_1 + a_2 = 0
2a_1 = 1
a_1 + a_2 = 2.

If one swapped the first and second rows, then divided the first by two, one obtains

a_1 = 1/2
3a_1 + a_2 = 0
a_1 + a_2 = 2.

Now zeroing the a_1 coefficient in the second and third rows gives

a_1 = 1/2
a_2 = -3/2
a_2 = 3/2.

Subtracting the second from the third, we get an absurd result:

a_1 = 1/2
a_2 = -3/2
0 = 3.

Thus there is no solution, and (0, 1, 2) is not in the span of (3, 2, 1), (1, 0, 1).

* * * * *

Spanning sets

- Definition. A set S is said to span a vector space V if span(S) = V, i.e., every vector in V is generated as a linear combination of elements of S. We call S a spanning set for V. (Sometimes one uses the verb "generate" instead of "span"; thus V is generated by S, and S is a generating set for V.)

- A model example of a spanning set is the set {(1, 0, 0), (0, 1, 0), (0, 0, 1)} in R^3: every vector in R^3 can clearly be written as a linear combination of these three vectors, e.g.,

(3, 7, 13) = 3(1, 0, 0) + 7(0, 1, 0) + 13(0, 0, 1).

There are of course similar examples for other vector spaces. For instance, the set {1, x, x^2, x^3} spans P_3(R). (Why?)

- One can always add additional vectors to a spanning set and still get a spanning set. For instance, the set {(1, 0, 0), (0, 1, 0), (0, 0, 1), (9, 14, 23), (15, 24, 99)} is also a spanning set for R^3; for instance,

(3, 7, 13) = 3(1, 0, 0) + 7(0, 1, 0) + 13(0, 0, 1) + 0(9, 14, 23) + 0(15, 24, 99).

Of course, the last two vectors are not playing any significant role here, and are just "along for the ride". A more extreme example: every vector space V is a spanning set for itself, since span(V) = V.

- On the other hand, removing elements from a spanning set can cause it to stop spanning. For instance, the two-element set {(1, 0, 0), (0, 1, 0)} does not span R^3, because there is no way to write (3, 7, 13) (for instance) as a linear combination of (1, 0, 0) and (0, 1, 0).

- Spanning sets are useful because they allow one to describe all the vectors in a space V in terms of a much smaller set S. For instance, the set S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} only consists of three vectors, whereas the space R^3 which S spans consists of infinitely many vectors. Thus, in principle, in order to understand the infinitely many vectors of R^3, one only needs to understand the three vectors in S (and to understand what linear combinations are).

- However, as we see from the above examples, spanning sets can contain "junk" vectors which are not actually needed to span the set. Such junk occurs when the set is linearly dependent. We would like to now remove such junk from the spanning sets and create a "minimal" spanning set: a set whose elements are all linearly independent. Such a set is known as a basis. In the rest of this series
of lecture notes, we discuss these related concepts of linear dependence, linear independence, and being a basis.

* * * * *

Linear dependence and independence

- Consider the following three vectors in R^3:

v_1 = (1, 2, 3),  v_2 = (1, 1, 1),  v_3 = (3, 5, 7).

As we now know, the span span({v_1, v_2, v_3}) of this set is just the set of all linear combinations of v_1, v_2, v_3:

span({v_1, v_2, v_3}) = {a_1 v_1 + a_2 v_2 + a_3 v_3 : a_1, a_2, a_3 ∈ R}.

Thus, for instance, 3v_1 + 4v_2 + v_3 = (10, 15, 20) lies in the span. However, the vector v_3 = (3, 5, 7) is redundant, because it can be written in terms of the other two:

(3, 5, 7) = 2(1, 2, 3) + (1, 1, 1),  i.e.,  v_3 = 2v_1 + v_2,

or, more symmetrically,

2v_1 + v_2 - v_3 = 0.

Thus any linear combination of v_1, v_2, v_3 is in fact just a linear combination of v_1 and v_2:

a_1 v_1 + a_2 v_2 + a_3 v_3 = a_1 v_1 + a_2 v_2 + a_3(2v_1 + v_2) = (a_1 + 2a_3) v_1 + (a_2 + a_3) v_2.

Because of this redundancy, we say that the vectors v_1, v_2, v_3 are linearly dependent.

- More generally, we say that a collection S of vectors in a vector space V is linearly dependent if we can find distinct elements v_1, ..., v_n ∈ S, and scalars a_1, ..., a_n, not all equal to zero, such that

a_1 v_1 + a_2 v_2 + ... + a_n v_n = 0.

Of course, 0 can always be written as a linear combination of v_1, ..., v_n in a trivial way: 0 = 0v_1 + ... + 0v_n. Linear dependence means that this is not the only way to write 0 as a linear combination: there exists at least one non-trivial way to do so. (We need the condition that the v_1, ..., v_n are distinct to avoid silly things such as 2v_1 - 2v_1 = 0.)

- In the case where S is a finite set, S = {v_1, ..., v_n}, then S is linearly dependent if and only if we can find scalars a_1, ..., a_n, not all zero, such that

a_1 v_1 + ... + a_n v_n = 0.

(Why is this the same as the previous definition? It's a little subtle.)

- If a collection of vectors S is not linearly dependent, then it is said to be linearly independent. An example is the set {(1, 2, 3), (0, 1, 2)}: it is not possible to find a_1, a_2, not both zero, for which

a_1(1, 2, 3) + a_2(0, 1, 2) = (0, 0, 0),

because this would imply

a_1 = 0,  2a_1 + a_2 = 0,  3a_1 + 2a_2 = 0,

which can easily be seen to be true only if a_1 and a_2 are both 0. Thus there is no non-trivial way to write the zero vector 0 = (0, 0, 0) as a linear combination of (1, 2, 3) and (0, 1, 2). By convention, an empty set of vectors (the case n = 0) is always linearly independent. (Why is this consistent with the definition?)
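The dependence relation 2v_1 + v_2 - v_3 = 0 above, and the independence of (1, 2, 3) and (0, 1, 2), can be checked numerically (a sketch for illustration; the brute-force scan over small integer coefficients only illustrates independence, while the short argument in the text is the actual proof):

```python
# Sketch (illustration, not from the notes): checking the dependence relation
# 2*v1 + v2 - v3 = 0 for v1 = (1,2,3), v2 = (1,1,1), v3 = (3,5,7).

def combine(coeffs, vectors):
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vectors))
                 for i in range(len(vectors[0])))

v1, v2, v3 = (1, 2, 3), (1, 1, 1), (3, 5, 7)
print(combine((2, 1, -1), (v1, v2, v3)))   # (0, 0, 0): a non-trivial dependence

# For (1,2,3) and (0,1,2), no non-trivial combination with small integer
# coefficients vanishes (consistent with their linear independence):
found = [(a1, a2) for a1 in range(-3, 4) for a2 in range(-3, 4)
         if (a1, a2) != (0, 0)
         and combine((a1, a2), ((1, 2, 3), (0, 1, 2))) == (0, 0, 0)]
print(found)   # []
```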
- As indicated above, if a set is linearly dependent, then we can remove one of the elements from it without affecting the span.

- Theorem. Let S be a subset of a vector space V. If S is linearly dependent, then there exists an element v of S such that the smaller set S - {v} has the same span as S: span(S - {v}) = span(S). Conversely, if S is linearly independent, then every proper subset S' ⊊ S of S will span a strictly smaller set than S: span(S') ⊊ span(S).

- Proof. Let's prove the first claim: if S is a linearly dependent subset of V, then we can find v ∈ S such that span(S - {v}) = span(S). Since S is linearly dependent, then by definition there exist distinct v_1, ..., v_n ∈ S, and scalars a_1, ..., a_n, not all zero, such that

a_1 v_1 + ... + a_n v_n = 0.

We know that at least one of the a_j is non-zero; without loss of generality we may assume that a_1 is non-zero (since otherwise we can just shuffle the v_j to bring a non-zero coefficient out to the front). We can then solve for v_1 by dividing by a_1:

v_1 = -(a_2/a_1) v_2 - ... - (a_n/a_1) v_n.

Thus any expression involving v_1 can instead be written to involve v_2, ..., v_n instead. In particular, any linear combination of v_1 and other vectors in S (not equal to v_1) can be rewritten instead as a linear combination of v_2, ..., v_n and other vectors in S not equal to v_1. Thus every linear combination of vectors in S can in fact be written as a linear combination of vectors in S - {v_1}. On the other hand, every linear combination of S - {v_1} is trivially also a linear combination of S. Thus we have span(S) = span(S - {v_1}), as desired.

Now we prove the other direction. Suppose that S ⊆ V is linearly independent, and let S' ⊊ S be a proper subset of S. Since every linear combination of S' is trivially a linear combination of S, we have span(S') ⊆ span(S). So now we just need to argue why span(S') ≠ span(S). Let v be an element of S which is not contained in S'; such an element must exist because S' is a proper subset of S. Since v ∈ S, we have v ∈ span(S). Now suppose that v were also in span(S'). This would mean that there existed
vectors v_1, ..., v_n ∈ S' (which in particular are distinct from v) such that

v = a_1 v_1 + a_2 v_2 + ... + a_n v_n,

or in other words,

(-1)v + a_1 v_1 + a_2 v_2 + ... + a_n v_n = 0.

But this is a non-trivial linear combination of vectors in S which sums to zero (it's non-trivial because of the -1 coefficient of v). This contradicts the assumption that S is linearly independent. Thus v cannot possibly be in span(S'). But this means that span(S') and span(S) are different, and we are done. □

* * * * *

Bases

- A basis of a vector space V is a set S which both spans V and is linearly independent. In other words, a basis consists of a bare minimum number of vectors needed to span all of V: remove one of them, and you fail to span V.

- Thus the set {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is a basis for R^3, because it both spans and is linearly independent. The set {(1, 0, 0), (0, 1, 0), (0, 0, 1), (9, 14, 23)} still spans R^3, but is not linearly independent, and so is not a basis. The set {(1, 0, 0), (0, 1, 0)} is linearly independent, but does not span all of R^3, so is not a basis. Finally, the set {(1, 0, 0), (2, 0, 0)} is neither linearly independent nor spanning, so is definitely not a basis.

- Similarly, the set {1, x, x^2, x^3} is a basis for P_3(R), while the set {1, x, 1 + x, x^2, x^3} is not (it still spans, but is linearly dependent). The set {1, x, x^2} is linearly independent, but doesn't span P_3(R).

- One can use a basis to represent a vector in a unique way as a collection of numbers:

- Lemma. Let {v_1, v_2, ..., v_n} be a basis for a vector space V. Then every vector v in V can be written uniquely in the form

v = a_1 v_1 + ... + a_n v_n

for some scalars a_1, ..., a_n.

- Proof. Because {v_1, ..., v_n} is a basis, it must span V, and so every vector v in V can be written in the form a_1 v_1 + ... + a_n v_n. It only remains to show why this representation is unique. Suppose for contradiction that a vector v had two different representations

v = a_1 v_1 + ... + a_n v_n,
v = b_1 v_1 + ... + b_n v_n,

where a_1, ..., a_n are one set of scalars and b_1, ..., b_n are a different set of scalars. Subtracting the two equations, we get

(a_1 - b_1) v_1 + ... + (a_n - b_n) v_n = 0.

But the v_1, ..., v_n are linearly independent, since they are a basis. Thus the only representation of 0 as a linear combination of
v₁, …, vₙ is the trivial representation, which means that the scalars a₁ − b₁, …, aₙ − bₙ must all equal zero, i.e. aᵢ = bᵢ for each i. That means that the two representations a₁v₁ + ⋯ + aₙvₙ and b₁v₁ + ⋯ + bₙvₙ must in fact be the same representation. Thus v cannot have two distinct representations, and so we have a unique representation, as desired. □

[What follows in the source is a scan of handwritten notes, only partially legible. The recoverable topics: roots of unity — for ω = e^{2πi/n} a primitive n-th root of 1, one has 1 + ω + ω² + ⋯ + ω^{n−1} = 0; eigenvalues of permutation matrices — for a permutation σ of {1, …, n}, the permutation matrix P_σ takes eᵢ to e_{σ(i)}, and if σ has cycle decomposition with cycle lengths ℓ₁, …, ℓₖ, then the characteristic equation of P_σ is (λ^{ℓ₁} − 1)⋯(λ^{ℓₖ} − 1) = 0, so every eigenvalue is a root of unity; and eigenvalues of symmetric matrices — for A = Aᵀ one has ⟨Av, w⟩ = ⟨v, Aw⟩, from which eigenvectors for different eigenvalues are perpendicular and every eigenvalue of a symmetric matrix is real.]
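The uniqueness lemma above is constructive in Rⁿ: writing the basis vectors as the columns of a matrix B, the coordinates of v are the solution of B·a = v, and invertibility of B is exactly the basis condition. A minimal numerical sketch (NumPy and the particular basis are my own illustration, not part of the notes):

```python
import numpy as np

# A candidate basis of R^3, written as the columns of B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Three vectors in R^3 form a basis iff this matrix is invertible,
# i.e. its determinant is nonzero.
assert abs(np.linalg.det(B)) > 1e-12

# Coordinates of v in this basis: solve B a = v.
v = np.array([2.0, 3.0, 4.0])
a = np.linalg.solve(B, v)

# Reconstruction: v = a1*b1 + a2*b2 + a3*b3; the solution is unique
# because B is invertible.  Back-substitution gives a = (3, -1, 4).
assert np.allclose(B @ a, v)
```

Invertibility of B is what makes both existence and uniqueness of the coordinates automatic, mirroring the lemma's two halves.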
[The scanned notes continue, partially legible. The recoverable topics: the spectral theorem for symmetric matrices — given a symmetric n×n matrix A, one can find an orthogonal matrix Q such that QᵀAQ = D is diagonal; equivalently, symmetric matrices can be diagonalized by an orthogonal matrix, A = QDQᵀ. A comment: ordering the eigenvalues |λ₁| ≥ |λ₂| ≥ ⋯ ≥ |λₙ| and computing ‖Av‖² = Σᵢ λᵢ²xᵢ² in an orthonormal eigenbasis, the largest ratio ‖Av‖/‖v‖ is attained at an eigenvector for λ₁ and the smallest at one for λₙ; this gives an alternative way to find eigenvectors of a symmetric matrix, by a maximize/minimize procedure on the unit sphere (maximize over v ⊥ the eigenvectors already found to get the next one), yielding a basis of eigenvectors. A more general condition that guarantees a matrix is diagonalizable: A is called normal if A*A = AA*, and A is normal iff A is diagonalizable by an orthonormal basis (this is called the spectral theorem). An application: at a critical point of a twice-differentiable function f, the Hessian H of second partial derivatives is a symmetric matrix, and a rigid change of coordinates diagonalizes it; if all eigenvalues of H are > 0 the point is a local minimum, if all are < 0 a local maximum, and if the signs are mixed it is a saddle, neither a local max nor a local min. Finally, the adjoint of a linear transformation L: V → W between two vector spaces with inner products: there is a unique linear transformation L*: W → V such that ⟨Lv, w⟩_W = ⟨v, L*w⟩_V for all v, w (proof: choose orthonormal bases for V and W; if A is the matrix of L, the matrix of L* is forced to be Āᵀ, the conjugate transpose). Example: on twice-differentiable functions vanishing at the endpoints, with ⟨f, g⟩ = ∫₀¹ f(x)g(x) dx, integration by parts shows that the adjoint of L = d/dx is L* = −d/dx; in the same setting, the operator (Lf)(x) = x·f(x) satisfies ∫₀¹ (x f) g dx = ∫₀¹ f (x g) dx, so L* = L. Theorem: for a linear transformation L: V → W, null(L*) = (im L)⊥.]
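The orthogonal diagonalization A = QDQᵀ asserted in these notes can be checked numerically. The sketch below uses NumPy's `eigh` routine for symmetric/Hermitian matrices (the library and the example matrix are my additions, not from the notes):

```python
import numpy as np

# A symmetric matrix, chosen arbitrarily for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is specialized to symmetric/Hermitian input: it returns real
# eigenvalues (ascending) and orthonormal eigenvectors as columns of Q.
eigvals, Q = np.linalg.eigh(A)
D = np.diag(eigvals)

# Q is orthogonal: Q^T Q = I.
assert np.allclose(Q.T @ Q, np.eye(3))

# The spectral decomposition A = Q D Q^T.
assert np.allclose(Q @ D @ Q.T, A)
```

Since Q is orthogonal, Qᵀ = Q⁻¹, so this is exactly the statement QᵀAQ = D from the notes.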
[The handwritten notes conclude with: the proof that null(L*) = (im L)⊥ — w ∈ null(L*) iff L*w = 0 iff ⟨v, L*w⟩ = 0 for all v ∈ V iff ⟨Lv, w⟩ = 0 for all v iff w ⊥ im(L); basic facts about adjoints, (L*)* = L and (LM)* = M*L*, so for any L both L*L and LL* are self-adjoint; and the definition — L is self-adjoint if L* = L, the analog for operators of a symmetric matrix, since with respect to an orthonormal basis the matrix of L* is the conjugate transpose of the matrix of L.]

… of the Supplementary Lectures

The suggested format is to have one section a week be devoted to a lecture on a topic that supplements the usual lectures. The attached syllabus gives a set of suggested topics. Assuming that 2 midterms are given, the course naturally falls into 3 parts.

The first 3 supplementary lectures should be devoted to helping the students learn very basic proof techniques. Thus, in presenting the basic material in Appendices C and D, the terms of basic logic should be explained, e.g. contrapositive, converse, implication, negation, etc. In the 3rd lecture, proof by induction should be covered. If the instructor gives the replacement theorem in the form given in the book, it provides a good advanced example of induction, and the TA could devote most of that lecture to going over its proof in complete detail.

After the first midterm, it will help the students to review matrix computations. Here the TA may refer to Chapters 3 and 4 and review row reduction to compute rank and inverses. It is advised that 3 × 3 examples be worked out. Also, determinant calculations could be reviewed. Then an in-depth application of diagonalization, for example the computation of higher powers of a matrix, can be presented. The TA, with the instructor's approval, should present a topic of interest, say from Sec. 5.3.

After the second midterm, more detailed examples could be discussed. A treatment of the inner product space H
defined on p. 332 and a review of properties of integrals could be given. To supplement Sec. 6.3, the TA could give a lecture on least squares, from p. 360. Another possible topic is minimal solutions to systems of linear equations, on p. 364. The last lecture before the final could be used as a review session.

Possible Syllabus for Supplementary Lectures by the Teaching Assistant

[A table of text references and topics appears here; its layout was lost in extraction. The legible topic entries: p. 552 on logical reasoning in proofs; illustration of basic techniques of proofs; the replacement theorem (Thm 1.10); determinants and powers of a matrix, e.g. from 5.3, or another example, e.g. linear recurrences; the space H on p. 332 and properties of integrals; least squares (p. 360) or minimal solutions (p. 364).]

Possible Syllabus for Supplementary Lectures by the Teaching Assistant

Lecture 1: Fields. This could include a proof that a² = 2b² has no nonzero solutions in integers, and consequently no solutions in rationals either; as a consequence, the real numbers of the form a + b√2 (a, b rational) form a field.

Lecture 2: Proof by induction. The examples of proof should be related to linear algebra. Thus, alongside the familiar example Σₖ₌₁ⁿ k² = n(n+1)(2n+1)/6: if A is an upper triangular matrix with diagonal entries d₁, …, dₙ, then (i) det(A − tI) = (d₁ − t)⋯(dₙ − t), and (ii) (A − d₁Iₙ)(A − d₂Iₙ)⋯(A − dₙIₙ) = 0 in M_{n×n}(F). Students come into 115A knowing enough about determinants to follow the proof of (i).

Lecture 3: One lecture on logic. The meaning of implication, of the converse and contrapositive of a statement, and the negation of statements. [Part of this entry is garbled in extraction.] For example: if A = … and B = …, find the matrix representative of T: M₂ₓ₂(F) → M₂ₓ₂(F) defined by T(X) = AXB relative to the standard basis for M₂ₓ₂(F).

Lecture 4: Review row reduction and computation of the rank of matrices.

Lecture 5: Review matrix inverses and the proof that for n × n matrices A and B, det(AB) = det(A) det(B). The fact will be familiar, but the proof will not be.

Lecture 6: Supplementary material in Section 5.2, consisting of pp. 272–275 and Exercise 15. Some instructors may wish to cover part of Section 5.3 instead.

Lecture 7: Discussion of
the matrix exponential.

Lecture 8: Orthogonal polynomials. This lecture might have the following content. (i) Given a positive weight function w(x) on [a, b], the Gram–Schmidt process yields a sequence Rₙ of orthogonal polynomials. The polynomials always satisfy a 3-term recurrence relation Rₙ₊₁ = (x − aₙ)Rₙ + bₙRₙ₋₁. (ii) Gram–Schmidt is extremely tedious as a method of orthogonalizing the sequence 1, x, x², x³, … on [−1, 1]. Defining

Pₙ(x) = (1 / (2ⁿ n!)) · dⁿ/dxⁿ (x² − 1)ⁿ,

we see that the Legendre polynomials are orthogonal on [−1, 1] by an inductive argument, reinforcing the importance of induction as a means of proof. (iii) Assuming the Lagrange interpolation polynomials have been discussed earlier, we let r₁ < r₂ < ⋯ < rₙ₊₁ be the roots of Pₙ₊₁. We assume that they are real and lie in [−1, 1]. Letting fᵢ(x) be the Lagrange interpolation polynomials for 1 ≤ i ≤ n + 1, let kᵢ = ∫₋₁¹ fᵢ(x) dx. Then for f a polynomial of degree at most n,

∫₋₁¹ f(x) dx = Σᵢ₌₁ⁿ⁺¹ kᵢ f(rᵢ),

and this quadrature method remains exact for polynomials of degree ≤ 2n + 1 (Gaussian quadrature).

Remark: The department has discussed the tendency of students to forget material from their lower-division courses immediately after the final exam. I am suggesting this content for Lecture 8 because it does integrate ideas from calculus and from linear algebra. If the TA or instructor prefers that less be covered in Lecture 8, (i) and (ii) could be included, (iii) omitted, and an additional problem from pp. 353–356 discussed.

Lecture 9: Review, or leeway.

Lecture 10: An instructive application of diagonalization is to rank-one perturbations. For example, the matrix

[ t + a²   ab       ac     ]
[ ab       t + b²   bc     ]
[ ac       bc       t + c² ]

Math 115A
H. B. Enderton
The Spectral Theorem

For the final topic in this course, we combine our work in Chapter 5 on diagonalization with our work in Chapter 6 on inner product spaces. We have seen that a linear operator T on a finite-dimensional vector space V is diagonalizable if and only if V has a basis of eigenvectors of T. Now suppose that V is actually an inner product space. Then we might hope to obtain an orthonormal basis of eigenvectors, so
that T relative to this orthonormal basis is represented by a diagonal matrix, or even a real diagonal matrix.

For example, in Problem Set VIII, a certain symmetric matrix A has the property that L_A is represented by a diagonal matrix relative to an orthonormal basis for R³. And a certain Hermitian matrix B over C has the property that L_B is represented by a real diagonal matrix relative to an orthonormal basis for C³. We want to investigate this phenomenon in general.

Certainly orthonormal bases are the nicest bases. Suppose that β is an orthonormal ordered basis ⟨b₁, b₂, …, bₙ⟩ for an n-dimensional inner product space V. Then coordinate vectors can be found by using the Fourier coefficients:

[v]_β = (⟨v, b₁⟩, …, ⟨v, bₙ⟩), so that v = ⟨v, b₁⟩b₁ + ⋯ + ⟨v, bₙ⟩bₙ.

Relative to β, a linear operator T is represented by the matrix A whose entries are given by the equation

A_{ij} = ⟨T(b_j), b_i⟩.

As a special case, the standard ordered basis ⟨e₁, …, eₙ⟩ for Cⁿ or Rⁿ is orthonormal, and A_{ij} = ⟨Ae_j, e_i⟩. And the coordinate map [·]_β relative to an orthonormal basis is an isometry, i.e. it preserves the inner product:

⟨u, v⟩ = ⟨[u]_β, [v]_β⟩,

where the inner product on the left is in V and the inner product on the right is in Cⁿ or Rⁿ.

The following fact gives us cause for optimism regarding symmetric matrices.

Lemma. Consider the space Rⁿ with the usual inner product, and assume that A is an n × n real symmetric matrix. Then any two eigenvectors for different eigenvalues are orthogonal.

Proof. We make use of the fact that for any real matrix B, symmetric or not, ⟨Bx, y⟩ = ⟨x, Bᵗy⟩. So for a symmetric matrix A, ⟨Ax, y⟩ = ⟨x, Ay⟩. Now suppose that x and y are eigenvectors for the distinct eigenvalues λ and μ, respectively. Then

λ⟨x, y⟩ = ⟨λx, y⟩ = ⟨Ax, y⟩ = ⟨x, Ay⟩ = ⟨x, μy⟩ = μ⟨x, y⟩.

Since λ ≠ μ, we must have ⟨x, y⟩ = 0, and hence x ⊥ y. □

Thus, for a real symmetric matrix, its various eigenspaces will all be orthogonal to each other. For each individual eigenspace E_λ, we can make an orthonormal basis using the Gram–Schmidt process. When we combine all these bases for all the different E_λ's, the
resulting set will still be orthonormal, by the lemma. On the other hand, if A is a real n × n matrix that is not symmetric, then, as we will see, there is no orthonormal basis for Rⁿ that diagonalizes A. So it is natural for us to focus our attention on symmetric matrices.

As mentioned above, for a symmetric matrix, ⟨Ax, y⟩ = ⟨x, Ay⟩ for x and y in Rⁿ. In fact, this property characterizes exactly the symmetric matrices: if the displayed equation is always true of A, then

A_{ij} = ⟨Ae_j, e_i⟩ = ⟨e_j, Ae_i⟩ = ⟨Ae_i, e_j⟩ = A_{ji}.

This is the clue as to which linear operators have the best chance of being represented in the way we seek: those operators T with the property that

⟨T(u), v⟩ = ⟨u, T(v)⟩

for all vectors u and v. Call such operators self-adjoint.

Digression: Section 6.3 defines the adjoint T* of T to be the unique operator such that ⟨T(u), v⟩ = ⟨u, T*(v)⟩ always holds. Then the self-adjoint operators can be defined by requiring that T* = T. This definition is equivalent to the definition given above.

In the complex inner product space Cⁿ, the usual inner product is defined by the equation ⟨x, y⟩ = y*x. A complex n × n matrix A is called Hermitian if A* = A. In particular, a real matrix is Hermitian if and only if it is symmetric. Note that in a Hermitian matrix, all the entries on the main diagonal must be real.

Recall that for any matrix A, we have ⟨Ax, y⟩ = ⟨x, A*y⟩. Thus for a Hermitian matrix A, ⟨Ax, y⟩ = ⟨x, Ay⟩ for x and y in Cⁿ. Thus multiplication L_A by a Hermitian n × n matrix A is a self-adjoint linear operator on Cⁿ, by the above. Multiplication by a real symmetric matrix is a self-adjoint linear operator on Rⁿ.

Theorem 0. Let T be a linear operator on an n-dimensional real or complex inner product space V, and let β be an orthonormal ordered basis for V. Then T is self-adjoint if and only if its representing matrix A relative to β is a symmetric matrix (in the real case) or a Hermitian matrix (in the complex case).

Proof. Suppose that β is an orthonormal ordered basis ⟨b₁, b₂, …, bₙ⟩, and let A = [T]_β. On the one hand, if T is self-adjoint, then

A_{ij} = ⟨T(b_j), b_i⟩
= ⟨b_j, T(b_i)⟩ = conj⟨T(b_i), b_j⟩ = conj(A_{ji}),

so that A is a Hermitian matrix (and, if real, is symmetric). On the other hand, if A is a Hermitian matrix, then

⟨T(u), v⟩ = ⟨[T(u)]_β, [v]_β⟩ = ⟨A[u]_β, [v]_β⟩ = ⟨[u]_β, A[v]_β⟩ = ⟨[u]_β, [T(v)]_β⟩ = ⟨u, T(v)⟩,

so that T is a self-adjoint operator. □

In particular, any real diagonal matrix D is automatically symmetric and Hermitian. So Theorem 0 tells us that if we hope to have T represented by D relative to an orthonormal basis, then T must be self-adjoint. In other words, at most the self-adjoint operators have the property we want. What is left is the other direction: to show that any self-adjoint operator is indeed represented by a real diagonal matrix relative to an orthonormal basis.

In Chapter 5, we saw that there were two potential barriers to diagonalization. One is that the characteristic polynomial might have non-real roots, in a real vector space. The other barrier is that the dimension of some eigenspace E_λ might fall short of the algebraic multiplicity of λ in the characteristic polynomial. Theorem 1 below will remove the first barrier. Finally, Theorem 2 will remove the second.

Our first objective is to prove that all eigenvalues of a self-adjoint linear operator are real numbers.

Theorem 1. (a) Any eigenvalue of a self-adjoint linear operator is real. (b) For a real symmetric matrix, any root of its characteristic polynomial is real. (c) For a Hermitian matrix, any root of its characteristic polynomial is real.

Proof. (a) Assume that T(v) = λv for nonzero v. Then on the one hand,

⟨T(v), v⟩ = ⟨λv, v⟩ = λ⟨v, v⟩,

and on the other hand,

⟨v, T(v)⟩ = ⟨v, λv⟩ = conj(λ)⟨v, v⟩.

For a self-adjoint operator T, these are equal: λ⟨v, v⟩ = conj(λ)⟨v, v⟩. Since v is nonzero, we conclude that λ = conj(λ), i.e. λ is real.

For part (b), suppose that A is an n × n real symmetric matrix. Let L_A be the self-adjoint linear operator defined by L_A(x) = Ax on the complex space Cⁿ. Then any root λ of the characteristic polynomial for A is an eigenvalue of L_A. By part (a), λ must be real. Part (c) is the same. □

We can now upgrade the earlier lemma: For a self-adjoint linear operator T, any two eigenvectors for different
eigenvalues are orthogonal. The proof is unchanged. Suppose that u and v are eigenvectors for the distinct eigenvalues λ and μ, respectively. Then μ is real, and

λ⟨u, v⟩ = ⟨λu, v⟩ = ⟨T(u), v⟩ = ⟨u, T(v)⟩ = ⟨u, μv⟩ = μ⟨u, v⟩.

Since λ ≠ μ, we must have ⟨u, v⟩ = 0, and hence u ⊥ v.

As a special case, where T is multiplication by a Hermitian matrix, we can say that any two eigenvectors of a Hermitian matrix for different eigenvalues must be orthogonal.

The main work of this material is done in the proof of the next theorem.

Theorem 2. Assume that T: V → V is a self-adjoint linear operator on a finite-dimensional inner product space V (real or complex). Then V has an orthonormal basis of eigenvectors for T.

Proof. Use induction on dim V.

Basis: The result is clear if dim V = 1. (Why? It is even clearer if dim V = 0.)

Inductive step: The characteristic polynomial of T has some root λ; by Theorem 1 we know that λ is real. Thus λ is an eigenvalue of T, even if V is a real inner product space; let u be a corresponding eigenvector. By normalizing, we may suppose that ‖u‖ = 1.

Let W = {u}⊥. Then W is a proper subspace of V, so dim W < dim V. In fact, it is not hard to see that if dim V = n, then dim W = n − 1. By the projection theorem, V = span{u} ⊕ W: any vector is the sum of a vector p in span{u} and a vector e in W. So to any basis for W, we need add only the one additional vector u to get a spanning set for all of V.

Claim: Whenever w ∈ W, then T(w) ∈ W. That is, W is a T-invariant subspace. The reason is that

⟨T(w), u⟩ = ⟨w, T(u)⟩ = ⟨w, λu⟩ = λ⟨w, u⟩ = 0

(λ being real), which verifies the claim.

The significance of the claim is that the restriction of T to W is a self-adjoint linear operator on W. So by the inductive hypothesis, W has an orthonormal basis b₁, …, bₙ₋₁ of eigenvectors for T. Adjoin the new basis vector bₙ = u. The result is still orthonormal. And now it is a basis of eigenvectors for V. □

Theorem 2 gives us the final result we seek.

Spectral theorem. Assume that T is a linear operator on a finite-dimensional inner product space V (real or complex). Then there exists an orthonormal basis of V relative to which T is represented by a real diagonal matrix if
and only if T is self-adjoint.

Proof. For the easy direction, suppose we have such a basis; let D be the real diagonal matrix. Then D is Hermitian, so by Theorem 0, T is self-adjoint. For the hard direction, suppose that T is self-adjoint. Then we apply Theorem 2 to obtain an orthonormal basis of eigenvectors; the representing matrix is real by Theorem 1, and diagonal. □
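The full statement — real eigenvalues (Theorem 1) and an orthonormal eigenbasis (Theorem 2) even in the complex Hermitian case — can be verified numerically. A small sketch (NumPy and the sample matrix are assumptions of mine, not from the handout):

```python
import numpy as np

# A Hermitian matrix: equal to its conjugate transpose.
H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(H, H.conj().T)

# eigh returns the eigenvalues of a Hermitian matrix as *real* numbers
# (Theorem 1) and an orthonormal basis of eigenvectors (Theorem 2).
eigvals, U = np.linalg.eigh(H)
assert np.isrealobj(eigvals)

# Columns of U are orthonormal: U* U = I.
assert np.allclose(U.conj().T @ U, np.eye(2))

# H = U D U*, with D a real diagonal matrix, as in the spectral theorem.
assert np.allclose(U @ np.diag(eigvals) @ U.conj().T, H)
```

For this particular H, the trace is 5 and the determinant is 6 − |1 − i|² = 4, so the two eigenvalues are 1 and 4 — real, as the theorem demands.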


Buy Material

Are you sure you want to buy this material for

25 Karma

Buy Material

BOOM! Enjoy Your Free Notes!

We've added these Notes to your profile, click here to view them now.


You're already Subscribed!

Looks like you've already subscribed to StudySoup, you won't need to purchase another subscription to get this material. To access this material simply click 'View Full Document'

Why people love StudySoup

Steve Martinelli UC Los Angeles

"There's no way I would have passed my Organic Chemistry class this semester without the notes and study guides I got from StudySoup."

Jennifer McGill UCSF Med School

"Selling my MCAT study guides and notes has been a great source of side revenue while I'm in school. Some months I'm making over $500! Plus, it makes me happy knowing that I'm helping future med students with their MCAT."

Bentley McCaw University of Florida

"I was shooting for a perfect 4.0 GPA this semester. Having StudySoup as a study aid was critical to helping me achieve my goal...and I nailed it!"


"Their 'Elite Notetakers' are making over $1,200/month in sales by creating high quality content that helps their classmates in a time of need."

Become an Elite Notetaker and start selling your notes online!

Refund Policy


All subscriptions to StudySoup are paid in full at the time of subscribing. To change your credit card information or to cancel your subscription, go to "Edit Settings". All credit card information will be available there. If you should decide to cancel your subscription, it will continue to be valid until the next payment period, as all payments for the current period were made in advance. For special circumstances, please email


StudySoup has more than 1 million course-specific study resources to help students study smarter. If you’re having trouble finding what you’re looking for, our customer support team can help you find what you need! Feel free to contact them here:

Recurring Subscriptions: If you have canceled your recurring subscription on the day of renewal and have not downloaded any documents, you may request a refund by submitting an email to

Satisfaction Guarantee: If you’re not satisfied with your subscription, you can contact us for further help. Contact must be made within 3 business days of your subscription purchase and your refund request will be subject for review.

Please Note: Refunds can never be provided more than 30 days after the initial purchase date regardless of your activity on the site.