# Lecture Notes: MATH 220 (WSU)


## About this Document


These 103 pages of class notes were uploaded by Ian Notetaker on Tuesday, October 18, 2016. They belong to MATH 220 (Introductory Linear Algebra, Applied Mathematics) at Washington State University, taught by Michael Kasigwa in Fall 2016.



Date Created: 10/18/16

Example: Write the general solution of the system whose augmented matrix is given as

$$\left[\begin{array}{ccccc|c} 1 & 4 & 0 & 0 & -15 & -21 \\ 0 & 0 & 1 & 0 & -2 & -2 \\ 0 & 0 & 0 & 1 & 0 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]$$

Solution: Note that the augmented matrix is a reduced echelon matrix. Let's write out the corresponding system:

$$\begin{aligned} x_1 + 4x_2 - 15x_5 &= -21 \\ x_3 - 2x_5 &= -2 \\ x_4 &= 2 \end{aligned}$$

Notice that columns 2 and 5 in the above augmented matrix are not pivot columns. Therefore, $x_2$ and $x_5$ are free variables. The other variables, $x_1$, $x_3$, and $x_4$, are basic variables. To obtain the general solution, we simply write each basic variable in terms of the free variables. It follows that

$$\text{General Solution:} \quad \begin{cases} x_1 = -21 - 4x_2 + 15x_5 \\ x_2 \text{ is free} \\ x_3 = -2 + 2x_5 \\ x_4 = 2 \\ x_5 \text{ is free} \end{cases}$$

Let's look at the general solution above. Since there is at least one free variable, there are infinitely many choices for our solution. This suggests that a consistent linear system having no free variables is limited to only one (unique) solution. After all, having no free variables means that we do not have the freedom to choose values for any of the variables. This leads us to the next theorem (existence and uniqueness of solutions).

Theorem (Existence and Uniqueness): A linear system is consistent if and only if the rightmost column of the augmented matrix is not a pivot column; that is, if and only if an echelon form of the augmented matrix has no row of the form

$$\begin{bmatrix} 0 & 0 & \cdots & 0 & b \end{bmatrix} \quad \text{with } b \text{ nonzero.}$$

If a linear system is consistent, then the solution set contains either (i) a unique solution (there are no free variables), or (ii) infinitely many solutions (there is at least one free variable).

Note: The last sentence of the above theorem can be restated as follows: A consistent linear system has a unique solution if and only if every column of the coefficient matrix has a pivot. Otherwise, the system has infinitely many solutions.

Example: Consider the following augmented matrices in echelon form. In each case, determine whether the system is consistent.
If the system is consistent, determine whether the solution is unique.

Note: The leading nonzero entries are the pivots. We can apply the above theorem to each augmented matrix.

$$\text{(a)} \left[\begin{array}{ccc|c} 1 & 2 & 6 & 4 \\ 0 & 3 & 7 & 5 \\ 0 & 0 & 4 & 0 \end{array}\right] \qquad \text{(b)} \left[\begin{array}{cccc|c} 2 & 5 & 7 & 6 & 3 \\ 0 & 8 & 3 & 5 & 9 \\ 0 & 0 & 6 & 1 & 4 \end{array}\right]$$

$$\text{(c)} \left[\begin{array}{ccc|c} 4 & 1 & 2 & 5 \\ 0 & 3 & 6 & 3 \\ 0 & 0 & 2 & 6 \\ 0 & 0 & 0 & 0 \end{array}\right] \qquad \text{(d)} \left[\begin{array}{ccc|c} 3 & 4 & 1 & 8 \\ 0 & 9 & 2 & 4 \\ 0 & 0 & 5 & 6 \\ 0 & 0 & 0 & 1 \end{array}\right]$$

Answers:

(a) Consistent (every nonzero row has a pivot, and the last column is not a pivot column); unique (every column in the coefficient matrix is a pivot column)

(b) Consistent (every nonzero row has a pivot, and the last column is not a pivot column); infinitely many solutions (column 4 in the coefficient matrix is not a pivot column)

(c) Consistent; unique

(d) Inconsistent (the last column is a pivot column)

## Section 1.3: Vector Equations

Definition: A vector is an ordered list of real (or complex) numbers $u_1, u_2, \ldots, u_n$ expressed as

$$\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix} \quad \text{or as} \quad \mathbf{u} = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix}.$$

The set of all vectors with $n$ entries is denoted by $\mathbb{R}^n$. Clearly, a vector can be described as a matrix having either only one column or only one row.

Example: Vectors in $\mathbb{R}^2$:

$$\mathbf{u} = \begin{bmatrix} 2 \\ 4 \end{bmatrix}, \qquad \mathbf{v} = \begin{bmatrix} -2 & 1 \end{bmatrix}$$

The vector $\mathbf{u}$ is an example of a column vector (a $2 \times 1$ matrix), and the vector $\mathbf{v}$ is an example of a row vector (a $1 \times 2$ matrix).

Definition: The vector whose entries are all zero is called the zero vector and is denoted by $\mathbf{0}$.

Remark: Your textbook defines a vector to be a column vector. When we solve matrix-vector equations (at least in this course), our vectors will always be column vectors. However, we may use the term "vector" to mean a row vector (after all, a row vector is technically a vector also).

Definition: The entries of a vector are called the components of the vector.

### Equality of Vectors

Consider two vectors in $\mathbb{R}^2$:

$$\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \quad \text{and} \quad \mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}.$$

We say that $\mathbf{u} = \mathbf{v}$ if and only if $u_1 = v_1$ and $u_2 = v_2$; that is, the two vectors are equal if and only if their corresponding entries (or components) are equal.
This definition generalizes to vectors in $\mathbb{R}^n$.

### Geometric Interpretation of Vectors

Vectors have a geometric interpretation that is most easily understood in $\mathbb{R}^2$. To plot the vector $\mathbf{u} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$, we draw an arrow from the origin to the point $(1, 2)$. In a similar fashion, we can plot the vector $\mathbf{v} = \begin{bmatrix} -3 \\ 4 \end{bmatrix}$. (See figure below.)

### Operations on Vectors

#### Vector Addition

To add two vectors, we simply add their respective components together. The sum of the two vectors is a new vector. To perform addition on any two vectors, it is necessary that the vectors be the same size. Otherwise, vector addition would not be well defined.

Example: Consider again the vectors

$$\mathbf{u} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \quad \text{and} \quad \mathbf{v} = \begin{bmatrix} -3 \\ 4 \end{bmatrix}.$$

Then

$$\mathbf{u} + \mathbf{v} = \begin{bmatrix} 1 - 3 \\ 2 + 4 \end{bmatrix} = \begin{bmatrix} -2 \\ 6 \end{bmatrix}.$$

#### Geometric Interpretation of Vector Addition

Parallelogram Rule for Addition: Let vectors $\mathbf{u}$ and $\mathbf{v}$ in $\mathbb{R}^2$ form two adjacent sides of a parallelogram with vertices at the origin, the tip of $\mathbf{u}$, and the tip of $\mathbf{v}$. Then the tip of $\mathbf{u} + \mathbf{v}$ is at the fourth vertex. (See figure below.)

#### Scalar Multiplication

Definition: A scalar is a single real number, usually denoted by $c$. To multiply a vector by a scalar, we simply multiply each component of the vector by that scalar.

Example: Consider again the vector $\mathbf{u} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$. Then

$$2\mathbf{u} = \begin{bmatrix} 2 \\ 4 \end{bmatrix} \quad \text{and} \quad -3\mathbf{u} = \begin{bmatrix} -3 \\ -6 \end{bmatrix}.$$

#### Geometric Interpretation of Scalar Multiplication

Suppose that $c$ is a positive scalar and $\mathbf{u}$ is a given vector. To draw the vector $c\mathbf{u}$, we can first draw the vector $\mathbf{u}$ and then stretch (or compress) it by a factor of $c$. Since $c$ is positive, $c\mathbf{u}$ points in the same direction as $\mathbf{u}$. In the case where $c$ is negative, $c\mathbf{u}$ would point in the direction opposite of $\mathbf{u}$. Try this: Draw the vectors in the previous example and compare $2\mathbf{u}$ and $-3\mathbf{u}$ to $\mathbf{u}$.

### Vectors in $\mathbb{R}^n$

The set of real numbers (scalars), $\mathbb{R}$, is a line (1 dimension). The $xy$-plane is also called $\mathbb{R}^2$ and is 2-dimensional. Three-dimensional space is $\mathbb{R}^3$. (A vector in $\mathbb{R}^3$ has 3 components.) $n$-dimensional space is $\mathbb{R}^n$. (A vector in $\mathbb{R}^n$ has $n$ components.)
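The component-wise operations above can be checked numerically. Here is a small NumPy sketch (NumPy arrays are only a convenient stand-in for the vectors in these notes; the variable names are ours):

```python
import numpy as np

u = np.array([1, 2])
v = np.array([-3, 4])

# Vector addition: add corresponding components.
print(u + v)      # -> [-2  6]

# Scalar multiplication: scale every component.
print(2 * u)      # -> [2 4]
print(-3 * u)     # -> [-3 -6]
```

Note that NumPy refuses to add arrays of incompatible sizes, which mirrors the requirement that vector addition is only defined for vectors of the same size.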
### Algebraic Properties of $\mathbb{R}^n$

For all vectors $\mathbf{u}$, $\mathbf{v}$, $\mathbf{w}$ in $\mathbb{R}^n$ and all scalars $c$ and $d$:

(i) $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$
(ii) $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$
(iii) $\mathbf{u} + \mathbf{0} = \mathbf{0} + \mathbf{u} = \mathbf{u}$
(iv) $\mathbf{u} + (-\mathbf{u}) = -\mathbf{u} + \mathbf{u} = \mathbf{0}$, where $-\mathbf{u}$ denotes $(-1)\mathbf{u}$
(v) $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$
(vi) $(c + d)\mathbf{u} = c\mathbf{u} + d\mathbf{u}$
(vii) $c(d\mathbf{u}) = (cd)\mathbf{u}$
(viii) $1\mathbf{u} = \mathbf{u}$

### Linear Combinations

Definition: Suppose $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p$ are given vectors in $\mathbb{R}^n$ and $c_1, c_2, \ldots, c_p$ are scalars. The vector $\mathbf{y}$ defined by

$$\mathbf{y} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_p\mathbf{v}_p$$

is called a linear combination of $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p$ with weights $c_1, c_2, \ldots, c_p$.

Example: Consider the following vectors:

$$\mathbf{b} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}, \quad \mathbf{v}_1 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.$$

Is $\mathbf{b}$ a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$? In other words, do there exist scalars $c_1$ and $c_2$ such that $\mathbf{b} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2$?

Solution: Let's set up the equation $\mathbf{b} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2$ and use our rules for vectors to solve for $c_1$ and $c_2$:

$$c_1\begin{bmatrix} 0 \\ 1 \end{bmatrix} + c_2\begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \end{bmatrix} \implies \begin{bmatrix} 0 \\ c_1 \end{bmatrix} + \begin{bmatrix} 2c_2 \\ c_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \end{bmatrix} \implies \begin{bmatrix} 2c_2 \\ c_1 + c_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$$

Therefore, we must have

$$\begin{aligned} 2c_2 &= 2 \\ c_1 + c_2 &= 3 \end{aligned}$$

which is a system of linear equations that has the following augmented matrix:

$$\left[\begin{array}{cc|c} 0 & 2 & 2 \\ 1 & 1 & 3 \end{array}\right]$$

Notice that the columns of the augmented matrix are exactly the vectors that we started with ($\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{b}$, in that order). If the system is consistent, then $\mathbf{b}$ is a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$. Note that the augmented matrix has the following echelon form:

$$\left[\begin{array}{cc|c} 1 & 1 & 3 \\ 0 & 1 & 1 \end{array}\right]$$

Thus, we have $c_2 = 1$, and after back-substituting this into the first equation, we obtain $c_1 = 2$. It follows that

$$\mathbf{b} = 2\mathbf{v}_1 + 1\mathbf{v}_2,$$

which shows that $\mathbf{b}$ is a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$.

The example that we have just completed should convince us that in order to determine whether a vector $\mathbf{b}$ is a linear combination of the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p$, we'll need to set up the augmented matrix

$$\begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_p & \mathbf{b} \end{bmatrix}$$

and then determine whether it corresponds to a consistent system.
If it is consistent, then the scalars $c_1, c_2, \ldots, c_p$ can be found by solving the system, and thus $\mathbf{b}$ is a linear combination of $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p$.

Example: Consider the following vectors:

$$\mathbf{b} = \begin{bmatrix} -15 \\ 1 \\ 0 \end{bmatrix}, \quad \mathbf{v}_1 = \begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} -1 \\ 2 \\ 3 \end{bmatrix}, \quad \mathbf{v}_3 = \begin{bmatrix} 1 \\ -1 \\ -2 \end{bmatrix}.$$

Determine whether $\mathbf{b}$ is a linear combination of $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$.

Solution: As in the previous example, let's set up the augmented matrix $\begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \mathbf{v}_3 & \mathbf{b} \end{bmatrix}$ and then determine whether the system has a solution:

$$\left[\begin{array}{ccc|c} 3 & -1 & 1 & -15 \\ 1 & 2 & -1 & 1 \\ 2 & 3 & -2 & 0 \end{array}\right]$$

Let's interchange the first and second rows so that we'll have the number 1 as our first pivot:

$$\left[\begin{array}{ccc|c} 1 & 2 & -1 & 1 \\ 3 & -1 & 1 & -15 \\ 2 & 3 & -2 & 0 \end{array}\right]$$

Therefore, we'll need to apply the following row operations:

$$-3R_1 + R_2 \to R_2, \quad -2R_1 + R_3 \to R_3 \implies \left[\begin{array}{ccc|c} 1 & 2 & -1 & 1 \\ 0 & -7 & 4 & -18 \\ 0 & -1 & 0 & -2 \end{array}\right]$$

Let's interchange the second and third rows:

$$\left[\begin{array}{ccc|c} 1 & 2 & -1 & 1 \\ 0 & -1 & 0 & -2 \\ 0 & -7 & 4 & -18 \end{array}\right]$$

Since the next pivot is $-1$ (the first nonzero entry in row 2), we'll need to apply the following row operation:

$$-7R_2 + R_3 \to R_3 \implies \left[\begin{array}{ccc|c} 1 & 2 & -1 & 1 \\ 0 & -1 & 0 & -2 \\ 0 & 0 & 4 & -4 \end{array}\right]$$

which is equivalent to

$$\left[\begin{array}{ccc|c} 1 & 2 & -1 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & -1 \end{array}\right].$$

By inspection, we see that $c_3 = -1$ and $c_2 = 2$. Plugging these values into the first equation of the system (which corresponds to the first row of the matrix) yields $c_1 = -4$. Thus

$$\mathbf{b} = -4\mathbf{v}_1 + 2\mathbf{v}_2 - \mathbf{v}_3,$$

which shows that $\mathbf{b}$ is a linear combination of $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$.

### Linear Combinations (Continued)

Example: Consider the vector equation

$$c_1\underbrace{\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}}_{\mathbf{v}_1} + c_2\underbrace{\begin{bmatrix} 2 \\ 1 \\ -1 \end{bmatrix}}_{\mathbf{v}_2} + c_3\underbrace{\begin{bmatrix} -2 \\ 3 \\ 2 \end{bmatrix}}_{\mathbf{v}_3} = \mathbf{b}$$

The components of $\mathbf{b}$ depend on the choices of the scalars $c_1$, $c_2$, and $c_3$. For example, if $c_1 = 1$, $c_2 = 2$, and $c_3 = 0$, we have

$$1\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} + 2\begin{bmatrix} 2 \\ 1 \\ -1 \end{bmatrix} + 0\begin{bmatrix} -2 \\ 3 \\ 2 \end{bmatrix} = \underbrace{\begin{bmatrix} 5 \\ 0 \\ -1 \end{bmatrix}}_{\mathbf{b}_1}$$

which shows that $\mathbf{b}_1$ is a linear combination of the vectors $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$. On the other hand, suppose we choose the scalars to be $c_1 = 2$, $c_2 = -1$, and $c_3 = 3$.
Then

$$2\underbrace{\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}}_{\mathbf{v}_1} + (-1)\underbrace{\begin{bmatrix} 2 \\ 1 \\ -1 \end{bmatrix}}_{\mathbf{v}_2} + 3\underbrace{\begin{bmatrix} -2 \\ 3 \\ 2 \end{bmatrix}}_{\mathbf{v}_3} = \underbrace{\begin{bmatrix} -6 \\ 4 \\ 9 \end{bmatrix}}_{\mathbf{b}_2}$$

where $\mathbf{b}_2$ is another linear combination of the vectors $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$.

Note: Since there are infinitely many choices for $c_1$, $c_2$, and $c_3$, there are infinitely many linear combinations of $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$. Suppose that we collect all possible linear combinations of $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$. This collection (or set) of vectors is given a name, which we now define.

Definition: The set of all linear combinations of the vectors $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ is called the span of this set of vectors. It is denoted by $\text{Span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$. Since $\mathbf{b}$ can be written as a linear combination of $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$, we say that $\mathbf{b}$ is a vector in $\text{Span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$.

The notion of span generalizes to sets of vectors in $\mathbb{R}^n$.

Definition: Let $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$ be a set of vectors in $\mathbb{R}^n$. The span of $S$, denoted by $\text{Span}(S)$, is the set of all linear combinations

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_p\mathbf{v}_p$$

where $c_1, c_2, \ldots, c_p$ are scalars. Alternatively, we can say that $\text{Span}(S)$ is the subset of $\mathbb{R}^n$ spanned (or generated) by the vectors in $S$.

Example: Recall the vectors

$$\mathbf{b} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}, \quad \mathbf{v}_1 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.$$

Since $\mathbf{b} = 2\mathbf{v}_1 + \mathbf{v}_2$ (i.e., $\mathbf{b}$ is a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$), we say that $\mathbf{b}$ is in the span of $\{\mathbf{v}_1, \mathbf{v}_2\}$.

Example: Let $S = \{\mathbf{u}_1, \mathbf{u}_2\}$, where

$$\mathbf{u}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \text{and} \quad \mathbf{u}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$

Since every vector in $\mathbb{R}^2$ can be written as a linear combination of $\mathbf{u}_1$ and $\mathbf{u}_2$, we say that $S$ spans $\mathbb{R}^2$. Equivalently, we can say that $\text{Span}(S) = \mathbb{R}^2$.

Example: Let $S = \{\mathbf{v}_1, \mathbf{v}_2\}$, where

$$\mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \text{and} \quad \mathbf{v}_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

In this case, $S$ does not span $\mathbb{R}^2$. Why not?

Solution: Note that every linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$ has the form

$$c_1\begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2\begin{bmatrix} 0 \\ 0 \end{bmatrix} = c_1\begin{bmatrix} 1 \\ 0 \end{bmatrix}.$$

This means that every linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$ is some scalar multiple of $\mathbf{v}_1$. However, there is at least one vector in $\mathbb{R}^2$, e.g. $\mathbf{b} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$, that is not a scalar multiple of $\mathbf{v}_1$.
In other words, there is no scalar $c_1$ such that

$$\begin{bmatrix} 2 \\ 1 \end{bmatrix} = c_1\begin{bmatrix} 1 \\ 0 \end{bmatrix}.$$

Thus, there is no way to access the vector $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$ using the directions $\mathbf{v}_1$ and $\mathbf{v}_2$, and so $S$ could not possibly span $\mathbb{R}^2$.

More general examples:

(a) $\text{Span}\{\mathbf{0}\}$ consists of a single point: the origin.

(b) If $\mathbf{v}$ is not the zero vector, then $\text{Span}\{\mathbf{v}\}$ is the line through the origin that is parallel to $\mathbf{v}$. In other words, $\text{Span}\{\mathbf{v}\}$ consists of all scalar multiples of $\mathbf{v}$.

(c) If $\mathbf{v}_1$ and $\mathbf{v}_2$ are two nonzero and nonparallel vectors, then $\text{Span}\{\mathbf{v}_1, \mathbf{v}_2\}$ is a plane through the origin in $\mathbb{R}^3$.

Some questions to consider:

(a) If $\mathbf{v}_1$ is a scalar multiple of $\mathbf{v}_2$, then $\mathbf{v}_1$ and $\mathbf{v}_2$ are parallel to each other. In this case, what is $\text{Span}\{\mathbf{v}_1, \mathbf{v}_2\}$?

Answer: Since $\mathbf{v}_1 = c\mathbf{v}_2$ for some scalar $c$, $\mathbf{v}_1$ and $\mathbf{v}_2$ span the same set of vectors. Thus, we can remove one of the vectors (in particular $\mathbf{v}_2$) to obtain the same set:

$$\text{Span}\{\mathbf{v}_1, \mathbf{v}_2\} = \text{Span}\{\mathbf{v}_1\},$$

which is the line that passes through the origin and is parallel to $\mathbf{v}_1$.

(b) In the general examples above, the span of the given set of vectors contains the origin (i.e., it contains the zero vector). Is this true in general? In other words, does the span of any set of vectors always contain the zero vector? Why or why not?

Answer: We can always set each scalar coefficient to zero:

$$\mathbf{0} = 0\mathbf{v}_1 + 0\mathbf{v}_2 + \cdots + 0\mathbf{v}_p$$

This shows that $\mathbf{0}$ is in the span of any set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$. Geometrically, $\text{Span}\{\mathbf{v}_1, \ldots, \mathbf{v}_p\}$ always crosses or touches the origin.

## Section 1.4: The Matrix Equation Ax = b

Definition (Matrix-vector multiplication): If $A$ is an $m \times n$ matrix, with columns $\mathbf{a}_1, \ldots, \mathbf{a}_n$, and if $\mathbf{x}$ is in $\mathbb{R}^n$, then the product of $A$ and $\mathbf{x}$, denoted by $A\mathbf{x}$, is the linear combination of the columns of $A$ using the corresponding entries in $\mathbf{x}$ as weights; that is,

$$A\mathbf{x} = \begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n$$

Example: Calculate $A\mathbf{x}$ for

$$A = \begin{bmatrix} 1 & 2 & -1 & 3 \\ -2 & 1 & 3 & 4 \end{bmatrix} \quad \text{and} \quad \mathbf{x} = \begin{bmatrix} 3 \\ 1 \\ 2 \\ -1 \end{bmatrix}$$

using the definition of $A\mathbf{x}$.
Solution: To compute $A\mathbf{x}$ using the definition above, we first multiply the first component of $\mathbf{x}$ by the first column of $A$, then multiply the second component of $\mathbf{x}$ by the second column of $A$, and so on. We then add all the results together to obtain

$$A\mathbf{x} = 3\begin{bmatrix} 1 \\ -2 \end{bmatrix} + 1\begin{bmatrix} 2 \\ 1 \end{bmatrix} + 2\begin{bmatrix} -1 \\ 3 \end{bmatrix} + (-1)\begin{bmatrix} 3 \\ 4 \end{bmatrix}.$$

We can now simplify this result by applying scalar multiplication and vector addition:

$$A\mathbf{x} = \begin{bmatrix} 0 \\ -3 \end{bmatrix}$$

Note: Since each column of $A$ is multiplied by the corresponding component of $\mathbf{x}$, it is necessary that the number of columns of $A$ equal the number of components of $\mathbf{x}$.

### Row-Vector Rule for Computing Ax

If the product $A\mathbf{x}$ is defined, then the $i$th entry in $A\mathbf{x}$ is the sum of the products of corresponding entries from row $i$ of $A$ and from the vector $\mathbf{x}$. Note that when using the definition of $A\mathbf{x}$, we apply the calculations to each column. When using the row-vector rule, we apply the calculations to each row.

Example: Calculate $A\mathbf{x}$ for

$$A = \begin{bmatrix} 1 & 2 & -1 & 3 \\ -2 & 1 & 3 & 4 \end{bmatrix} \quad \text{and} \quad \mathbf{x} = \begin{bmatrix} 3 \\ 1 \\ 2 \\ -1 \end{bmatrix}$$

using the row-vector rule for computing $A\mathbf{x}$.

Solution: To compute $A\mathbf{x}$ using the row-vector rule, we first multiply each entry of the first row of $A$ by the corresponding entry of $\mathbf{x}$. We then apply the same approach to the entries of the second row of $A$:

$$A\mathbf{x} = \begin{bmatrix} 1 & 2 & -1 & 3 \\ -2 & 1 & 3 & 4 \end{bmatrix} \begin{bmatrix} 3 \\ 1 \\ 2 \\ -1 \end{bmatrix} = \begin{bmatrix} 3(1) + 1(2) + 2(-1) + (-1)(3) \\ 3(-2) + 1(1) + 2(3) + (-1)(4) \end{bmatrix} = \begin{bmatrix} 0 \\ -3 \end{bmatrix}$$

Example: Consider the system

$$\begin{aligned} x_1 + x_2 &= 6 \\ -3x_1 + x_2 &= 2 \end{aligned}$$

(a) Write the system as a vector equation.
(b) Write the system as a matrix equation ($A\mathbf{x} = \mathbf{b}$).
(c) Find a vector $\mathbf{x}$ that satisfies the equation in part (b).
Solution:

(a) From what we've learned about the equality of two vectors (in Section 1.3), we can write the system in vector form as

$$\begin{bmatrix} x_1 + x_2 \\ -3x_1 + x_2 \end{bmatrix} = \begin{bmatrix} 6 \\ 2 \end{bmatrix}.$$

Furthermore, from what we know about adding two vectors, we can split the vector on the left side to get

$$\begin{bmatrix} x_1 \\ -3x_1 \end{bmatrix} + \begin{bmatrix} x_2 \\ x_2 \end{bmatrix} = \begin{bmatrix} 6 \\ 2 \end{bmatrix}.$$

Lastly, from what we know about scalar multiplication, we can factor out the variables $x_1$ and $x_2$ to obtain

$$x_1\underbrace{\begin{bmatrix} 1 \\ -3 \end{bmatrix}}_{\mathbf{v}_1} + x_2\underbrace{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}_{\mathbf{v}_2} = \underbrace{\begin{bmatrix} 6 \\ 2 \end{bmatrix}}_{\mathbf{b}}$$

as our vector equation.

(b) From the definition of $A\mathbf{x}$, we can write our vector equation as a matrix equation:

$$\underbrace{\begin{bmatrix} 1 & 1 \\ -3 & 1 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}_{\mathbf{x}} = \underbrace{\begin{bmatrix} 6 \\ 2 \end{bmatrix}}_{\mathbf{b}}$$

(c) To find a vector $\mathbf{x}$ that satisfies our matrix equation, we set up the associated augmented matrix and row reduce it to echelon form:

$$\left[\begin{array}{cc|c} 1 & 1 & 6 \\ -3 & 1 & 2 \end{array}\right] \sim \left[\begin{array}{cc|c} 1 & 1 & 6 \\ 0 & 4 & 20 \end{array}\right] \sim \left[\begin{array}{cc|c} 1 & 1 & 6 \\ 0 & 1 & 5 \end{array}\right]$$

In the last augmented matrix, we see that $x_2 = 5$. Back-substituting yields $x_1 = 1$. Thus

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 5 \end{bmatrix}.$$

### Row Picture vs. Column Picture

Consider the system that we just solved:

$$\begin{aligned} x_1 + x_2 &= 6 \\ -3x_1 + x_2 &= 2 \end{aligned}$$

Let's look at two different geometric interpretations of the solution of this system. We will call these the "row picture" and the "column picture".

Row Picture: The row picture is the visualization that we are most familiar with. Here, we work with the two rows (or two equations) of the augmented matrix separately. We sketch the two lines in our system and find their point of intersection. We already know after solving this system that the point of intersection is $(x_1, x_2) = (1, 5)$.

Column Picture: The column picture is a visualization that is based on the columns of the associated augmented matrix. Let's write the given system in vector form:

$$x_1\underbrace{\begin{bmatrix} 1 \\ -3 \end{bmatrix}}_{\mathbf{v}_1} + x_2\underbrace{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}_{\mathbf{v}_2} = \underbrace{\begin{bmatrix} 6 \\ 2 \end{bmatrix}}_{\mathbf{b}}$$

We already know after solving the system that there exist $x_1$ and $x_2$ such that the vector equation is true.
Since $(x_1, x_2) = (1, 5)$ is a solution (in fact, it is the only solution), we have

$$1\underbrace{\begin{bmatrix} 1 \\ -3 \end{bmatrix}}_{\mathbf{v}_1} + 5\underbrace{\begin{bmatrix} 1 \\ 1 \end{bmatrix}}_{\mathbf{v}_2} = \underbrace{\begin{bmatrix} 6 \\ 2 \end{bmatrix}}_{\mathbf{b}}$$

which shows that $\mathbf{b}$ is a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$.

Figure 1: The row picture (left) and the column picture (right) of the given system.

Example: Consider the system

$$\begin{aligned} 2x_1 + x_3 &= 2 \\ x_1 + 2x_2 - x_3 &= -1 \\ 4x_2 + 4x_3 &= 3 \end{aligned}$$

(a) Write the system as a vector equation.
(b) Write the system as a matrix equation ($A\mathbf{x} = \mathbf{b}$).
(c) Find a vector $\mathbf{x}$ that satisfies the equation in part (b).

Solution:

(a) Applying the same procedure as in part (a) of the previous example yields

$$x_1\begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix} + x_2\begin{bmatrix} 0 \\ 2 \\ 4 \end{bmatrix} + x_3\begin{bmatrix} 1 \\ -1 \\ 4 \end{bmatrix} = \begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix}$$

as our vector equation.

(b) By the definition of $A\mathbf{x}$, we obtain our matrix equation:

$$\underbrace{\begin{bmatrix} 2 & 0 & 1 \\ 1 & 2 & -1 \\ 0 & 4 & 4 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}}_{\mathbf{x}} = \underbrace{\begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix}}_{\mathbf{b}}$$

(c) To find a vector $\mathbf{x}$ that satisfies our matrix equation, we set up the associated augmented matrix and row reduce it to echelon form:

$$\begin{bmatrix} A & \mathbf{b} \end{bmatrix} = \left[\begin{array}{ccc|c} 2 & 0 & 1 & 2 \\ 1 & 2 & -1 & -1 \\ 0 & 4 & 4 & 3 \end{array}\right] \sim \left[\begin{array}{ccc|c} 1 & 2 & -1 & -1 \\ 2 & 0 & 1 & 2 \\ 0 & 4 & 4 & 3 \end{array}\right] \sim \left[\begin{array}{ccc|c} 1 & 2 & -1 & -1 \\ 0 & -4 & 3 & 4 \\ 0 & 4 & 4 & 3 \end{array}\right]$$

$$\sim \left[\begin{array}{ccc|c} 1 & 2 & -1 & -1 \\ 0 & -4 & 3 & 4 \\ 0 & 0 & 7 & 7 \end{array}\right] \sim \left[\begin{array}{ccc|c} 1 & 2 & -1 & -1 \\ 0 & -4 & 3 & 4 \\ 0 & 0 & 1 & 1 \end{array}\right]$$

It follows that $x_3 = 1$. Back-substituting this value into the second equation (corresponding to the last augmented matrix) yields $x_2 = -1/4$. Finally, substituting the values of $x_2$ and $x_3$ into the first equation yields $x_1 = 1/2$. Thus, we obtain

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1/2 \\ -1/4 \\ 1 \end{bmatrix}$$

### Writing a Solution Set (of a Consistent System) in Parametric Vector Form

Example: Describe all solutions of the system

$$\begin{aligned} 2x_1 + 2x_2 + 4x_3 &= 8 \\ -4x_1 - 4x_2 - 8x_3 &= -16 \\ -3x_2 - 3x_3 &= 12 \end{aligned}$$

Then compare them to the solutions of the associated homogeneous system. Let's follow the steps below. (Note that these are exactly the same steps that we followed in the last example of last Thursday's lecture.)

1. Row reduce the augmented matrix to reduced echelon form.
2. Express each basic variable in terms of any free variables appearing in an equation.
3.
Write a typical solution $\mathbf{x}$ as a vector whose entries depend on the free variables, if any.
4. Decompose $\mathbf{x}$ into a linear combination of vectors (with numeric entries) using the free variables as parameters.

Solution: Let's reduce the augmented matrix to reduced echelon form:

$$\begin{bmatrix} A & \mathbf{b} \end{bmatrix} = \left[\begin{array}{ccc|c} 2 & 2 & 4 & 8 \\ -4 & -4 & -8 & -16 \\ 0 & -3 & -3 & 12 \end{array}\right] \sim \left[\begin{array}{ccc|c} 1 & 1 & 2 & 4 \\ -1 & -1 & -2 & -4 \\ 0 & 1 & 1 & -4 \end{array}\right] \sim \left[\begin{array}{ccc|c} 1 & 1 & 2 & 4 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & -4 \end{array}\right] \sim \left[\begin{array}{ccc|c} 1 & 1 & 2 & 4 \\ 0 & 1 & 1 & -4 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

Note that the last augmented matrix is an echelon matrix. We are one step away from reducing it to reduced echelon form. Multiplying the second row by $-1$ and adding it to the first row yields

$$\begin{bmatrix} A & \mathbf{b} \end{bmatrix} \sim \left[\begin{array}{ccc|c} 1 & 0 & 1 & 8 \\ 0 & 1 & 1 & -4 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

Since the first two columns are pivot columns, $x_1$ and $x_2$ are basic variables. Therefore, $x_3$ is a free variable. Let's write out the system and express each basic variable in terms of the free variable:

$$\begin{aligned} x_1 + x_3 &= 8 \\ x_2 + x_3 &= -4 \end{aligned}$$

It follows that the general solution is

$$\begin{aligned} x_1 &= 8 - x_3 \\ x_2 &= -4 - x_3 \\ x_3 &\text{ is free} \end{aligned}$$

Therefore, the solution vector $\mathbf{x}$ of the given nonhomogeneous system is

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 8 - x_3 \\ -4 - x_3 \\ x_3 \end{bmatrix} = \begin{bmatrix} 8 \\ -4 \\ 0 \end{bmatrix} + x_3\begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix} \tag{1}$$

which is written in parametric vector form (here $x_3$ is regarded as a parameter). Let's compare the solution set of the given system to the solution set of the corresponding homogeneous system $A\mathbf{x} = \mathbf{0}$. Note that the augmented matrix $\begin{bmatrix} A & \mathbf{0} \end{bmatrix}$ has the following reduced echelon form:

$$\left[\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

From here, it is easy to show that all solutions of $A\mathbf{x} = \mathbf{0}$ are of the form

$$\mathbf{x} = x_3\begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix} \tag{2}$$

Comparing (1) and (2), we see that the solution set of $A\mathbf{x} = \mathbf{b}$ is the line through the point $(8, -4, 0)$ in $\mathbb{R}^3$ parallel to the vector $\begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix}$. The solution set of $A\mathbf{x} = \mathbf{0}$ consists of all scalar multiples of this vector. Note that the vector $\begin{bmatrix} 8 \\ -4 \\ 0 \end{bmatrix}$ is one particular solution of $A\mathbf{x} = \mathbf{b}$ [corresponding to $x_3 = 0$ in (1)].

Note: If you have any basic variables on the right-hand side of your general solution, something went wrong.
This is often a sign that your matrix was not in reduced echelon form when you used it to write the general solution. Although your textbook instructs you to reduce the matrix to reduced echelon form, you can actually still work with the nonreduced echelon matrix. If you decide to use the nonreduced echelon matrix instead, just be sure to express all your basic variables in terms of the free variables only.

## Section 1.7: Linear Independence

Example: Consider the following vector equation:

$$x_1\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + x_2\begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} + x_3\begin{bmatrix} 2 \\ 3 \\ -4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

This system has only the trivial solution, since

$$\left[\begin{array}{ccc|c} 1 & 1 & 2 & 0 \\ 1 & 0 & 3 & 0 \\ 0 & 2 & -4 & 0 \end{array}\right] \sim \left[\begin{array}{ccc|c} 1 & 1 & 2 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right].$$

This means that the vectors $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ are linearly independent.

Definition: Let $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$ be a set of vectors in $\mathbb{R}^n$. If the only solution to the vector equation

$$x_1\mathbf{v}_1 + x_2\mathbf{v}_2 + \cdots + x_p\mathbf{v}_p = \mathbf{0}$$

is the trivial solution given by $x_1 = x_2 = \cdots = x_p = 0$, then the set $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$ is said to be linearly independent. On the other hand, if there exists at least one nontrivial solution, then the set is linearly dependent.

Example: Determine if the set of vectors

$$\mathbf{v}_1 = \begin{bmatrix} -1 \\ 4 \\ -2 \\ -3 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} 3 \\ -13 \\ 7 \\ 7 \end{bmatrix}, \quad \mathbf{v}_3 = \begin{bmatrix} -2 \\ 1 \\ 9 \\ -5 \end{bmatrix}$$

is linearly independent.

Solution: We could restate the problem in the form of a question: Can we write the zero vector as a linear combination of $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ where the weights $x_1$, $x_2$, and $x_3$ are not all zero? If we can, then $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ would be linearly dependent. If not, then $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ would be linearly independent. To determine this, let's set up the augmented matrix and reduce it to echelon form. Note that

$$\begin{bmatrix} A & \mathbf{0} \end{bmatrix} = \left[\begin{array}{ccc|c} -1 & 3 & -2 & 0 \\ 4 & -13 & 1 & 0 \\ -2 & 7 & 9 & 0 \\ -3 & 7 & -5 & 0 \end{array}\right]$$

has the echelon form

$$\begin{bmatrix} R & \mathbf{0} \end{bmatrix} = \left[\begin{array}{ccc|c} 1 & -3 & 2 & 0 \\ 0 & 1 & 7 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

Since every column of the coefficient matrix $R$ has a pivot, the equation $x_1\mathbf{v}_1 + x_2\mathbf{v}_2 + x_3\mathbf{v}_3 = \mathbf{0}$ has only one solution, namely $x_1 = x_2 = x_3 = 0$. Thus, by definition, $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is linearly independent.
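The pivot count in the example above can be cross-checked numerically: a set of column vectors is linearly independent exactly when the matrix formed from them has rank equal to the number of vectors. A small NumPy sketch (not part of the original notes; `numpy.linalg.matrix_rank` does the pivot counting for us):

```python
import numpy as np

# Columns are v1, v2, v3 from the example above.
A = np.array([[-1,   3, -2],
              [ 4, -13,  1],
              [-2,   7,  9],
              [-3,   7, -5]])

# Independent iff rank equals the number of columns,
# i.e. every column of an echelon form has a pivot.
rank = np.linalg.matrix_rank(A)
print(rank == A.shape[1])   # -> True: {v1, v2, v3} is linearly independent
```

This is the same criterion the row reduction established: three pivots for three columns.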
### Linear Independence of Matrix Columns

The definition of linear independence and our knowledge of solutions of linear systems bring us to the following fact:

Fact: The columns of a matrix $A$ are linearly independent if and only if the equation $A\mathbf{x} = \mathbf{0}$ has only the trivial solution.

Example: Let

$$A = \begin{bmatrix} 1 & 2 & 4 & 3 \\ 1 & 2 & 8 & 7 \\ 2 & 4 & 8 & 7 \end{bmatrix}.$$

Are the columns of $A$ linearly independent? (In other words, do the columns of $A$ form a linearly independent set?)

Solution: In this problem, we claim that the columns of $A$ are linearly dependent. How do we show this? There are three ways that we can establish this. Our first approach is the usual row reduction technique. Note that the augmented matrix $\begin{bmatrix} A & \mathbf{0} \end{bmatrix} = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \mathbf{v}_3 & \mathbf{v}_4 & \mathbf{0} \end{bmatrix}$ has the echelon form

$$\begin{bmatrix} R & \mathbf{0} \end{bmatrix} = \left[\begin{array}{cccc|c} 1 & 2 & 4 & 3 & 0 \\ 0 & 0 & 4 & 4 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{array}\right]$$

Since at least one of the columns of the coefficient matrix $R$ is not a pivot column, the equation $x_1\mathbf{v}_1 + x_2\mathbf{v}_2 + x_3\mathbf{v}_3 + x_4\mathbf{v}_4 = \mathbf{0}$ has infinitely many nontrivial solutions. Thus, $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}$ is linearly dependent.

Another way that we could determine linear dependence is by simply looking at the columns of $A$. Sometimes, you are given a matrix in which one of the columns is a multiple of another column. In our example, the second column of $A$ is a multiple of the first column. You can take advantage of this fact in order to write the zero vector as a linear combination of $\mathbf{v}_1$, $\mathbf{v}_2$, $\mathbf{v}_3$, $\mathbf{v}_4$ in which not all the weights are zero. For instance, we could write

$$2\mathbf{v}_1 - \mathbf{v}_2 + 0\mathbf{v}_3 + 0\mathbf{v}_4 = \mathbf{0}.$$

(Clever move, huh?) Since at least one of the weights is nonzero, $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}$ is linearly dependent.

We can also take a more geometric approach to show linear dependence. Notice that the matrix $A$ has four column vectors in $\mathbb{R}^3$. Let's say that three of these vectors are not linear combinations of each other. Then these vectors are enough to span $\mathbb{R}^3$. This would mean that the remaining column vector is a linear combination of the others (i.e., the remaining vector is in the span of the other vectors).
In this case, one of the vectors "depends" on the others, thus establishing linear dependence. This situation happens whenever our matrix $A$ contains more columns than rows (which is equivalent to saying that the number of vectors in a set is greater than the number of components of each vector). This idea actually comes from a theorem that will be emphasized in the next set of lecture notes.

### Sets of One Vector

Fact: A set containing only one vector is linearly independent if and only if that vector is not the zero vector.

Let's think about this fact for a minute and check to see if it makes sense. Let $S = \{\mathbf{v}\}$ be a set containing only one vector $\mathbf{v}$. Assume that $\mathbf{v} \neq \mathbf{0}$. Then the equation $x_1\mathbf{v} = \mathbf{0}$ has $x_1 = 0$ (the trivial solution) as its only solution, which means that $S$ is linearly independent (by definition). However, if $\mathbf{v} = \mathbf{0}$, then the equation $x_1\mathbf{v} = \mathbf{0}$ has infinitely many nontrivial solutions, which means that $S$ is linearly dependent.

### Sets of Two or More Vectors

If there are only two vectors in our set, it is very easy to tell if they are linearly independent.

Example: Determine if $\mathbf{v}_1 = \begin{bmatrix} 1 \\ 3 \end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix} 4 \\ 12 \end{bmatrix}$ are linearly independent.

Solution: Since $\mathbf{v}_2 = 4\mathbf{v}_1$ (i.e., $\mathbf{v}_2$ is a multiple of $\mathbf{v}_1$), we conclude that $\{\mathbf{v}_1, \mathbf{v}_2\}$ is linearly dependent.

Theorem: If $A$ is an $m \times n$ matrix, with columns $\mathbf{a}_1, \ldots, \mathbf{a}_n$, and if $\mathbf{b}$ is in $\mathbb{R}^m$, the matrix equation $A\mathbf{x} = \mathbf{b}$ has the same solution set as the vector equation

$$x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n = \mathbf{b},$$

which, in turn, has the same solution set as the system of linear equations whose augmented matrix is

$$\begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n & \mathbf{b} \end{bmatrix}.$$

### Existence of Solutions

Fact: The equation $A\mathbf{x} = \mathbf{b}$ has a solution if and only if $\mathbf{b}$ is a linear combination of the columns of $A$.

Another way of stating the above fact is to say that $A\mathbf{x} = \mathbf{b}$ is consistent if and only if $\mathbf{b}$ is in the span of the columns of $A$.

Question: Is the equation $A\mathbf{x} = \mathbf{b}$ consistent for every possible $\mathbf{b}$? Example 3 on page 36 of your textbook addresses this question.
However, even without looking at this example, we should be convinced that the answer to the question is "no", based on what we have learned about linear combinations of vectors. One thing that we do know is that if every row of $A$ has a pivot, the system $A\mathbf{x} = \mathbf{b}$ is consistent. Also, keep in mind that the number of rows of $A$ equals the number of components of each vector in the columns of $A$. Thus, if $A$ is an $m \times n$ matrix ($m$ rows and $n$ columns), then $A\mathbf{x} = \mathbf{b}$ is consistent for every $\mathbf{b}$ in $\mathbb{R}^m$ if and only if every vector $\mathbf{b}$ in $\mathbb{R}^m$ can be written as a linear combination of the $n$ columns of $A$ (i.e., if and only if the columns of $A$ span $\mathbb{R}^m$).

Theorem: Let $A$ be an $m \times n$ matrix. Then the following statements are logically equivalent. That is, for a particular $A$, either they are all true statements or they are all false.

a. For each $\mathbf{b}$ in $\mathbb{R}^m$, the equation $A\mathbf{x} = \mathbf{b}$ has a solution.
b. Each $\mathbf{b}$ in $\mathbb{R}^m$ is a linear combination of the columns of $A$.
c. The columns of $A$ span $\mathbb{R}^m$.
d. $A$ has a pivot position in every row.

Caution: The theorem above assumes that $A$ is a coefficient matrix. It generally does not hold for augmented matrices. Keep in mind that an augmented matrix having a pivot in every row does not guarantee that the corresponding system will have a solution.

Example: Consider the matrix

$$A = \begin{bmatrix} 2 & 1 & 0 \\ 6 & -3 & -1 \end{bmatrix}.$$

Do the columns of $A$ span $\mathbb{R}^2$?

Solution: To answer this question, we will need to reduce $A$ to echelon form:

$$A = \begin{bmatrix} 2 & 1 & 0 \\ 6 & -3 & -1 \end{bmatrix} \sim \begin{bmatrix} 2 & 1 & 0 \\ 0 & -6 & -1 \end{bmatrix}$$

The pivots are the leading entries $2$ and $-6$. Since each row of the echelon matrix has a pivot, we conclude that the columns of $A$ span $\mathbb{R}^2$.

Example: Consider the matrix

$$A = \begin{bmatrix} 2 & 1 & -3 & 5 \\ 1 & 4 & 2 & 6 \\ 0 & 3 & 3 & 3 \end{bmatrix}.$$

Do the columns of $A$ span $\mathbb{R}^3$?
Solution: As in the previous example, we will need to reduce $A$ to echelon form:

$$A = \begin{bmatrix} 2 & 1 & -3 & 5 \\ 1 & 4 & 2 & 6 \\ 0 & 3 & 3 & 3 \end{bmatrix} \sim \begin{bmatrix} 1 & 4 & 2 & 6 \\ 2 & 1 & -3 & 5 \\ 0 & 3 & 3 & 3 \end{bmatrix} \sim \begin{bmatrix} 1 & 4 & 2 & 6 \\ 0 & -7 & -7 & -7 \\ 0 & 3 & 3 & 3 \end{bmatrix} \sim \begin{bmatrix} 1 & 4 & 2 & 6 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

Since the echelon matrix does not have a pivot in every row, we conclude that the columns of $A$ do not span $\mathbb{R}^3$. In other words, there exists a vector in $\mathbb{R}^3$ that is not a linear combination of the columns of $A$.

Definition: The $n \times n$ identity matrix, $I_n$, has ones on the main diagonal and zeros everywhere else.

$$I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Note that $I_n$ has the property that $I_n\mathbf{x} = \mathbf{x}$ for all $\mathbf{x}$ in $\mathbb{R}^n$.

Example:

$$\underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{I_3} \underbrace{\begin{bmatrix} 2 \\ 5 \\ 3 \end{bmatrix}}_{\mathbf{x}} = \underbrace{\begin{bmatrix} 2 \\ 5 \\ 3 \end{bmatrix}}_{\mathbf{x}}$$

### Properties of the Matrix-Vector Product Ax

Theorem: If $A$ is an $m \times n$ matrix, $\mathbf{u}$ and $\mathbf{v}$ are vectors in $\mathbb{R}^n$, and $c$ is a scalar, then:

(a) $A(\mathbf{u} + \mathbf{v}) = A\mathbf{u} + A\mathbf{v}$
(b) $A(c\mathbf{u}) = c(A\mathbf{u})$

Properties (a) and (b) given in the above theorem tell us that the matrix $A$ is an example of a linear transformation. We will study linear transformations in Sections 1.8 and 1.9.

## Section 1.5: Solution Sets of Linear Systems

Definition: A system of linear equations is said to be homogeneous if it can be written in the form $A\mathbf{x} = \mathbf{0}$, where $A$ is an $m \times n$ matrix and $\mathbf{0}$ is the zero vector in $\mathbb{R}^m$.

Note that $\mathbf{x} = \mathbf{0}$ is a solution to $A\mathbf{x} = \mathbf{0}$, which means that $A\mathbf{x} = \mathbf{0}$ is always consistent. The solution $\mathbf{x} = \mathbf{0}$ is called the trivial solution. The trivial solution is the least interesting of all solutions. We particularly would like to know if a given system $A\mathbf{x} = \mathbf{0}$ has at least one nontrivial solution, which is a nonzero solution of the system.

Fact: The homogeneous equation $A\mathbf{x} = \mathbf{0}$ has a nontrivial solution if and only if the equation has at least one free variable.

Note that this fact should immediately make sense to us. If the equation does not have at least one free variable, the system has a unique solution, which is the zero vector.
The fact also tells us that if one nontrivial solution of the homogeneous equation is known, then the equation must have infinitely many nontrivial solutions. (Recall the Existence and Uniqueness Theorem in Section 1.2.)

Example: Determine if the equation $A\mathbf{x} = \mathbf{0}$ has a nontrivial solution, where

$$A = \begin{bmatrix} 1 & 3 & 0 & 4 \\ 2 & 6 & 1 & 10 \\ 1 & 3 & 1 & 6 \end{bmatrix}.$$

Then describe the solution set.

Solution: Let's set up the associated augmented matrix and reduce it to reduced echelon form:

$$\begin{bmatrix} A & \mathbf{0} \end{bmatrix} = \left[\begin{array}{cccc|c} 1 & 3 & 0 & 4 & 0 \\ 2 & 6 & 1 & 10 & 0 \\ 1 & 3 & 1 & 6 & 0 \end{array}\right] \sim \left[\begin{array}{cccc|c} 1 & 3 & 0 & 4 & 0 \\ 0 & 0 & 1 & 2 & 0 \\ 0 & 0 & -1 & -2 & 0 \end{array}\right] \sim \left[\begin{array}{cccc|c} 1 & 3 & 0 & 4 & 0 \\ 0 & 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$$

The last augmented matrix is in reduced echelon form. Note that the pivot columns are columns 1 and 3, which means that $x_1$ and $x_3$ are basic variables. The other two variables, $x_2$ and $x_4$, are free variables. Since there is at least one free variable, the system $A\mathbf{x} = \mathbf{0}$ has a nontrivial solution.

Let's now determine the solution set of the system. From the reduced echelon matrix, we have

$$\begin{aligned} x_1 + 3x_2 + 4x_4 &= 0 \\ x_3 + 2x_4 &= 0 \end{aligned}$$

From here, we solve for each basic variable. Doing this will result in the free variables appearing on the right-hand side of each equation. Thus, we obtain

$$\begin{aligned} x_1 &= -3x_2 - 4x_4 \\ x_3 &= -2x_4 \end{aligned}$$

Next, we will express the solution vector $\mathbf{x}$ so that each of its components is in terms of the free variables $x_2$ and $x_4$:

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} -3x_2 - 4x_4 \\ x_2 \\ -2x_4 \\ x_4 \end{bmatrix}$$

Let's split this vector up into two vectors that separate the coefficients of $x_2$ from the coefficients of $x_4$:

$$\mathbf{x} = \begin{bmatrix} -3x_2 - 4x_4 \\ x_2 \\ -2x_4 \\ x_4 \end{bmatrix} = \begin{bmatrix} -3x_2 \\ x_2 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} -4x_4 \\ 0 \\ -2x_4 \\ x_4 \end{bmatrix}$$

Our last step is to factor out the variables $x_2$ and $x_4$ from each vector:

$$\mathbf{x} = x_2\underbrace{\begin{bmatrix} -3 \\ 1 \\ 0 \\ 0 \end{bmatrix}}_{\mathbf{v}_1} + x_4\underbrace{\begin{bmatrix} -4 \\ 0 \\ -2 \\ 1 \end{bmatrix}}_{\mathbf{v}_2} \tag{1}$$

Equation (1) expresses the solution vector in parametric vector form, where the free variables $x_2$ and $x_4$ are regarded as parameters. This equation shows that $\text{Span}\{\mathbf{v}_1, \mathbf{v}_2\}$ contains all solution vectors $\mathbf{x}$ of the system $A\mathbf{x} = \mathbf{0}$.
In other words, x is a solution of Ax = 0 if and only if x is in $\mathrm{Span}\{v_1, v_2\}$. Geometrically, the solution set is the plane through the origin in $\mathbb{R}^4$ spanned by $v_1$ and $v_2$. □

Solutions of Nonhomogeneous Systems

Example: Describe all solutions of Ax = b, where

$$A = \begin{bmatrix} 2 & 4 \\ -4 & -8 \end{bmatrix} \quad\text{and}\quad b = \begin{bmatrix} 6 \\ -12 \end{bmatrix}.$$

Compare these solutions to the solutions of Ax = 0.

Solution: Let's set up the associated augmented matrix and reduce it to reduced echelon form:

$$\begin{bmatrix} A & b \end{bmatrix} = \begin{bmatrix} 2 & 4 & 6 \\ -4 & -8 & -12 \end{bmatrix} \sim \begin{bmatrix} 1 & 2 & 3 \\ -1 & -2 & -3 \end{bmatrix} \sim \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \end{bmatrix}$$

The last augmented matrix is in reduced echelon form. Note that the first column is the only pivot column, which means that $x_1$ is a basic variable. The other variable, $x_2$, is a free variable. Since there is at least one free variable, the system Ax = b has infinitely many solutions.

Let's now determine the solution set of the system. From the reduced echelon matrix, we have

$$x_1 + 2x_2 = 3.$$

Solving for $x_1$ (the basic variable) yields

$$x_1 = 3 - 2x_2.$$

Next, we will express the solution vector, x, so that each of its components is in terms of the free variable $x_2$:

$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 3 - 2x_2 \\ x_2 \end{bmatrix}$$

Lastly, we will express x in parametric vector form:

$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 3 - 2x_2 \\ x_2 \end{bmatrix} = \underbrace{\begin{bmatrix} 3 \\ 0 \end{bmatrix}}_{p} + x_2 \underbrace{\begin{bmatrix} -2 \\ 1 \end{bmatrix}}_{v}$$

Geometrically, the solution set of Ax = b is the line through the point (3, 0) in $\mathbb{R}^2$ parallel to the vector v. (See Figure 1.)

By applying the same procedure to the system Ax = 0, it can be shown that the solution of Ax = 0 is

$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_2 \underbrace{\begin{bmatrix} -2 \\ 1 \end{bmatrix}}_{v}.$$

Let's now compare the solutions of Ax = b to the solutions of Ax = 0:

$$\text{Nonhomogeneous: } Ax = b \implies \text{Solutions: } x = \underbrace{\begin{bmatrix} 3 \\ 0 \end{bmatrix}}_{p} + x_2 \underbrace{\begin{bmatrix} -2 \\ 1 \end{bmatrix}}_{v}$$

$$\text{Homogeneous: } Ax = 0 \implies \text{Solutions: } x = x_2 \underbrace{\begin{bmatrix} -2 \\ 1 \end{bmatrix}}_{v}$$

Notice that the solution set of Ax = b is a shifted version of the solution set of Ax = 0. To obtain the solution set of Ax = b, we take the solution set of Ax = 0 (which is the line through 0 in $\mathbb{R}^2$ spanned by the vector v) and shift it in the direction specified by the vector p.
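As a quick numerical illustration (not part of the original notes; the helper name `matvec` is ours), the snippet below checks that $p + t\,v$ solves Ax = b for any parameter t, while $t\,v$ alone solves Ax = 0:

```python
# The solution set of Ax = b is the homogeneous solution set shifted by p.
A = [[2, 4],
     [-4, -8]]
b = [6, -12]
p = [3, 0]    # particular solution
v = [-2, 1]   # spans the homogeneous solution set

def matvec(M, x):
    """Multiply matrix M by vector x (plain Python lists)."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

for t in [-2, 0, 1, 5]:
    x = [p[i] + t * v[i] for i in range(2)]
    assert matvec(A, x) == b                       # p + t*v solves Ax = b
    assert matvec(A, [t * vi for vi in v]) == [0, 0]  # t*v solves Ax = 0

print("p + t*v solves Ax = b; t*v solves Ax = 0")
```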
Since we are only shifting here (there are no reflections, rotations, or nonrigid transformations), the two solution sets are parallel to each other. (See Figure 1.) □

Figure 1: Parallel solution sets of Ax = b and Ax = 0.

Theorem: In any dimension, the solution set of Ax = 0 passes through the origin. The solution set of Ax = b is parallel to the solution set of Ax = 0.

This theorem tells us that if the solution set of Ax = 0 is a plane through the origin, then the solution set of Ax = b (for b ≠ 0) is a different, parallel plane whose location relative to the origin is determined by the vector p. The two planes never intersect. This idea generalizes to solution sets of higher dimensions.

Recall the following definitions:

Definition: A mapping $T : \mathbb{R}^n \to \mathbb{R}^m$ is said to be onto $\mathbb{R}^m$ if each b in $\mathbb{R}^m$ is the image of at least one x in $\mathbb{R}^n$. Recall that the range of T is generally a subset of the codomain $\mathbb{R}^m$; T is onto if and only if the range of T is exactly the same set as the codomain.

Definition: A mapping $T : \mathbb{R}^n \to \mathbb{R}^m$ is said to be one-to-one if each b in $\mathbb{R}^m$ is the image of at most one x in $\mathbb{R}^n$.

The following theorem shows us how we can determine if a linear transformation is one-to-one or onto.

Theorem: Let $T : \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation and let A be the standard matrix for T. Then

a. T maps $\mathbb{R}^n$ onto $\mathbb{R}^m$ if and only if the columns of A span $\mathbb{R}^m$.
b. T is one-to-one if and only if the columns of A are linearly independent.

To see the connection between onto and span, and the connection between one-to-one and linear independence, let's prove this theorem. Keep in mind that T is assumed to be a linear transformation; therefore, it can be written in the form T(x) = Ax.

Proof:

a. Let's assume that $T : \mathbb{R}^n \to \mathbb{R}^m$ is onto. Then by definition, there exists at least one vector x in $\mathbb{R}^n$ such that T(x) = b for every vector b in $\mathbb{R}^m$. In other words, the system Ax = b is consistent for all b in $\mathbb{R}^m$, which is true if and only if the columns of A span $\mathbb{R}^m$ (recall this fact from a theorem in Section 1.4).
Therefore, T is onto if and only if the columns of A span $\mathbb{R}^m$.

b. Let's first assume that T is one-to-one. Then by definition, T(x) = b has at most one solution x in $\mathbb{R}^n$ for every b in $\mathbb{R}^m$. Since T is linear, T(0) = 0. Therefore, the equation T(x) = 0 has only the trivial solution. Remember that this means that the columns of A are linearly independent.

Conversely, assume that the columns of A are linearly independent. Under this assumption, we want to show that T is one-to-one. Suppose that u and v are vectors in $\mathbb{R}^n$ that map to the same vector in $\mathbb{R}^m$. In other words, T(u) = T(v), which is equivalent to T(u) − T(v) = 0. Since T is linear, we have

$$T(u) - T(v) = 0 \implies T(u - v) = 0.$$

Since we assumed that T(x) = 0 has only the trivial solution, we must have u − v = 0, or equivalently, u = v. This shows that for every b in $\mathbb{R}^m$, there cannot be more than one x in $\mathbb{R}^n$ that maps to b. By definition, this means that T is one-to-one. □

Section 2.1: Matrix Operations

A matrix, as we know, is a rectangular array of elements (numbers). An m × n matrix has the form

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}$$

Notes about the matrix A:

- The entry of A in the i-th row and j-th column is denoted by $a_{ij}$.
- The diagonal entries of A are $a_{11}, a_{22}, a_{33}, \ldots$.
- The main diagonal of A is the part of the matrix that consists of only its diagonal entries.
- If m = n (the number of rows is the same as the number of columns), then A is called a square matrix, or an n × n matrix.
- If A is a square matrix whose nondiagonal entries are all zero, then A is called a diagonal matrix.

Example: Consider the following matrices:

$$A = \begin{bmatrix} 3 & -2 & 5 & 4 \\ 6 & 7 & -4 & 9 \\ -9 & 1 & 8 & 5 \\ 2 & 6 & 0 & 7 \end{bmatrix} \qquad B = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$

Both A and B are square matrices, but only B is a diagonal matrix.

Addition and Scalar Multiplication

Note: Given two matrices, A and B, their sum A + B is defined if and only if the size of A and the size of B are the same; the sum is computed entrywise.
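The size rule for matrix addition, together with scalar multiplication, can be sketched in a few lines of Python (an illustration only; the helper names `add` and `scale` are ours, not from the notes):

```python
# Entrywise matrix addition with the size check, and scalar multiplication.
def add(A, B):
    # A + B is defined only when A and B have the same size
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("A + B undefined: sizes differ")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(r, A):
    # rA multiplies every entry of A by the scalar r
    return [[r * a for a in row] for row in A]

# The diagonal matrix B from the example above:
B = [[2, 0, 0],
     [0, 5, 0],
     [0, 0, 3]]
assert add(B, B) == scale(2, B)  # B + B = 2B
```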
If r is any scalar, then the matrix rA is obtained by multiplying every entry of A by r.

Example: Let

$$A = \begin{bmatrix} 4 & -1 \\ 2 & -3 \\ 7 & 0 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 3 & -1 \\ 5 & 0 \\ 0 & 2 \end{bmatrix}.$$

Find 3A and 3A + 2B.

Solution: To obtain the matrix 3A, multiply each entry of A by 3:

$$3A = \begin{bmatrix} 12 & -3 \\ 6 & -9 \\ 21 & 0 \end{bmatrix}$$

Note that

$$2B = \begin{bmatrix} 6 & -2 \\ 10 & 0 \\ 0 & 4 \end{bmatrix}.$$

Therefore,

$$3A + 2B = \begin{bmatrix} 12 & -3 \\ 6 & -9 \\ 21 & 0 \end{bmatrix} + \begin{bmatrix} 6 & -2 \\ 10 & 0 \\ 0 & 4 \end{bmatrix} = \begin{bmatrix} 18 & -5 \\ 16 & -9 \\ 21 & 4 \end{bmatrix}$$

□

Theorem: Let A, B, and C be matrices of the same size, and let r and s be scalars. Then

a. A + B = B + A
b. (A + B) + C = A + (B + C)
c. A + 0 = A
d. r(A + B) = rA + rB
e. (r + s)A = rA + sA
f. r(sA) = (rs)A

Matrix Multiplication

Definition: If A is an m × n matrix, and if B is an n × p matrix with columns $b_1, \ldots, b_p$, then the product AB is the m × p matrix whose columns are $Ab_1, \ldots, Ab_p$. That is,

$$AB = A \begin{bmatrix} b_1 & b_2 & \cdots & b_p \end{bmatrix} = \begin{bmatrix} Ab_1 & Ab_2 & \cdots & Ab_p \end{bmatrix}$$

Example: Let

$$A = \begin{bmatrix} 3 & 1 \\ -2 & 0 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} -1 & 0 & 2 \\ 4 & -3 & -1 \end{bmatrix}.$$

Compute AB and BA if they exist.

Solution: Let's check first if AB is defined. Let's put the sizes of A and B together and find out:

$$\underbrace{(2 \times 2)}_{\text{size of } A} \underbrace{(2 \times 3)}_{\text{size of } B} \implies \underbrace{(2 \times 3)}_{\text{size of } AB}$$

Since the inner dimensions are equal (i.e., the number of columns of A is the same as the number of rows of B), AB is defined, and its size is determined by the outer dimensions. Denote the columns of B by $b_1$, $b_2$, and $b_3$, respectively. Therefore,

$$Ab_1 = \begin{bmatrix} 3 & 1 \\ -2 & 0 \end{bmatrix}\begin{bmatrix} -1 \\ 4 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \qquad Ab_2 = \begin{bmatrix} 3 & 1 \\ -2 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ -3 \end{bmatrix} = \begin{bmatrix} -3 \\ 0 \end{bmatrix} \qquad Ab_3 = \begin{bmatrix} 3 & 1 \\ -2 & 0 \end{bmatrix}\begin{bmatrix} 2 \\ -1 \end{bmatrix} = \begin{bmatrix} 5 \\ -4 \end{bmatrix}$$

Therefore,

$$AB = \begin{bmatrix} Ab_1 & Ab_2 & Ab_3 \end{bmatrix} = \begin{bmatrix} 1 & -3 & 5 \\ 2 & 0 & -4 \end{bmatrix}$$

The product BA is not defined, since the number of columns of B is not the same as the number of rows of A. Note that this example shows that given two matrices A and B, the products AB and BA are generally not equal. □

Row-Column Rule For Computing
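The column definition of AB translates directly into code. Below is a small Python sketch (illustration only; the helper name `matmul` is ours) that builds the product one column at a time, exactly as in the definition above:

```python
# The j-th column of AB is A times the j-th column of B; AB is defined
# only when the number of columns of A equals the number of rows of B.
def matmul(A, B):
    if len(A[0]) != len(B):
        raise ValueError("AB undefined: inner dimensions differ")
    cols = []
    for j in range(len(B[0])):
        bj = [row[j] for row in B]  # j-th column of B
        cols.append([sum(a * x for a, x in zip(rowA, bj)) for rowA in A])
    # reassemble the columns Ab_1, ..., Ab_p into rows
    return [[cols[j][i] for j in range(len(cols))] for i in range(len(A))]

A = [[3, 1],
     [-2, 0]]
B = [[-1, 0, 2],
     [4, -3, -1]]

assert matmul(A, B) == [[1, -3, 5], [2, 0, -4]]  # matches the example

# BA is not defined: B is 2x3 and A is 2x2 (3 != 2).
try:
    matmul(B, A)
except ValueError:
    print("BA is undefined, as expected")
```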