 1.1.1E: Solve each system in Exercises 1–4 by using elementary row operatio...
 1.1.2E: Solve each system in Exercises 1–4 by using elementary row operatio...
 1.1.3E: Solve each system in Exercises 1–4 by using elementary row operatio...
 1.1.4E: Find the point of intersection of the lines x1 − 5x2 = 1 and 3x1 − ...
 1.1.5E: Consider each matrix in Exercises 5 and 6 as the augmented matrix o...
 1.1.6E: Consider each matrix in Exercises 5 and 6 as the augmented matrix o...
 1.1.7E: In Exercises 7–10, the augmented matrix of a linear system has been...
 1.1.8E: In Exercises 7–10, the augmented matrix of a linear system has been...
 1.1.9E: In Exercises 7–10, the augmented matrix of a linear system has been...
 1.1.10E: In Exercises 7–10, the augmented matrix of a linear system has been...
 1.1.11E: Solve the systems in Exercises
 1.1.12E: Solve the systems in Exercises 11–14. x1 − 5x2 + 4x3 = −3, 2x1 − 7x2 + ...
 1.1.13E: Solve the systems in Exercises 11–14. x1 − 3x3 = 8, 2x1 + 2x2 + 9x3 = 7, ...
 1.1.14E: Solve the systems in Exercises
 1.1.15E: Determine if the systems are consistent. Do not completely solve th...
 1.1.16E: Determine if the systems are consistent. Do not completely solve th...
 1.1.17E: Do the three lines x1 − 4x2 = 1, 2x1 − x2 = −3, and x1 − 3x2 = 4 ha...
 1.1.18E: Do the three planes x1 + 2x2 + x3 = 4, x2 − x3 = 1, and x1 + 3x2 = ...
 1.1.19E: In Exercises 19–22, determine the value(s) of h such that the matri...
 1.1.20E: Determine the value(s) of h such that the matrix is the augmented m...
 1.1.21E: Determine the value(s) of h such that the matrix is the augmented m...
 1.1.22E: Determine the value(s) of h such that the matrix is the augmented m...
 1.1.23E: In Exercises 23 and 24, key statements from this section are either...
 1.1.24E: a. Elementary row operations on an augmented matrix never change th...
 1.1.25E: Find an equation involving g, h, and k that makes this augmented ma...
 1.1.26E: Construct three different augmented matrices for linear systems who...
 1.1.27E: Suppose the system below is consistent for all possible values of f...
 1.1.28E: Suppose a, b, c, and d are constants such that a is not zero and th...
 1.1.29E: Find the elementary row operation that transforms the first matrix ...
 1.1.30E: Find the elementary row operation that transforms the first matrix ...
 1.1.31E: In Exercises 29–32, find the elementary row operation that transfor...
 1.1.32E: Find the elementary row operation that transforms the first matrix ...
 1.1.33E: An important concern in the study of heat transfer is to determine ...
 1.1.34E: An important concern in the study of heat transfer is to determine ...
Solutions for Chapter 1.1: Linear Algebra and Its Applications 5th Edition
ISBN: 9780321982384
Chapter 1.1 of Linear Algebra and Its Applications, 5th Edition (ISBN 9780321982384) contains 34 exercises, each with a full step-by-step solution.

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
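The rank test in this definition is easy to check numerically. A minimal numpy sketch with an illustrative 3×3 system (the matrix and right-hand side are made-up data, not from the exercises above):

```python
import numpy as np

# Illustrative system; A and b are made-up data.
A = np.array([[1.0, -2.0,  1.0],
              [0.0,  2.0, -8.0],
              [5.0,  0.0, -5.0]])
b = np.array([0.0, 8.0, 10.0])

# Ax = b is solvable exactly when appending b does not raise the rank:
# rank([A b]) == rank(A) means b lies in the column space of A.
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
consistent = (rank_A == rank_Ab)
```

Here A is invertible (rank 3), so the system is consistent for every b.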

Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
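A quick numerical check of block multiplication, partitioning two illustrative 4×4 matrices into 2×2 blocks (random data, numpy assumed):

```python
import numpy as np

# Partition two 4x4 matrices into 2x2 blocks and check that blockwise
# multiplication matches ordinary matrix multiplication.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

# Block (1,1) of AB is A11 B11 + A12 B21, and similarly for the others.
top = np.hstack([A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22])
bot = np.hstack([A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22])
block_product = np.vstack([top, bot])
```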

Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c0 I + c1 S + ... + c(n−1) S^(n−1). Cx = convolution c * x. Eigenvectors are in the Fourier matrix F.
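This definition can be checked numerically: build a small circulant from powers of the cyclic shift and compare Cx with the circular convolution c * x, computed here via the FFT (illustrative coefficients, numpy assumed):

```python
import numpy as np

# Build a 4x4 circulant C = c0 I + c1 S + ... + c3 S^3 from the cyclic shift S.
c = np.array([2.0, 1.0, 0.0, 3.0])   # illustrative coefficients
n = len(c)
S = np.roll(np.eye(n), 1, axis=0)    # cyclic shift: S e_k = e_{k+1 mod n}
C = sum(c[k] * np.linalg.matrix_power(S, k) for k in range(n))

# Cx equals the circular convolution c * x (convolution theorem).
x = np.array([1.0, -1.0, 2.0, 0.5])
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
```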

Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C (A).

Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.

Independent vectors v1, ..., vk.
No combination c1 v1 + ... + ck vk = zero vector unless all ci = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
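A numerical sketch of this test (illustrative vectors, numpy assumed): the columns of A are independent exactly when rank(A) equals the number of columns.

```python
import numpy as np

# Columns of A are v1, v2, v3; here v3 = 2*v1 + v2, so they are dependent.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 3.0]])

rank = np.linalg.matrix_rank(A)
independent = (rank == A.shape[1])

# A nonzero combination giving the zero vector: 2*v1 + 1*v2 - 1*v3 = 0.
x = np.array([2.0, 1.0, -1.0])
combo = A @ x
```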

Jordan form J = M^(−1) A M.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J1, ..., Js). The block Jk is λk Ik + Nk where Nk has 1's on diagonal 1. Each block has one eigenvalue λk and one eigenvector.

Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
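The normal equations are a one-liner to verify numerically. A minimal sketch with an illustrative overdetermined 3×2 system (made-up data, numpy assumed):

```python
import numpy as np

# Overdetermined illustrative system: 3 equations, 2 unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Solve the normal equations A^T A x_hat = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# The error e = b - A x_hat is orthogonal to every column of A.
e = b - A @ x_hat
```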

Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).

Linearly dependent v1, ..., vn.
A combination other than all ci = 0 gives Σ ci vi = 0.

Lucas numbers Ln.
L0, L1, L2, L3, ... = 2, 1, 3, 4, ... satisfy Ln = L(n−1) + L(n−2) = λ1^n + λ2^n, with λ1, λ2 = (1 ± √5)/2 from the Fibonacci matrix [[1, 1], [1, 0]]. Compare L0 = 2 with F0 = 0.
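The recurrence and the closed form agree, which is easy to confirm for small n (plain Python sketch):

```python
import math

def lucas(n):
    """L0 = 2, L1 = 1, L_n = L_{n-1} + L_{n-2}."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Closed form: L_n = lam1**n + lam2**n with lam1, lam2 = (1 +- sqrt(5))/2.
lam1 = (1 + math.sqrt(5)) / 2
lam2 = (1 - math.sqrt(5)) / 2
closed = [round(lam1**n + lam2**n) for n in range(10)]
recur = [lucas(n) for n in range(10)]
```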

Multiplication Ax
= x1 (column 1) + ... + xn (column n) = combination of columns.
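The column picture of Ax can be verified directly (illustrative data, numpy assumed):

```python
import numpy as np

# Ax computed as x1*(column 1) + x2*(column 2): a combination of columns.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([2.0, -1.0])

by_columns = sum(x[j] * A[:, j] for j in range(A.shape[1]))
```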

Nullspace matrix N.
The columns of N are the n − r special solutions to As = 0.

Particular solution x p.
Any solution to Ax = b; often x p has free variables = o.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S then P = A (A^T A)^(−1) A^T.
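All of these projection properties can be spot-checked at once (illustrative basis, numpy assumed):

```python
import numpy as np

# Columns of A are an illustrative basis for S; P = A (A^T A)^{-1} A^T.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

b = np.array([1.0, 3.0, 2.0])
p = P @ b             # closest point to b in S
e = b - p             # error, perpendicular to S
```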

Reflection matrix (Householder) Q = I − 2uu^T.
Unit vector u is reflected to Qu = −u. All x in the mirror plane u^T x = 0 have Qx = x. Notice Q^T = Q^(−1) = Q.
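A short check of the Householder properties for an illustrative unit vector (numpy assumed):

```python
import numpy as np

# Householder reflection Q = I - 2 u u^T for a unit vector u.
u = np.array([1.0, 2.0, 2.0])
u = u / np.linalg.norm(u)            # normalize: u must be a unit vector
Q = np.eye(3) - 2.0 * np.outer(u, u)

reflected = Q @ u                    # u is reflected to -u

# A vector in the mirror plane (u^T v = 0) is left unchanged.
v = np.array([2.0, -1.0, 0.0])
in_plane = Q @ v
```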

Schur complement S = D − C A^(−1) B.
Appears in block elimination on [[A, B], [C, D]].
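A numerical sketch with illustrative 2×2 blocks (numpy assumed). One standard consequence of block elimination, checked here, is det M = det A · det S:

```python
import numpy as np

# Block matrix M = [[A, B], [C, D]] with illustrative 2x2 blocks.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 2.0], [3.0, 4.0]])
C = np.array([[1.0, 0.0], [0.0, 2.0]])
D = np.array([[5.0, 6.0], [7.0, 8.0]])

# Eliminating the (2,1) block leaves the Schur complement S = D - C A^{-1} B.
S = D - C @ np.linalg.inv(A) @ B
M = np.block([[A, B], [C, D]])
```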

Skew-symmetric matrix K.
The transpose is −K, since Kij = −Kji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^(Kt) is an orthogonal matrix.
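The pure-imaginary eigenvalues are easy to observe numerically (illustrative matrix, numpy assumed):

```python
import numpy as np

# Illustrative skew-symmetric K: K.T == -K.
K = np.array([[ 0.0,  2.0, -1.0],
              [-2.0,  0.0,  3.0],
              [ 1.0, -3.0,  0.0]])

eigenvalues = np.linalg.eigvals(K)   # real parts are (numerically) zero
```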

Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.
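The factorization can be reconstructed from a symmetric eigen-decomposition (illustrative matrix, numpy assumed; `eigh` is numpy's routine for symmetric matrices):

```python
import numpy as np

# Symmetric A: eigh gives real eigenvalues lam and orthonormal columns of Q.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, Q = np.linalg.eigh(A)
reconstructed = Q @ np.diag(lam) @ Q.T   # A = Q Lambda Q^T
```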

Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms ||A + B|| ≤ ||A|| + ||B||.
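A spot-check of both inequalities with illustrative data (numpy assumed; an inequality check on examples, not a proof):

```python
import numpy as np

# Vector triangle inequality.
u = np.array([3.0, -1.0, 2.0])
v = np.array([0.5, 4.0, -2.0])
vec_ok = np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)

# Matrix triangle inequality in the 2-norm (largest singular value).
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[0.0, -1.0], [3.0, 2.0]])
mat_ok = np.linalg.norm(A + B, 2) <= np.linalg.norm(A, 2) + np.linalg.norm(B, 2)
```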