5.SE.1E: Mark each statement as True or False. Justify each answer. a. If A i...
5.SE.2E: Show that if x is an eigenvector of the matrix product AB and Bx ≠ ...
 5.SE.3E: Suppose x is an eigenvector of A corresponding to an eigenvalue a. ...
5.SE.4E: Use mathematical induction to show that if λ is an eigenvalue of A^m, ...
5.SE.5E: If p(t) = c0 + c1t + c2t^2 + ... + c_n t^n, define p(A) to be the matrix form...
 5.SE.6E: a. Let B = 5I 2. Show that B is diagonalizable by finding a suitabl...
 5.SE.7E: Suppose A is diagonalizable and p (t) is the characteristic polynom...
 5.SE.8E: a. Let A be a diagonalizable n × n matrix. Show that if the multipl...
 5.SE.9E: Show that I – A is invertible when all the eigenvalues of A are les...
 5.SE.10E: Show that if A is diagonalizable, with all eigenvalues less than 1 ...
5.SE.11E: Let u be an eigenvector of A corresponding to an eigenvalue λ and let...
 5.SE.12E: Use formula (1) for the determinant in Section 5.2 to explain why d...
 5.SE.13E: Use Exercise 12 to find the eigenvalues of the matrices in Exercise...
 5.SE.14E: Use Exercise 12 to find the eigenvalues of the matrices in Exercise...
 5.SE.15E: Let J be the n × n matrix of all 1’s, and consider Use the results ...
 5.SE.16E: Apply the result of Exercise 15 to find the eigenvalues of the matr...
5.SE.17E: Let A be as in the exercise. Recall from Exercise 25 in Section 5.4 that tr A (the trace of A)...
 5.SE.18E:
5.SE.19E: Exercises 19–23 concern the polynomial p(t) = a0 + a1t + ... + a_(n-1) t^(n-1) + t^n and...
5.SE.20E: Exercises 19–23 concern the polynomial p(t) = a0 + a1t + ... + a_(n-1) t^(n-1)...
5.SE.21E: Exercises 19–23 concern the polynomial p(t) = a0 + a1t + ... + a_(n-1) t^(n-1)...
 5.SE.22E: Exercises 19–23 concern the polynomial and an n × n matrix Cp calle...
 5.SE.23E: Exercises 19–23 concern the polynomial and an n × n matrix Cp calle...
 5.SE.24E: [M] The MATLAB command roots(p) computes the roots of the polynomia...
 5.SE.25E: [M] Use a matrix program to diagonalize if possible. Use the eigenv...
 5.SE.26E: [M] Repeat Exercise 25 for Reference:[M] Use a matrix program to di...
Solutions for Chapter 5.SE: Linear Algebra and Its Applications 4th Edition
ISBN: 9780321385178
Chapter 5.SE includes 26 full step-by-step solutions.

Cayley–Hamilton Theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.
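A quick numerical sketch of this entry (the 2 × 2 matrix below is made up): for a 2 × 2 matrix, p(λ) = λ² − (tr A)λ + det A, and evaluating p at A itself gives the zero matrix.

```python
import numpy as np

# Made-up 2x2 example; for 2x2, p(lambda) = lambda^2 - (tr A) lambda + det A.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
# p(A) = A^2 - (tr A) A + (det A) I should be the zero matrix.
pA = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
```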

Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C (A).

Covariance matrix Σ.
When random variables xi have mean = average value = 0, their covariances Σij are the averages of xi xj. With means x̄i, the matrix Σ = mean of (x − x̄)(x − x̄)^T is positive (semi)definite; Σ is diagonal if the xi are independent.
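A small sketch of this entry (the sample data is made up): subtract the column means from a data matrix, form Σ, and check that it is symmetric positive semidefinite.

```python
import numpy as np

# Rows of X are samples; columns are the random variables x_i (made-up data).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
Xc = X - X.mean(axis=0)              # subtract the means xbar_i
Sigma = (Xc.T @ Xc) / len(Xc)        # mean of (x - xbar)(x - xbar)^T
eigvals = np.linalg.eigvalsh(Sigma)  # all >= 0 for a positive (semi)definite matrix
```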

Distributive Law
A(B + C) = AB + AC. Add then multiply, or multiply then add.

Dot product = Inner product x^T y = x1y1 + ... + xnyn.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)ij = (row i of A) · (column j of B).

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra −ℓij in the i, j entry (i ≠ j). Then Eij A subtracts ℓij times row j of A from row i.
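A sketch of this entry in NumPy (the example matrix is made up): placing −ℓ in the (3, 1) entry of the identity gives a matrix whose product with A subtracts ℓ times row 1 from row 3.

```python
import numpy as np

ell = 2.0
E = np.eye(3)
E[2, 0] = -ell                       # E31 with -ell in the (3, 1) entry
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [2.0, 1.0, 0.0]])
EA = E @ A                           # row 3 of EA = row 3 of A - ell * row 1
```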

Factorization A = LU.
If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
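The elimination story above can be sketched directly (the 3 × 3 example is made up and happens to need no row exchanges): record each multiplier ℓij in L while reducing A to U.

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
U = A.copy()
L = np.eye(3)
for j in range(2):                    # eliminate below the pivots in columns 0, 1
    for i in range(j + 1, 3):
        L[i, j] = U[i, j] / U[j, j]   # multiplier l_ij = entry / pivot
        U[i] -= L[i, j] * U[j]        # subtract l_ij times pivot row j
```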

Free variable xi.
Column i has no pivot in elimination. We can give the n − r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Gram–Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column qj of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
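A minimal classical Gram–Schmidt sketch (the example columns are made up), recording R as it goes so that A = QR with diag(R) > 0:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
m, n = A.shape
Q = np.zeros((m, n))
R = np.zeros((n, n))
for j in range(n):
    v = A[:, j].copy()
    for i in range(j):
        R[i, j] = Q[:, i] @ A[:, j]   # component of column j along earlier q_i
        v -= R[i, j] * Q[:, i]        # subtract it off
    R[j, j] = np.linalg.norm(v)       # convention: diag(R) > 0
    Q[:, j] = v / R[j, j]
```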

Hilbert matrix hilb(n).
Entries Hij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but extremely small λmin and large condition number: H is ill-conditioned.
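A sketch building hilb(n) from the entry formula (hilb is MATLAB's name; the NumPy helper below is our own):

```python
import numpy as np

def hilb(n):
    # np.indices gives 0-based i, j, so 1/(i + j + 1) matches
    # H_ij = 1/(i + j - 1) with 1-based indices.
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

H = hilb(5)
lam_min = np.linalg.eigvalsh(H).min()  # positive but tiny
cond = np.linalg.cond(H)               # already huge at n = 5: ill-conditioned
```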

Least squares solution x̂.
The vector x̂ that minimizes the error ‖e‖² solves A^T A x̂ = A^T b. Then e = b − Ax̂ is orthogonal to all columns of A.
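A worked sketch of this entry (A and b are made up): solve the normal equations, then check that the error is orthogonal to every column of A.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])
xhat = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations A^T A xhat = A^T b
e = b - A @ xhat                           # error, orthogonal to the columns of A
```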

Linear combination cv + dw or Σ cj vj.
Vector addition and scalar multiplication.

Multiplier ℓij.
The pivot row j is multiplied by ℓij and subtracted from row i to eliminate the i, j entry: ℓij = (entry to eliminate)/(jth pivot).

Norm ‖A‖.
The "ℓ2 norm" of A is the maximum ratio ‖Ax‖/‖x‖ = σmax. Then ‖Ax‖ ≤ ‖A‖‖x‖, ‖AB‖ ≤ ‖A‖‖B‖, and ‖A + B‖ ≤ ‖A‖ + ‖B‖. Frobenius norm: ‖A‖F² = Σ Σ aij². The ℓ1 and ℓ∞ norms are the largest column and row sums of |aij|.
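The four norms described in this entry, computed with NumPy on a made-up 2 × 2 matrix:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
two_norm = np.linalg.norm(A, 2)        # sigma_max, the l2 operator norm
fro_norm = np.linalg.norm(A, 'fro')    # sqrt of the sum of a_ij^2
one_norm = np.linalg.norm(A, 1)        # largest column sum of |a_ij|
inf_norm = np.linalg.norm(A, np.inf)   # largest row sum of |a_ij|
```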

Outer product uv^T
= column times row = rank one matrix.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P² = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S then P = A(A^T A)^(−1) A^T.
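A sketch of this entry (A and b are made up): build P from a basis of S and check the projection-matrix properties.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                 # columns: a basis for S
P = A @ np.linalg.inv(A.T @ A) @ A.T       # P = A (A^T A)^(-1) A^T
b = np.array([6.0, 0.0, 0.0])
p = P @ b                                  # closest point to b in S
e = b - p                                  # error, perpendicular to S
```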

Symmetric matrix A.
The transpose is A^T = A, and aij = aji. A^(−1) is also symmetric.

Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
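Both identities can be spot-checked numerically (the matrices are made up); note Tr AB = Tr BA holds even when AB and BA have different sizes.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))            # AB is 2x2, BA is 3x3, traces agree
C = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eig_sum = np.linalg.eigvals(C).sum().real  # sum of eigenvalues of C
```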

Unitary matrix U^H = Ū^T = U^(−1).
Orthonormal columns (complex analog of Q).