 1.5.1: Which of the matrices that follow are elementary matrices? Classify...
 1.5.2: Find the inverse of each matrix in Exercise 1. For each elementary ...
 1.5.3: For each of the following pairs of matrices, find an elementary mat...
 1.5.4: For each of the following pairs of matrices, find an elementary mat...
1.5.5: Let A = [1 2 4; 2 1 3; 1 0 2], B = [1 2 4; 2 1 3; 2 2 6], C = [1 2 4; 0 1 3; ...
1.5.6: Let A = [2 1 1; 6 4 5; 4 1 3]. (a) Find elementary matrices E1, E2, E3 s...
1.5.7: Let A = [2 1; 6 4]. (a) Express A as a product of elementary matrices. ...
 1.5.8: Compute the LU factorization of each of the following matrices: (a)...
1.5.9: Let A = [1 0 1; 3 3 4; 2 2 3]. (a) Verify that A⁻¹ = [1 2 −3; −1 1 −1; 0 −2 3]. (b...
1.5.10: Find the inverse of each of the following matrices: (a) [1 1; 1 0] (b)...
1.5.11: Given A = [3 1; 5 2] and B = [1 2; 3 4], compute A⁻¹ and use it to (a) find...
1.5.12: Let A = [5 3; 3 2], B = [6 2; 2 4], C = [4 2; 6 3]. Solve each of the follow...
 1.5.13: Is the transpose of an elementary matrix an elementary matrix of th...
1.5.14: Let U and R be n×n upper triangular matrices and set T = UR. Show th...
1.5.15: Let A be a 3×3 matrix and suppose that 2a1 + a2 − 4a3 = 0. How many so...
1.5.16: Let A be a 3×3 matrix and suppose that a1 = 3a2 − 2a3. Will the system...
1.5.17: Let A and B be n×n matrices and let C = A − B. Show that if Ax0 = Bx0...
1.5.18: Let A and B be n×n matrices and let C = AB. Prove that if B is singul...
1.5.19: Let U be an n×n upper triangular matrix with nonzero diagonal entri...
1.5.20: Let A be a nonsingular n×n matrix and let B be an n×r matrix. Show t...
1.5.21: In general, matrix multiplication is not commutative (i.e., AB ≠ B...
1.5.22: Show that if A is a symmetric nonsingular matrix, then A⁻¹ is also s...
 1.5.23: Prove that if A is row equivalent to B, then B is row equivalent to...
 1.5.24: (a) Prove that if A is row equivalent to B and B is row equivalent ...
1.5.25: Let A and B be m×n matrices. Prove that if B is row equivalent to A...
 1.5.26: Prove that B is row equivalent to A if and only if there exists a n...
 1.5.27: Is it possible for a singular matrix B to be row equivalent to a no...
1.5.28: Given a vector x ∈ Rⁿ⁺¹, the (n+1)×(n+1) matrix V defined by vij...
1.5.29: If A is row equivalent to I and AB = AC, then B must equal C.
 1.5.30: If E and F are elementary matrices and G = EF, then G is nonsingula...
1.5.31: If A is a 4×4 matrix and a1 + a2 = a3 + 2a4, then A must be singular.
 1.5.32: If A is row equivalent to both B and C, then A is row equivalent to...
Solutions for Chapter 1.5: Elementary Matrices
Full solutions for Linear Algebra with Applications, 8th Edition
ISBN: 9780136009290

Big formula for n by n determinants.
Det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A, with rows in order 1, ..., n and column order given by a permutation P. Each of the n! permutations P carries a + or − sign.
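
The n!-term sum above can be written out directly. A minimal sketch in plain Python (the helper names `perm_sign` and `det_big_formula` are illustrative, not from the text):

```python
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation (tuple of column indices): +1 or -1 by inversion count."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det_big_formula(A):
    """Sum over all n! permutations P of sign(P) * a[0][p0] * ... * a[n-1][p_{n-1}]."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = perm_sign(p)
        for row, col in enumerate(p):
            term *= A[row][col]
        total += term
    return total

print(det_big_formula([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```

This is O(n · n!) work, so it is only a conceptual check, not a practical algorithm.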

Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c₀I + c₁S + ⋯ + cₙ₋₁Sⁿ⁻¹. Cx = convolution c * x. Eigenvectors are the columns of the Fourier matrix F.
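
A quick numerical sketch of this definition, assuming NumPy is available: build C from powers of the cyclic shift S and check that Cx is a cyclic convolution (via the FFT convolution theorem).

```python
import numpy as np

n = 4
c = np.array([1.0, 2.0, 3.0, 4.0])        # coefficients c0..c3 (first column of C)
S = np.roll(np.eye(n), 1, axis=0)         # cyclic shift: S maps e_j to e_{(j+1) mod n}
C = sum(c[k] * np.linalg.matrix_power(S, k) for k in range(n))

# Constant wrap-around diagonals: C[i, j] = c[(i - j) mod n]
y = np.array([1.0, 2.0, 0.0, 0.0])
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(y)))  # cyclic convolution c * y
print(np.allclose(C @ y, conv))           # True
```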

Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).

Condition number
cond(A) = c(A) = ‖A‖‖A⁻¹‖ = σmax/σmin. In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to changes in the input.
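
The ratio σmax/σmin can be checked against the built-in condition number (a sketch assuming NumPy; the 2×2 matrix is just an illustration):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
sigma = np.linalg.svd(A, compute_uv=False)     # singular values, largest first
cond = sigma[0] / sigma[-1]                    # sigma_max / sigma_min
print(np.isclose(cond, np.linalg.cond(A, 2)))  # True
```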

Factorization A = LU.
If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓᵢⱼ (and ℓᵢᵢ = 1) brings U back to A.
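
A minimal sketch of elimination without row exchanges, storing the multipliers in L (the helper name `lu_no_pivot` is illustrative; the matrix is the one from Exercise 1.5.7):

```python
def lu_no_pivot(A):
    """Doolittle elimination: returns L (unit lower triangular) and U with A = LU."""
    n = len(A)
    U = [row[:] for row in A]                                  # working copy becomes U
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]                              # multiplier l_ik
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]                         # subtract m * (row k)
    return L, U

A = [[2.0, 1.0], [6.0, 4.0]]
L, U = lu_no_pivot(A)
print(L)   # [[1.0, 0.0], [3.0, 1.0]]
print(U)   # [[2.0, 1.0], [0.0, 1.0]]
```

No pivoting is done, so this sketch assumes (as the glossary entry does) that elimination succeeds without row exchanges.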

Inverse matrix A⁻¹.
Square matrix with A⁻¹A = I and AA⁻¹ = I. No inverse if det A = 0 (equivalently, rank(A) < n, or Ax = 0 for some nonzero vector x). The inverses of AB and Aᵀ are B⁻¹A⁻¹ and (A⁻¹)ᵀ. Cofactor formula: (A⁻¹)ᵢⱼ = Cⱼᵢ / det A.
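
The two inverse rules can be verified numerically (a sketch assuming NumPy; the matrices are the 2×2 pair from Exercise 1.5.11):

```python
import numpy as np

A = np.array([[3.0, 1.0], [5.0, 2.0]])
B = np.array([[1.0, 2.0], [3.0, 4.0]])
Ainv = np.linalg.inv(A)

# (AB)^{-1} = B^{-1} A^{-1}  -- note the reversed order
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv))  # True
# (A^T)^{-1} = (A^{-1})^T
print(np.allclose(np.linalg.inv(A.T), Ainv.T))                     # True
```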

Left inverse A⁺.
If A has full column rank n, then A⁺ = (AᵀA)⁻¹Aᵀ satisfies A⁺A = Iₙ.
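
For a tall matrix with independent columns, the formula gives a left inverse (a sketch assuming NumPy; the 3×2 matrix is just an illustration):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                    # 3x2, full column rank 2
Aplus = np.linalg.inv(A.T @ A) @ A.T          # A+ = (A^T A)^{-1} A^T
print(np.allclose(Aplus @ A, np.eye(2)))      # True: A+ A = I_2
```

Note A A⁺ is generally not the identity; only the left product recovers Iₙ.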

Length ‖x‖.
Square root of xᵀx (Pythagoras in n dimensions).

Multiplication Ax
= x₁(column 1) + ⋯ + xₙ(column n) = combination of the columns of A.
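
The column-combination view can be checked directly (a sketch assuming NumPy; the small matrix is just an illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([2.0, -1.0])
combo = x[0] * A[:, 0] + x[1] * A[:, 1]   # x1*(column 1) + x2*(column 2)
print(combo, A @ x)                        # both [0. 2.]
```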

Network.
A directed graph with constants c₁, ..., cₘ associated with the edges.

Norm ‖A‖.
The "ℓ² norm" of A is the maximum ratio ‖Ax‖/‖x‖ = σmax. Then ‖Ax‖ ≤ ‖A‖‖x‖, ‖AB‖ ≤ ‖A‖‖B‖, and ‖A + B‖ ≤ ‖A‖ + ‖B‖. Frobenius norm: ‖A‖F² = Σᵢ Σⱼ aᵢⱼ². The ℓ¹ and ℓ∞ norms are the largest column sum and largest row sum of |aᵢⱼ|.
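
The four norms above can be compared against their elementary definitions (a sketch assuming NumPy; the 2×2 matrix is just an illustration):

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])
l2   = np.linalg.norm(A, 2)         # largest singular value sigma_max
fro  = np.linalg.norm(A, 'fro')     # sqrt of the sum of squares of entries
l1   = np.linalg.norm(A, 1)         # largest column sum of |a_ij|
linf = np.linalg.norm(A, np.inf)    # largest row sum of |a_ij|

print(np.isclose(l2, np.linalg.svd(A, compute_uv=False)[0]))  # True
print(l1, linf)                      # 6.0 7.0
```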

Orthogonal matrix Q.
Square matrix with orthonormal columns, so Qᵀ = Q⁻¹. Preserves lengths and angles: ‖Qx‖ = ‖x‖ and (Qx)ᵀ(Qy) = xᵀy. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
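
A rotation is the simplest example: it satisfies QᵀQ = I and preserves length (a sketch assuming NumPy; the angle 0.7 is arbitrary):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # rotation by theta
x = np.array([3.0, 4.0])

print(np.allclose(Q.T @ Q, np.eye(2)))              # True: Q^T = Q^{-1}
print(np.isclose(np.linalg.norm(Q @ x), 5.0))       # True: length preserved
```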

Outer product uvᵀ
= column times row = rank-one matrix.
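
Every row of uvᵀ is a multiple of vᵀ, so the rank is one (a sketch assuming NumPy; the vectors are arbitrary illustrations):

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0, 5.0])
M = np.outer(u, v)                     # 2x3: column u times row v^T
print(np.linalg.matrix_rank(M))        # 1
```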

Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.

Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.

Singular Value Decomposition (SVD).
A = UΣVᵀ = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(Aᵀ), with Avᵢ = σᵢuᵢ and singular values σᵢ > 0. The last columns are orthonormal bases of the nullspaces.
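
Both the factorization A = UΣVᵀ and the relation Avᵢ = σᵢuᵢ can be verified numerically (a sketch assuming NumPy; the 3×2 matrix is just an illustration):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, -3.0],
              [0.0, 0.0]])                       # 3x2
U, s, Vt = np.linalg.svd(A)                      # U is 3x3, Vt is 2x2
Sigma = np.zeros_like(A)
Sigma[:2, :2] = np.diag(s)                       # rectangular "diagonal" factor

print(np.allclose(U @ Sigma @ Vt, A))            # True: A = U Sigma V^T
print(np.allclose(A @ Vt[0], s[0] * U[:, 0]))    # True: A v1 = sigma1 u1
```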

Symmetric factorizations A = LDLᵀ and A = QΛQᵀ.
Signs of the eigenvalues in Λ = signs of the pivots in D.

Symmetric matrix A.
The transpose is Aᵀ = A, and aᵢⱼ = aⱼᵢ. A⁻¹ is also symmetric.

Triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖.
For matrix norms, ‖A + B‖ ≤ ‖A‖ + ‖B‖.

Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.