- 9.1.1: Use the method of Example 1 and the LU-decomposition 3 6 2 5 = 3 0 ...
- 9.1.2: Use the method of Example 1 and the LU-decomposition 3 6 3 206 474 ...
- 9.1.3: In Exercises 3–6, find an LU-decomposition of the coefficient matrix...
- 9.1.4: In Exercises 3–6, find an LU-decomposition of the coefficient matrix...
- 9.1.5: In Exercises 3–6, find an LU-decomposition of the coefficient matrix...
- 9.1.6: In Exercises 3–6, find an LU-decomposition of the coefficient matrix...
- 9.1.7: In Exercises 7–8, an LU-decomposition of a matrix A is given. (a) Co...
- 9.1.8: In Exercises 7–8, an LU-decomposition of a matrix A is given. (a) Co...
- 9.1.9: Let A = 2 1 1 2 1 2 2 1 0 (a) Find an LU-decomposition of A. (b) Expr...
- 9.1.10: (a) Show that the matrix 0 1 1 0 has no LU-decomposition. (b) Find ...
- 9.1.11: In Exercises 11–12, use the given PLU-decomposition of A to solve th...
- 9.1.12: In Exercises 11–12, use the given PLU-decomposition of A to solve th...
- 9.1.13: In Exercises 13–14, find the LDU-decomposition of A. A = 2 2 4 1
- 9.1.14: In Exercises 13–14, find the LDU-decomposition of A. A = 3 12 6 020 ...
- 9.1.15: In Exercises 15–16, find a PLU-decomposition of A, and use it to sol...
- 9.1.16: In Exercises 15–16, find a PLU-decomposition of A, and use it to sol...
- 9.1.17: Let Ax = b be a linear system of n equations in n unknowns, and ass...
- 9.1.18: Let A = a b c d (a) Prove: If a ≠ 0, then the matrix A has a unique...
- 9.1.19: Prove: If A is any n × n matrix, then A can be factored as A = PLU, w...
- 9.1.TF: In parts (a)–(e) determine whether the statement is true or fals...
- 9.1.T1: Technology utilities vary in how they handle LU-decompositions. For...
- 9.1.T2: The accompanying figure shows a metal plate whose edges are held at...
Solutions for Chapter 9.1: LU-Decompositions
Full solutions for Elementary Linear Algebra, Binder Ready Version: Applications Version | 11th Edition
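The common thread in the exercises above is: factor A = LU (or PA = LU) once, then solve Ax = b with two triangular solves. A minimal sketch in Python using NumPy and SciPy; the 2×2 system here is an illustrative placeholder, not one of the truncated exercise matrices:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Illustrative system (placeholder values, not from the exercises).
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

# Factor once: PA = LU, stored compactly in (lu, piv).
lu, piv = lu_factor(A)
# Solve by forward substitution (Ly = Pb) then back substitution (Ux = y).
x = lu_solve((lu, piv), b)

assert np.allclose(x, [1.0, 2.0])
```

The factorization can be reused for many right-hand sides b without refactoring A, which is the main practical point of the LU approach.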
Change of basis matrix M.
The old basis vectors v_j are combinations Σ_i m_ij w_i of the new basis vectors. The coordinates of c_1 v_1 + ... + c_n v_n = d_1 w_1 + ... + d_n w_n are related by d = Mc. (For n = 2: v_1 = m_11 w_1 + m_21 w_2, v_2 = m_12 w_1 + m_22 w_2.)
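As a quick numerical check of the rule d = Mc, here is an n = 2 sketch in Python; the bases and the matrix M are made-up example values:

```python
import numpy as np

# New basis w1, w2 and a change-of-basis matrix M (example values).
w1, w2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Old basis vectors: v_j = sum_i M[i, j] * w_i (columns of M give the combinations).
v1 = M[0, 0] * w1 + M[1, 0] * w2
v2 = M[0, 1] * w1 + M[1, 1] * w2

c = np.array([1.0, 2.0])   # coordinates in the old basis
d = M @ c                  # coordinates in the new basis

# Same vector either way: c1 v1 + c2 v2 = d1 w1 + d2 w2.
assert np.allclose(c[0] * v1 + c[1] * v2, d[0] * w1 + d[1] * w2)
```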
Cofactor C_ij.
Remove row i and column j; multiply the determinant by (-1)^(i+j).
Condition number cond(A) = c(A) = ||A|| ||A^-1|| = σ_max/σ_min.
In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to changes in the input.
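The bound ||δx||/||x|| ≤ cond(A) ||δb||/||b|| can be checked numerically. A sketch in Python with a nearly singular example matrix (values chosen only to make cond(A) large):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])      # nearly singular, so cond(A) is large
cond = np.linalg.cond(A)           # default 2-norm: sigma_max / sigma_min
assert np.isclose(cond,
                  np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2))

# Perturb b slightly and compare relative changes in input and output.
b = np.array([2.0, 2.0])
db = np.array([0.0, 1e-6])
x = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x
rel_out = np.linalg.norm(dx) / np.linalg.norm(x)
rel_in = np.linalg.norm(db) / np.linalg.norm(b)
assert rel_out <= cond * rel_in * (1 + 1e-9)   # the condition-number bound
```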
Covariance matrix Σ.
When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x - x̄)(x - x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.
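A small sketch in Python, building Σ from toy random data exactly as the definition says and checking the positive semidefinite claim; the data here is an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 1000))     # 3 variables, 1000 samples (toy data)

xbar = X.mean(axis=1, keepdims=True)
# Sigma = mean of (x - xbar)(x - xbar)^T over the samples.
Sigma = (X - xbar) @ (X - xbar).T / X.shape[1]

# Positive semidefinite: all eigenvalues >= 0 (up to rounding).
assert np.all(np.linalg.eigvalsh(Sigma) >= -1e-12)
# Sigma_ij is the average of (x_i - xbar_i)(x_j - xbar_j).
assert np.isclose(Sigma[0, 1], np.mean((X[0] - xbar[0]) * (X[1] - xbar[1])))
```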
Ellipse (or ellipsoid) x^T Ax = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^-1 y||^2 = y^T (AA^T)^-1 y = 1 displayed by eigshow; axis lengths σ_i.)
Exponential e^(At) = I + At + (At)^2/2! + ...
has derivative Ae^(At); e^(At) u(0) solves u' = Au.
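Both claims about e^(At) can be verified numerically with SciPy's `expm`. A sketch, using the rotation generator as an example matrix (chosen because its exact exponential is known):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])        # example: e^{At} = [[cos t, sin t], [-sin t, cos t]]
u0 = np.array([1.0, 0.0])
t = 0.5

# e^{At} u(0) solves u' = Au; for this A the solution is (cos t, -sin t).
u = expm(A * t) @ u0
assert np.allclose(u, [np.cos(t), -np.sin(t)])

# d/dt e^{At} = A e^{At}: central finite-difference check.
h = 1e-6
dEdt = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)
assert np.allclose(dEdt, A @ expm(A * t), atol=1e-6)
```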
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j - 1) = ∫_0^1 x^(i-1) x^(j-1) dx. Positive definite but with extremely small λ_min and a large condition number: H is ill-conditioned.
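A sketch checking both claims with SciPy's `hilbert` (note it uses 0-based indices, so H[i, j] = 1/(i + j + 1)):

```python
import numpy as np
from scipy.linalg import hilbert

H = hilbert(5)
# 1-based entry H_23 = 1/(2 + 3 - 1) = 1/4; 0-based that is H[1, 2].
assert np.isclose(H[1, 2], 1.0 / 4.0)

# Positive definite (all eigenvalues > 0) yet badly conditioned even at n = 5.
eigs = np.linalg.eigvalsh(H)
assert np.all(eigs > 0)
assert np.linalg.cond(H) > 1e5
```

Even for n = 5 the condition number is around 5 × 10^5, so solving Hx = b amplifies rounding errors severely as n grows.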
Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.
Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and -).
Inverse matrix A^-1.
Square matrix with A^-1 A = I and AA^-1 = I. No inverse exists if det A = 0, equivalently rank(A) < n, equivalently Ax = 0 for some nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)_ij = C_ji / det A.
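A sketch verifying the inverse identities and the 2×2 case of the cofactor formula, where (A^-1) = (1/det A) [[d, -b], [-c, a]]; the matrices are arbitrary invertible examples:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])         # det A = 10, so A is invertible
Ainv = np.linalg.inv(A)
assert np.allclose(Ainv @ A, np.eye(2)) and np.allclose(A @ Ainv, np.eye(2))

# (AB)^-1 = B^-1 A^-1 and (A^T)^-1 = (A^-1)^T
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv)
assert np.allclose(np.linalg.inv(A.T), Ainv.T)

# Cofactor formula in the 2x2 case: swap diagonal, negate off-diagonal, divide by det.
a, b, c, d = A.ravel()
assert np.allclose(Ainv, np.array([[d, -b], [-c, a]]) / np.linalg.det(A))
```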
Iterative method.
A sequence of steps intended to approach the desired solution.
Kronecker product (tensor product) A ⊗ B.
Blocks a_ij B; eigenvalues λ_p(A) λ_q(B).
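The block structure and the eigenvalue product rule can both be checked with NumPy's `kron`; the two matrices are small arbitrary examples:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])
K = np.kron(A, B)                  # 4x4 matrix of blocks a_ij * B

# Top-left block of K is a_11 * B.
assert np.allclose(K[:2, :2], A[0, 0] * B)

# Eigenvalues of A (x) B are all products lambda_p(A) * lambda_q(B).
lam_A = np.linalg.eigvals(A)
lam_B = np.linalg.eigvals(B)
products = np.sort_complex(np.outer(lam_A, lam_B).ravel())
assert np.allclose(np.sort_complex(np.linalg.eigvals(K)), products)
```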
Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.
Nullspace N(A).
All solutions to Ax = 0. Dimension n - r = (# columns) - rank.
Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns satisfies Q^T Q = I. If m = n then Q^T = Q^-1 and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
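A sketch in Python: build an orthonormal basis via QR factorization of a random matrix (an arbitrary way to get one) and verify Q^T Q = I, Q^T = Q^-1, and the expansion v = Σ (v^T q_j) q_j:

```python
import numpy as np

rng = np.random.default_rng(1)
# QR of a random square matrix gives orthonormal columns q_1, ..., q_4.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))

assert np.allclose(Q.T @ Q, np.eye(4))       # q_i^T q_j = 0 (i != j), = 1 (i == j)
assert np.allclose(Q.T, np.linalg.inv(Q))    # square case: Q^T = Q^-1

# Orthonormal basis expansion of an arbitrary vector.
v = rng.normal(size=4)
expansion = sum((v @ Q[:, j]) * Q[:, j] for j in range(4))
assert np.allclose(v, expansion)
```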
Pseudoinverse A^+ (Moore–Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and AA^+ are the projection matrices onto the row space and column space. Rank(A^+) = rank(A).
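The projection and rank properties can be verified with NumPy's `pinv`; the matrix below is a rank-deficient example where an ordinary inverse does not exist:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])         # rank 1: no ordinary inverse
Aplus = np.linalg.pinv(A)          # n x m Moore-Penrose pseudoinverse

# A+ A projects onto the row space; A A+ projects onto the column space.
P_row, P_col = Aplus @ A, A @ Aplus
assert np.allclose(P_row @ P_row, P_row) and np.allclose(P_row, P_row.T)
assert np.allclose(P_col @ P_col, P_col) and np.allclose(P_col, P_col.T)

# Rank is preserved: rank(A+) = rank(A).
assert np.linalg.matrix_rank(Aplus) == np.linalg.matrix_rank(A) == 1
```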
Rank-one matrix A = uv^T ≠ 0.
Column and row spaces = lines cu and cv.
Rotation matrix.
R = [c -s; s c] rotates the plane by θ, and R^-1 = R^T rotates it back by -θ. Eigenvalues are e^(iθ) and e^(-iθ); eigenvectors are (1, ∓i). Here c = cos θ, s = sin θ.
Singular Value Decomposition (SVD).
A = UΣV^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with Av_i = σ_i u_i and singular values σ_i > 0. The last columns are orthonormal bases of the nullspaces.
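A sketch computing the SVD of an arbitrary example matrix with NumPy and checking the defining relations, including Av_i = σ_i u_i:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
U, s, Vt = np.linalg.svd(A)        # A = U diag(s) V^T; rows of Vt are v_i^T

# The factorization reproduces A, and U, V have orthonormal columns.
assert np.allclose(U @ np.diag(s) @ Vt, A)
assert np.allclose(U.T @ U, np.eye(2)) and np.allclose(Vt @ Vt.T, np.eye(2))

# A v_i = sigma_i u_i for each singular value.
for i in range(2):
    assert np.allclose(A @ Vt[i], s[i] * U[:, i])
```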
Spanning set v_1, ..., v_m.
Combinations of v_1, ..., v_m fill the space. The columns of A span C(A)!