- 7.6.1: Let A = [1 1; 1 1]. (a) Apply one iteration of the power method to A, w...
- 7.6.2: Let A = [2 1 0; 1 3 1; 0 1 2] and u0 = (1, 1, 1)^T. (a) Apply the power method...
- 7.6.3: Let A = [1 2; 1 1] and u0 = (1, 1)^T. (a) Compute u1, u2, u3, and u4, using ...
- 7.6.4: Let A = A1 = [1 1; 1 3]. Compute A2 and A3, using the QR algorithm. Com...
- 7.6.5: Let A = [5 2 2; 2 1 2; 3 4 2]. (a) Verify that λ1 = 4 is an eigenvalue of...
- 7.6.6: Let A be an n × n matrix with distinct real eigenvalues λ1, λ2, . . . ,...
- 7.6.7: Let x = (x1, . . . , xn)^T be an eigenvector of A belonging to λ. Sho...
- 7.6.8: Let λ be an eigenvalue of an n × n matrix A. Show that |λ - a_jj| ≤ Σ_{i=1}^n ...
- 7.6.9: Let A be a matrix with eigenvalues λ1, . . . , λn and let λ be an eigen...
- 7.6.10: Let Ak = Qk Rk , k = 1, 2, . . . be the sequence of matrices derive...
- 7.6.11: Let Pk and Uk be defined as in Exercise 10. Show that (a) Pk+1Uk+1 ...
- 7.6.12: Let Rk be a k × k upper triangular matrix and suppose that RkUk = UkD...
- 7.6.13: Let R be an n × n upper triangular matrix whose diagonal entries are ...
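The power method and the unshifted QR algorithm used in the exercises above can be sketched as follows. This is a minimal illustration, not the textbook's solution method: NumPy is assumed, the helper names are mine, and the 3 × 3 matrix is the one from Exercise 7.6.2 as reconstructed above.

```python
import numpy as np

def power_method(A, u0, iters=50):
    """Repeatedly apply A and rescale; u converges to the dominant eigenvector."""
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        u = A @ u
        u = u / np.linalg.norm(u, np.inf)   # scale so the largest entry is 1
    lam = (u @ A @ u) / (u @ u)             # Rayleigh quotient estimates lambda_1
    return lam, u

def qr_algorithm(A, iters=100):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k has the same eigenvalues as A_k."""
    Ak = np.asarray(A, dtype=float).copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak   # approaches an (upper) triangular form for matrices like these

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, u = power_method(A, [1.0, 1.0, 1.0])   # dominant eigenvalue is 4
T = qr_algorithm(A)                          # diagonal approaches 4, 2, 1
```

The eigenvalues appear on the diagonal of the limit; the power method only finds the dominant pair.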
Solutions for Chapter 7.6: The Eigenvalue Problem
Full solutions for Linear Algebra with Applications | 8th Edition
Change of basis matrix M.
The old basis vectors v_j are combinations Σ m_ij w_i of the new basis vectors. The coordinates of c1 v1 + · · · + cn vn = d1 w1 + · · · + dn wn are related by d = M c. (For n = 2, set v1 = m11 w1 + m21 w2, v2 = m12 w1 + m22 w2.)
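A small numerical sketch of d = Mc for n = 2; the basis vectors and the matrix M here are chosen purely for illustration (NumPy assumed):

```python
import numpy as np

# Column j of M holds the w-coordinates of v_j:
# v1 = m11*w1 + m21*w2, v2 = m12*w1 + m22*w2
w1, w2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v1 = M[0, 0] * w1 + M[1, 0] * w2
v2 = M[0, 1] * w1 + M[1, 1] * w2

c = np.array([1.0, 2.0])       # old coordinates: x = c1*v1 + c2*v2
d = M @ c                      # new coordinates: x = d1*w1 + d2*w2
x_old = c[0] * v1 + c[1] * v2
x_new = d[0] * w1 + d[1] * w2  # same vector, expressed two ways
```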
Cholesky factorization A = C^T C = (L√D)(L√D)^T for positive definite A.
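As a sketch, NumPy's `cholesky` returns the lower-triangular factor L√D directly, so C is its transpose; the positive definite example matrix here is mine:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])       # symmetric positive definite

L = np.linalg.cholesky(A)        # lower triangular with positive diagonal: L*sqrt(D)
C = L.T                          # upper triangular factor, so A = C^T C
```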
Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.
Dot product = Inner product x^T y = x1 y1 + · · · + xn yn.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)_ij = (row i of A) · (column j of B).
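A short NumPy illustration with example vectors of my choosing; note that NumPy's `vdot` conjugates its first argument, matching the complex dot product x̄^T y:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, -1.0, 0.0])
dot = x @ y                 # x1*y1 + x2*y2 + x3*y3 = 0, so x is perpendicular to y

# Complex dot product conjugates the first factor
z = np.array([1 + 1j, 2j])
inner = np.vdot(z, z)       # |z1|^2 + |z2|^2 = 6, a real squared length
```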
Ellipse (or ellipsoid) x^T A x = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^{-1} y||^2 = y^T (A A^T)^{-1} y = 1 displayed by eigshow; axis lengths σ_i.)
Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicularity from Ax = 0, with dimensions r and n - r). Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.
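To illustrate, a NumPy sketch (the rank-1 example matrix is mine) that extracts bases for N(A) and C(A^T) from the SVD and checks the perpendicularity and the dimensions r and n - r:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank r = 1, n = 3
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
row_basis = Vt[:r].T              # columns span C(A^T), dimension r
null_basis = Vt[r:].T             # columns span N(A), dimension n - r

# every row-space vector is perpendicular to every nullspace vector
perp = row_basis.T @ null_basis   # should be the zero matrix
```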
Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.
Iterative method.
A sequence of steps intended to approach the desired solution.
Jordan form J = M^{-1} A M.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J1, . . . , Js). The block Jk is λk I + Nk, where Nk has 1's on the superdiagonal. Each block has one eigenvalue λk and one eigenvector.
Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = c T(v) + d T(w). Examples: matrix multiplication Av, differentiation and integration in function space.
Normal equation A^T A x̂ = A^T b.
Gives the least squares solution x̂ to Ax = b if A has full rank n (independent columns). The equation says that (columns of A) · (b - A x̂) = 0.
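A minimal least-squares sketch along these lines, assuming NumPy and a small made-up data set (fitting a line b ≈ c0 + c1 t):

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 4.0])
A = np.column_stack([np.ones_like(t), t])   # full column rank n = 2

# Solve the normal equation A^T A x_hat = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
residual = b - A @ x_hat                    # perpendicular to every column of A
```

For larger problems `np.linalg.lstsq` is the more stable route, but the normal equation shows the geometry directly.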
Orthogonal subspaces V and W.
Every v in V is orthogonal to every w in W.
Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.
Reflection matrix (Householder) Q = I - 2uu^T.
Unit vector u is reflected to Qu = -u. All x in the plane mirror u^T x = 0 have Qx = x. Notice Q^T = Q^{-1} = Q.
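A quick numerical check of these three properties, with an example unit vector chosen for illustration:

```python
import numpy as np

u = np.array([0.6, 0.8])                 # unit vector, ||u|| = 1
Q = np.eye(2) - 2 * np.outer(u, u)       # Householder reflection I - 2uu^T

x = np.array([-0.8, 0.6])                # in the mirror plane: u^T x = 0
# Q reflects u to -u, fixes x, and satisfies Q^T = Q^{-1} = Q
```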
Row space C(A^T) = all combinations of rows of A.
Column vectors by convention.
Saddle point of f(x1, . . . , xn).
A point where the first derivatives of f are zero and the second-derivative matrix (∂²f/∂xi∂xj = Hessian matrix) is indefinite.
Symmetric factorizations A = LDL^T and A = QΛQ^T.
The signs in Λ are the same as the signs in D.
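A small check of this sign rule, with an indefinite example matrix of my choosing and a hand-rolled pivot computation (NumPy assumed):

```python
import numpy as np

def pivots(A):
    """Pivots of A from elimination without row exchanges (assumes none are needed)."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    for j in range(n):
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]
    return np.diag(U)

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])       # symmetric, indefinite
D = pivots(A)                     # pivots 1 and -3: one plus, one minus
eigs = np.linalg.eigvalsh(A)      # eigenvalues -1 and 3: one minus, one plus
```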
Transpose matrix AT.
Entries (A^T)_ij = A_ji. If A is m by n, then A^T is n by m; A^T A is square, symmetric, positive semidefinite. The transposes of AB and A^{-1} are B^T A^T and (A^T)^{-1}.
Vandermonde matrix V.
V c = b gives the coefficients of p(x) = c0 + · · · + c_{n-1} x^{n-1} with p(x_i) = b_i. V_ij = (x_i)^{j-1} and det V = product of (x_k - x_i) for k > i.
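A sketch using NumPy's `vander` (with `increasing=True` so that V_ij = (x_i)^{j-1} as above); the three interpolation points are mine:

```python
import numpy as np

x_pts = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 3.0, 7.0])

V = np.vander(x_pts, increasing=True)   # rows [1, x_i, x_i^2]
c = np.linalg.solve(V, b)               # coefficients of p(x) = c0 + c1 x + c2 x^2

detV = np.linalg.det(V)                 # (1-0)(2-0)(2-1) = 2
```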
Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w00(2^j t - k).
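As a sketch of this stretch-and-shift construction, assuming the Haar mother wavelet for w00 (the entry above does not fix a particular w00):

```python
import numpy as np

def w00(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0) & (t < 0.5), 1.0,
           np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def w(j, k, t):
    """Stretched and shifted copies w_jk(t) = w00(2^j t - k)."""
    return w00(2.0 ** j * t - k)
```

For example, w(1, 1, t) is supported on [1/2, 1): half the width of w00, shifted right.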