- 7.7.1: Find the solution x to the least squares problem, given that A = QR...
- 7.7.2: Let A = (D E) = (d1 d2 . . . dn e1 e2 . . . en) and b = (b1, b2, . . . , b2n)^T. U...
- 7.7.3: Let A = [1 2; 1 3; 1 2; 1 1] (rows listed) and b = (3, 10, 3, 6)^T. (a) Use Householder transfor...
- 7.7.4: Let A = [1 1; ε 0; 0 ε] (rows listed), where ε is a small scalar. (a) Determine the si...
- 7.7.5: Show that the pseudoinverse A+ satisfies the four Penrose conditions.
- 7.7.6: Let B be any matrix that satisfies Penrose conditions 1 and 3, and ...
- 7.7.7: If x ∈ R^m, we can think of x as an m × 1 matrix. If x ≠ 0, we can then...
- 7.7.8: Show that if A is an m × n matrix of rank n, then A+ = (A^T A)^{-1} A^T.
- 7.7.9: Let A be an m × n matrix and let b ∈ R^m. Show that b ∈ R(A) if and only i...
- 7.7.10: Let A be an m × n matrix with singular value decomposition UΣV^T, an...
- 7.7.11: Let A = [1 1; 1 1; 0 0] (rows listed). Determine A+ and verify that A and A+ satisfy t...
- 7.7.12: Let A = [1 2; 1 2] and b = (6, 4)^T. (a) Compute the singular value decompos...
- 7.7.13: Show each of the following: (a) (A+)+ = A (b) (AA+)^2 = AA+ (c) (...
- 7.7.14: Let A1 = UΣ1V^T and A2 = UΣ2V^T, where Σ1 = diag(σ1, . . . , σr1, 0, . . . , 0) a...
- 7.7.15: Let A = XY^T, where X is an m × r matrix, Y^T is an r × n matrix, and X...
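The least squares machinery running through these exercises can be checked numerically. A minimal sketch with illustrative data (the 4 × 2 matrix and right-hand side are read from 7.7.3, assuming that row-by-row shape; everything else is an assumption): it solves Ax ≈ b by QR and by the normal equations, then verifies the full-rank pseudoinverse formula A+ = (A^T A)^{-1} A^T from 7.7.8.

```python
import numpy as np

# Overdetermined system Ax ≈ b (data from Exercise 7.7.3, assuming
# the flattened entries form a 4 x 2 matrix read row by row).
A = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [1.0, 2.0],
              [1.0, 1.0]])
b = np.array([3.0, 10.0, 3.0, 6.0])

# QR route: A = QR, then solve the triangular system R x = Q^T b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Normal equations route: A^T A x = A^T b gives the same x.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x_qr, x_ne)

# A has full column rank, so A+ = (A^T A)^{-1} A^T (Exercise 7.7.8),
# and the least squares solution is x = A+ b.
Aplus = np.linalg.pinv(A)
assert np.allclose(Aplus, np.linalg.solve(A.T @ A, A.T))
assert np.allclose(Aplus @ b, x_qr)
```

The residual b − Ax is orthogonal to the column space of A, which is the defining property of the least squares solution.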
Solutions for Chapter 7.7: Least Squares Problems
Full solutions for Linear Algebra with Applications | 8th Edition
Change of basis matrix M.
The old basis vectors v_j are combinations Σ_i m_ij w_i of the new basis vectors. The coordinates of c1v1 + · · · + cnvn = d1w1 + · · · + dnwn are related by d = Mc. (For n = 2, set v1 = m11w1 + m21w2, v2 = m12w1 + m22w2.)
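The relation d = Mc can be verified numerically. A small sketch with an assumed pair of bases for R^2 (the matrices below are illustrative, not from the text):

```python
import numpy as np

# Two bases for R^2: new basis w1, w2 and old basis v1, v2 (columns).
W = np.array([[1.0, 1.0],
              [0.0, 1.0]])
V = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# v_j = sum_i m_ij w_i  means  V = W M, so M = W^{-1} V.
M = np.linalg.solve(W, V)

# If x = c1 v1 + c2 v2, its new coordinates are d = M c.
c = np.array([3.0, -2.0])
d = M @ c
x = V @ c
assert np.allclose(W @ d, x)   # same vector, expressed in the w basis
```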
Cholesky factorization.
A = CC^T = (L√D)(L√D)^T for positive definite A.
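A quick numerical check of the factorization on an assumed 2 × 2 positive definite matrix; numpy returns the lower triangular factor C with A = CC^T:

```python
import numpy as np

# An illustrative positive definite matrix.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# numpy's Cholesky gives lower triangular C with A = C C^T.
C = np.linalg.cholesky(A)
assert np.allclose(C @ C.T, A)
assert np.allclose(C, np.tril(C))   # C really is lower triangular
```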
Condition number cond(A).
cond(A) = κ(A) = ‖A‖ ‖A^{-1}‖ = σmax/σmin. In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to change in the input.
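The σmax/σmin formula can be confirmed against numpy's built-in condition number, using an assumed nearly singular matrix:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])       # nearly singular, so badly conditioned

# cond(A) = sigma_max / sigma_min, the ratio of extreme singular values.
sigmas = np.linalg.svd(A, compute_uv=False)
kappa = sigmas[0] / sigmas[-1]
assert np.isclose(kappa, np.linalg.cond(A))   # numpy's 2-norm condition number
```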
Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
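Elimination without row exchanges can be sketched directly; the loop below records each multiplier ℓij in L and checks A = LU on an assumed 3 × 3 example:

```python
import numpy as np

# An example where no row exchanges are needed (illustrative data).
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

n = A.shape[0]
U = A.copy()
L = np.eye(n)
for j in range(n - 1):
    for i in range(j + 1, n):
        L[i, j] = U[i, j] / U[j, j]      # multiplier l_ij = entry / pivot
        U[i, :] -= L[i, j] * U[j, :]     # subtract l_ij times pivot row j

assert np.allclose(L @ U, A)             # A = LU
assert np.allclose(U, np.triu(U))        # U is upper triangular
```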
Fast Fourier Transform (FFT).
A factorization of the Fourier matrix Fn into ℓ = log2 n matrices Si times a permutation. Each Si needs only n/2 multiplications, so Fnx and Fn^{-1}c can be computed with nℓ/2 multiplications. Revolutionary.
Fourier matrix F.
Entries Fjk = e^{2πijk/n} give orthogonal columns: F̄^T F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform: yj = Σ ck e^{2πijk/n}.
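Both claims, orthogonal columns and y = Fc, can be tested numerically. Note that numpy's fft uses the opposite sign e^{-2πijk/n}, so Fc matches n times numpy's ifft (a sketch with assumed n = 4 and illustrative data):

```python
import numpy as np

n = 4
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(2j * np.pi * j * k / n)          # F_jk = e^{2πijk/n}

# Orthogonal columns: conj(F)^T F = n I.
assert np.allclose(F.conj().T @ F, n * np.eye(n))

# y = F c is the inverse-sign DFT; numpy's ifft divides by n,
# so F c equals n * ifft(c).
c = np.array([1.0, 2.0, 3.0, 4.0])
y = F @ c
assert np.allclose(y, n * np.fft.ifft(c))
```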
Hankel matrix H.
Constant along each antidiagonal; hij depends on i + j.
Kronecker product (tensor product) A ® B.
Blocks aij B; eigenvalues λp(A)λq(B).
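Both properties are easy to check with numpy (matrices here are assumed, chosen triangular so the eigenvalues sit on the diagonals):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = np.array([[4.0, 1.0],
              [0.0, 2.0]])

K = np.kron(A, B)                  # block structure: a_ij * B

# Eigenvalues of A ⊗ B are all products λ_p(A) λ_q(B).
eigs_K = np.sort(np.linalg.eigvals(K))
products = np.sort(np.outer(np.linalg.eigvals(A),
                            np.linalg.eigvals(B)).ravel())
assert np.allclose(eigs_K, products)
```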
Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.
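The linearity requirement is directly testable for the matrix multiplication example (random data, seeded for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))    # T(v) = A v maps R^2 to R^3
v = rng.standard_normal(2)
w = rng.standard_normal(2)
c, d = 2.0, -3.0

# Linearity: T(cv + dw) = c T(v) + d T(w).
lhs = A @ (c * v + d * w)
rhs = c * (A @ v) + d * (A @ w)
assert np.allclose(lhs, rhs)
```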
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
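The gap between AM and GM shows up for a defective matrix; a sketch using the standard 2 × 2 Jordan block (an assumed example, not from the text):

```python
import numpy as np

# Eigenvalue 1 is repeated (AM = 2) but has only one independent
# eigenvector (GM = 1): a defective matrix.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigs = np.linalg.eigvals(A)
assert np.allclose(eigs, [1.0, 1.0])             # algebraic multiplicity 2

# GM = dimension of the nullspace of A - I = n - rank(A - I).
gm = 2 - np.linalg.matrix_rank(A - np.eye(2))
assert gm == 1
```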
Multiplier ℓij.
The pivot row j is multiplied by ℓij and subtracted from row i to eliminate the i, j entry: ℓij = (entry to eliminate)/(jth pivot).
Normal matrix N.
If NN^T = N^T N, then N has orthonormal (complex) eigenvectors.
Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers have |ℓij| ≤ 1. See condition number.
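A sketch of elimination with partial pivoting on an assumed 2 × 2 example with a tiny pivot; the row exchange keeps every multiplier at most 1 in magnitude:

```python
import numpy as np

# Without the exchange, the multiplier would be 1e8; with it, 1e-8.
A = np.array([[1e-8, 1.0],
              [1.0,  1.0]])

n = A.shape[0]
U = A.copy()
P = np.eye(n)
L = np.eye(n)
for j in range(n - 1):
    p = j + np.argmax(np.abs(U[j:, j]))   # row with the largest pivot
    U[[j, p]] = U[[p, j]]                 # swap that row up
    P[[j, p]] = P[[p, j]]
    L[[j, p], :j] = L[[p, j], :j]         # keep earlier multipliers aligned
    for i in range(j + 1, n):
        L[i, j] = U[i, j] / U[j, j]
        U[i, :] -= L[i, j] * U[j, :]

assert np.allclose(P @ A, L @ U)          # PA = LU with row exchanges in P
assert np.max(np.abs(np.tril(L, -1))) <= 1.0
```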
Particular solution xp.
Any solution to Ax = b; often xp has free variables = 0.
Row picture of Ax = b.
Each equation gives a plane in Rn; the planes intersect at x.
Singular matrix A.
A square matrix that has no inverse: det(A) = 0.
Skew-symmetric matrix K.
The transpose is -K, since Kij = -Kji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^{Kt} is an orthogonal matrix.
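The pure imaginary eigenvalues are easy to confirm on an assumed 2 × 2 example:

```python
import numpy as np

K = np.array([[0.0, 2.0],
              [-2.0, 0.0]])        # K^T = -K
assert np.allclose(K.T, -K)

# Eigenvalues of a skew-symmetric matrix are pure imaginary: here ±2i.
eigs = np.linalg.eigvals(K)
assert np.allclose(eigs.real, 0.0)
```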
Standard basis for Rn.
Columns of the n by n identity matrix (written i, j, k in R3).
Unitary matrix U^H = Ū^T = U^{-1}.
Orthonormal columns (complex analog of Q).
Volume of box.
The rows (or the columns) of A generate a box with volume |det(A)|.
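A quick check on an assumed sheared box: the base is 2 × 3 and the height is 4, so the volume should be |det A| = 24:

```python
import numpy as np

# The box (parallelepiped) spanned by the rows of A has volume |det A|.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [1.0, 1.0, 4.0]])    # sheared box: base 2 x 3, height 4

vol = abs(np.linalg.det(A))
assert np.isclose(vol, 24.0)       # shearing does not change the volume
```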