6.4.1: In Exercises 1–2, find the associated normal equation. 1 1 2 3 4 5 x...
6.4.2: In Exercises 1–2, find the associated normal equation. 2 1 0 312 145...
6.4.3: In Exercises 3–6, find the least squares solution of the equation Ax...
6.4.4: In Exercises 3–6, find the least squares solution of the equation Ax...
6.4.5: In Exercises 3–6, find the least squares solution of the equation Ax...
6.4.6: In Exercises 3–6, find the least squares solution of the equation Ax...
6.4.7: In Exercises 7–10, find the least squares error vector and least squ...
6.4.8: In Exercises 7–10, find the least squares error vector and least squ...
6.4.9: In Exercises 7–10, find the least squares error vector and least squ...
6.4.10: In Exercises 7–10, find the least squares error vector and least squ...
6.4.11: In Exercises 11–14, find parametric equations for all least squares ...
6.4.12: In Exercises 11–14, find parametric equations for all least squares ...
6.4.13: In Exercises 11–14, find parametric equations for all least squares ...
6.4.14: In Exercises 11–14, find parametric equations for all least squares ...
6.4.15: In Exercises 15–16, use Theorem 6.4.2 to find the orthogonal project...
6.4.16: In Exercises 15–16, use Theorem 6.4.2 to find the orthogonal project...
 6.4.17: Find the orthogonal projection of u on the subspace of R3 spanned b...
 6.4.18: Find the orthogonal projection of u on the subspace of R4 spanned b...
6.4.19: In Exercises 19–20, use the method of Example 3 to find the standard...
6.4.20: In Exercises 19–20, use the method of Example 3 to find the standard...
6.4.21: In Exercises 21–22, use the method of Example 3 to find the standard...
6.4.22: In Exercises 21–22, use the method of Example 3 to find the standard...
6.4.23: In Exercises 23–24, a QR-factorization of A is given. Use it to find...
6.4.24: In Exercises 23–24, a QR-factorization of A is given. Use it to find...
6.4.25: Let W be the plane with equation 5x − 3y + z = 0. (a) Find a basis fo...
 6.4.26: Let W be the line with parametric equations x = 2t, y = t, z = 4t (...
 6.4.27: Find the orthogonal projection of u = (5, 6, 7, 2) on the solution ...
 6.4.28: Show that if w = (a, b, c) is a nonzero vector, then the standard m...
6.4.29: Let A be an m × n matrix with linearly independent row vectors. Find ...
 6.4.30: Prove: If A has linearly independent column vectors, and if Ax = b ...
 6.4.31: Prove: If A has linearly independent column vectors, and if b is or...
6.4.32: Prove the implication (b) ⇒ (a) of Theorem 6.4.3.
6.4.TF: TF. In parts (a)–(h) determine whether the statement is true or fals...
 6.4.T1: (a) Use Theorem 6.4.4 to show that the following linear system has ...
Solutions for Chapter 6.4: Best Approximation; Least Squares
Full solutions for Elementary Linear Algebra, Binder Ready Version: Applications Version, 11th Edition
ISBN: 9781118474228
Since 34 problems in Chapter 6.4: Best Approximation; Least Squares have been answered, more than 15985 students have viewed full step-by-step solutions from this chapter. Elementary Linear Algebra, Binder Ready Version: Applications Version is associated with ISBN 9781118474228. Chapter 6.4 includes 34 full step-by-step solutions. This textbook survival guide was created for the textbook: Elementary Linear Algebra, Binder Ready Version: Applications Version, edition: 11.

Cholesky factorization
A = C^T C = (L√D)(L√D)^T for positive definite A.
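As a minimal sketch (assuming NumPy is available), the factor C can be computed numerically; NumPy returns the lower-triangular factor directly:

```python
import numpy as np

# A symmetric positive definite matrix (chosen here for illustration).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# numpy returns the lower-triangular factor C with A = C C^T,
# matching the glossary's A = (L sqrt(D)) (L sqrt(D))^T.
C = np.linalg.cholesky(A)

# The factorization reproduces A.
assert np.allclose(C @ C.T, A)
# C is lower triangular: entries above the diagonal are zero.
assert np.allclose(np.triu(C, 1), 0)
```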

Column space C(A) =
space of all combinations of the columns of A.

Dimension of vector space
dim(V) = number of vectors in any basis for V.

Elimination matrix = Elementary matrix E_ij.
The identity matrix with an extra −ℓ_ij in the i, j entry (i ≠ j). Then E_ij A subtracts ℓ_ij times row j of A from row i.

Ellipse (or ellipsoid) x^T A x = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^{-1} y||^2 = y^T (A A^T)^{-1} y = 1 displayed by eigshow; axis lengths σ_i.)

Factorization A = LU.
If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.

Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.

Jordan form J = M^{-1} A M.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k where N_k has 1's on diagonal 1. Each block has one eigenvalue λ_k and one eigenvector.

Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^{j−1} b. Numerical methods approximate A^{-1} b by x_j with residual b − A x_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
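The "one multiplication by A per step" remark can be sketched as follows (a hypothetical `krylov_basis` helper, assuming NumPy):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Return the matrix with columns b, Ab, ..., A^(j-1) b.
    Each new column needs only one matrix-vector product with A."""
    cols = [b]
    for _ in range(j - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0])

K = krylov_basis(A, b, 2)          # columns b and Ab

# For this 2x2 example the exact solution A^{-1} b already lies in K_2
# (Cayley-Hamilton), so K's columns can represent it exactly.
x = np.linalg.solve(A, b)
c = np.linalg.solve(K, x)
assert np.allclose(K @ c, x)
```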

|A^{-1}| = 1/|A| and |A^T| = |A|.
The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n − 1, volume of box = |det(A)|.

Left nullspace N(A^T).
Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.

Lucas numbers
L_n = 2, 1, 3, 4, 7, ... satisfy L_n = L_{n−1} + L_{n−2} = λ_1^n + λ_2^n, with λ_1, λ_2 = (1 ± √5)/2 from the Fibonacci matrix [1 1; 1 0]. Compare L_0 = 2 with F_0 = 0.
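A quick check that the recurrence and the eigenvalue formula agree (a small sketch; the `lucas` helper is for illustration only):

```python
# Lucas numbers by the recurrence L_n = L_{n-1} + L_{n-2}.
def lucas(n):
    a, b = 2, 1          # L_0 = 2, L_1 = 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Eigenvalues of the Fibonacci matrix [[1, 1], [1, 0]].
lam1 = (1 + 5 ** 0.5) / 2
lam2 = (1 - 5 ** 0.5) / 2

# L_n = lam1^n + lam2^n for every n.
assert all(abs(lucas(n) - (lam1 ** n + lam2 ** n)) < 1e-9 for n in range(10))
```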

Normal equation A^T A x̂ = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b − A x̂) = 0.
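A small worked example (assuming NumPy; the matrix and right side are chosen for illustration) showing that solving the normal equation gives the same answer as a library least-squares routine, and that the residual is orthogonal to the columns of A:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns, no exact solution.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Solve the normal equation A^T A x = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # x_hat = [5., -3.]

# The residual b - A x_hat is orthogonal to every column of A.
assert np.allclose(A.T @ (b - A @ x_hat), 0)

# Matches NumPy's built-in least squares solver.
assert np.allclose(x_hat, np.linalg.lstsq(A, b, rcond=None)[0])
```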

Nullspace N(A)
= All solutions to Ax = 0. Dimension n − r = (# columns) − rank.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S, error e = b − Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If columns of A = basis for S then P = A (A^T A)^{-1} A^T.
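The listed properties of P can all be verified numerically; a sketch assuming NumPy, with an example basis chosen for illustration:

```python
import numpy as np

# Columns of A form a basis for the subspace S.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])

# Projection matrix onto the column space of A.
P = A @ np.linalg.inv(A.T @ A) @ A.T

# P is symmetric and idempotent: P^2 = P = P^T.
assert np.allclose(P, P.T)
assert np.allclose(P @ P, P)

# Eigenvalues are 1 (on S) or 0 (on S-perp); here S is 2-dimensional in R^3.
assert np.allclose(np.linalg.eigvalsh(P), [0.0, 1.0, 1.0])

b = np.array([6.0, 0.0, 0.0])
p = P @ b                              # closest point to b in S
assert np.allclose(A.T @ (b - p), 0)   # error is perpendicular to S
```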

Pseudoinverse A^+ (Moore–Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and A A^+ are the projection matrices onto the row space and column space. Rank(A^+) = rank(A).
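These properties can be checked with NumPy's built-in pseudoinverse (a sketch; the rank-1 matrix is chosen so that no ordinary inverse exists):

```python
import numpy as np

# A rank-1 matrix: dependent columns, so A is singular.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

A_plus = np.linalg.pinv(A)

# Moore-Penrose identities: A A+ A = A and A+ A A+ = A+.
assert np.allclose(A @ A_plus @ A, A)
assert np.allclose(A_plus @ A @ A_plus, A_plus)

# A+ A and A A+ are symmetric projection matrices
# (onto the row space and column space of A).
assert np.allclose((A_plus @ A).T, A_plus @ A)
assert np.allclose((A @ A_plus).T, A @ A_plus)

# rank(A+) = rank(A).
assert np.linalg.matrix_rank(A_plus) == np.linalg.matrix_rank(A)
```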

Similar matrices A and B.
Every B = M^{-1} A M has the same eigenvalues as A.
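A one-line numerical confirmation (assuming NumPy; A and M are arbitrary illustrative choices, M invertible):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # any invertible M works

B = np.linalg.inv(M) @ A @ M

# Similar matrices share the same eigenvalues (here 2 and 3).
assert np.allclose(sorted(np.linalg.eigvals(B).real),
                   sorted(np.linalg.eigvals(A).real))
```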

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.

Volume of box.
The rows (or the columns) of A generate a box with volume I det(A) I.