7.5.1: Compute the condition numbers of the following matrices relative to ...
7.5.2: Compute the condition numbers of the following matrices relative to ...
 7.5.3: The following linear systems Ax = b have x as the actual solution a...
 7.5.4: The following linear systems Ax = b have x as the actual solution a...
7.5.5: (i) Use Gaussian elimination and three-digit rounding arithmetic to...
7.5.6: Repeat Exercise 5 using four-digit rounding arithmetic.
7.5.7: The linear system Ax = b given by x1 + 2x2 = 3, 1.0001x1 + 2x2 = ...
7.5.8: The linear system Ax = b given by x1 + 2x2 = 3, 1.00001x1 + 2x2 = 3.0000...
7.5.9: The n x n Hilbert matrix (see page 519) defined by Hij = 1/(i + j - 1), 1 <= i, j <= ...
7.5.10: Use four-digit rounding arithmetic to compute the inverse H⁻¹ of th...
7.5.11: Show that if B is singular, then 1/K(A) <= ||A - B|| / ||A||. [Hint: There ex...
 7.5.12: Using Exercise 11, estimate the condition numbers for the following...
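Most of the exercises above revolve around the condition number K(A) = ||A|| · ||A⁻¹||. As a minimal sketch, here is a plain-Python computation in the infinity norm for the nearly singular 2 x 2 matrix that appears in these exercises; the function names (`inf_norm`, `inv2`, `cond_inf`) are illustrative, not from the text.

```python
def inf_norm(A):
    # ||A||_inf = maximum absolute row sum
    return max(sum(abs(a) for a in row) for row in A)

def inv2(A):
    # Closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def cond_inf(A):
    # K(A) = ||A|| * ||A^{-1}|| in the infinity norm
    return inf_norm(A) * inf_norm(inv2(A))

A = [[1.0, 2.0], [1.0001, 2.0]]
print(cond_inf(A))  # about 6 * 10^4: the system is badly conditioned
```

A condition number this large means roughly four digits of accuracy can be lost in solving Ax = b, which is why the exercises pair these matrices with three- and four-digit rounding arithmetic.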
Solutions for Chapter 7.5: Error Bounds and Iterative Refinement
Full solutions for Numerical Analysis, 10th Edition
ISBN: 9781305253667

Affine transformation
Tv = Av + v0 = linear transformation plus shift.

Back substitution.
Upper triangular systems are solved in reverse order, xn back to x1.
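The reverse-order solve can be sketched in a few lines of Python (the function name `back_substitute` and the sample system are illustrative):

```python
def back_substitute(U, b):
    # Solve Ux = b for upper triangular U, working from x_n back to x_1
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known components, then divide by the pivot
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 2.0],
     [0.0, 0.0, 4.0]]
b = [7.0, 12.0, 12.0]        # chosen so the exact solution is x = (1, 2, 3)
print(back_substitute(U, b))
```

Each step uses only pivots on the diagonal, so the solve costs O(n²) operations once the system is triangular.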

Change of basis matrix M.
The old basis vectors vj are combinations Σ mij wi of the new basis vectors. The coordinates of c1v1 + ... + cnvn = d1w1 + ... + dnwn are related by d = Mc. (For n = 2: v1 = m11w1 + m21w2, v2 = m12w1 + m22w2.)

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.

Complete solution x = xp + xn to Ax = b.
(Particular solution xp) + (xn in the nullspace).

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing ½xᵀAx − xᵀb over growing Krylov subspaces.
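The minimization above can be sketched in plain Python: a textbook conjugate-gradient loop on a small symmetric positive definite system. The function names and the 2 x 2 test matrix are illustrative, not from the glossary.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def conjugate_gradient(A, b, tol=1e-12):
    # Minimize (1/2) x^T A x - x^T b for symmetric positive definite A;
    # each step searches along a direction p conjugate to the earlier ones.
    n = len(b)
    x = [0.0] * n
    r = b[:]                  # residual b - Ax (x starts at 0)
    p = r[:]                  # first search direction
    rs = dot(r, r)
    for _ in range(n):        # at most n steps in exact arithmetic
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]  # symmetric positive definite
b = [1.0, 2.0]
print(conjugate_gradient(A, b))  # exact solution is (1/11, 7/11)
```

The Krylov subspaces mentioned in the definition are span{b, Ab, A²b, ...}; after k steps the iterate x is the minimizer over the k-th such subspace.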

Fourier matrix F.
Entries Fjk = e^{2πijk/n} give orthogonal columns: F̄ᵀF = nI. Then y = Fc is the (inverse) Discrete Fourier Transform yj = Σ ck e^{2πijk/n}.

Free columns of A.
Columns without pivots; these are combinations of earlier columns.

GaussJordan method.
Invert A by row operations on [A I] to reach [I A⁻¹].
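A sketch of that idea in Python, with partial pivoting added for numerical stability (the glossary entry does not mention pivoting; the function name is illustrative):

```python
def gauss_jordan_inverse(A):
    # Row-reduce the augmented block [A | I] until it becomes [I | A^{-1}]
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # partial pivoting: largest |entry| at or below the diagonal
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [m / p for m in M[col]]          # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]                     # clear column above AND below
                M[r] = [m - f * q for m, q in zip(M[r], M[col])]
    return [row[n:] for row in M]                 # right block is A^{-1}

A = [[2.0, 1.0], [5.0, 3.0]]    # det = 1, so the inverse has integer entries
print(gauss_jordan_inverse(A))  # approximately [[3, -1], [-5, 2]]
```

Unlike plain elimination, Gauss-Jordan clears entries above the pivot as well, so no back substitution is needed afterwards.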

GramSchmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column qj of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
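The column-by-column construction can be sketched as classical Gram-Schmidt in Python (the function name and the example columns are illustrative):

```python
def gram_schmidt(cols):
    # Classical Gram-Schmidt on a list of independent columns a_1, ..., a_n.
    # Returns orthonormal columns Q and upper triangular R with A = QR.
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    n = len(cols)
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j, a in enumerate(cols):
        v = a[:]
        for i, q in enumerate(Q):
            R[i][j] = dot(q, a)      # component of a_j along the earlier q_i
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]
        R[j][j] = dot(v, v) ** 0.5   # length of what is left; > 0 by convention
        Q.append([vk / R[j][j] for vk in v])
    return Q, R

cols = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]]  # two independent columns of A
Q, R = gram_schmidt(cols)
```

Because q_j is built only from a_1, ..., a_j, the recorded coefficients R[i][j] vanish for i > j, which is exactly why R comes out upper triangular.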

Jordan form J = M⁻¹AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J1, ..., Js). The block Jk is λkIk + Nk where Nk has 1's on diagonal 1 (the superdiagonal). Each block has one eigenvalue λk and one eigenvector.

Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).

Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.

Outer product uvᵀ.
= column times row = rank one matrix.

Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: xᵀAx > 0 unless x = 0. Then A = LDLᵀ with diag(D) > 0.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P² = P = Pᵀ, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S then P = A(AᵀA)⁻¹Aᵀ.
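When A has a single column a, the formula P = A(AᵀA)⁻¹Aᵀ reduces to the rank-one matrix P = aaᵀ/(aᵀa). A minimal sketch of that special case (illustrative names, not from the glossary):

```python
def projection_onto_column(a):
    # Rank-one special case of P = A (A^T A)^{-1} A^T for one column a:
    # P = a a^T / (a^T a) projects any b onto the line through a
    aa = sum(x * x for x in a)
    return [[x * y / aa for y in a] for x in a]

a = [1.0, 2.0, 2.0]
P = projection_onto_column(a)
b = [3.0, 0.0, 0.0]
p = [sum(P[i][j] * b[j] for j in range(3)) for i in range(3)]  # closest point on the line
e = [bi - pi for bi, pi in zip(b, p)]                          # error, perpendicular to a
print(p, e)
```

The defining properties are easy to check numerically: applying P twice gives the same point (P² = P), and the error e has zero inner product with a.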

Similar matrices A and B.
Every B = M⁻¹AM has the same eigenvalues as A.

Singular Value Decomposition
(SVD) A = UΣVᵀ = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(Aᵀ), with Avi = σiui and singular values σi > 0. The last columns are orthonormal bases of the nullspaces.

Special solutions to As = O.
One free variable is si = 1, other free variables = 0.

Volume of box.
The rows (or the columns) of A generate a box with volume |det(A)|.