 1.3.1: a. Use three-digit chopping arithmetic to compute the sum ∑_{i=1}^{10} (1/...
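The three-digit chopping arithmetic of Exercise 1.3.1 can be simulated in software. A minimal sketch, assuming chopping to three significant decimal digits (the function name chop3 and the decimal-scaling approach are mine, not from the text):

```python
import math

def chop3(x):
    """Chop x to three significant decimal digits (truncate, never round)."""
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x)))   # exponent of the leading digit
    scale = 10.0 ** (e - 2)              # keep exactly 3 significant digits
    return math.trunc(x / scale) * scale
```

Chopping 1/3 gives 0.333 and chopping 123.6 gives 123.0; applying chop3 after every operation of a sum reproduces the rounding-error behavior the exercise studies.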
 1.3.2: The number e is defined by e = ∑_{n=0}^{∞} (1/n!), where n! = n(n−1)···2·1 for...
 1.3.3: The Maclaurin series for the arctangent function converges for −1 < ...
 1.3.4: Exercise 3 details a rather inefficient means of obtaining an appro...
 1.3.5: Another formula for computing π can be deduced from the identity π/4 =...
 1.3.6: Find the rates of convergence of the following sequences as n → ∞. a. ...
 1.3.7: Find the rates of convergence of the following functions as h → 0. a. ...
 1.3.8: a. How many multiplications and additions are required to determine...
 1.3.9: Let P(x) = a_n x^n + a_{n-1} x^{n-1} + ··· + a_1 x + a_0 be a polynomial, and let x0 be...
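Exercise 1.3.9 concerns evaluating P(x0) efficiently; nested (Horner) multiplication does it with n multiplications and n additions, which is also the operation count asked about in Exercise 1.3.8. A minimal sketch (the function name is mine):

```python
def horner(coeffs, x0):
    """Evaluate P(x0) = a_n x0^n + ... + a_1 x0 + a_0 by nested multiplication.
    coeffs lists [a_n, a_{n-1}, ..., a_0]; n multiplications, n additions."""
    result = 0.0
    for a in coeffs:
        result = result * x0 + a
    return result
```

For example, horner([2, 0, 3, 1], 2.0) evaluates 2x^3 + 3x + 1 at x = 2, giving 23.0.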
 1.3.10: Equations (1.2) and (1.3) in Section 1.2 give alternative formulas ...
 1.3.11: Construct an algorithm that has as input an integer n ≥ 1, numbers x0...
 1.3.12: Assume that (1 − 2x)/(1 − x + x^2) + (2x − 4x^3)/(1 − x^2 + x^4) + (4x^3 − 8x^7)/(1 − x^4 + x^8) + ··· =...
 1.3.13: a. Suppose that 0 < q < p and that α_n = α + O(n^-p). Show that α_n = α + O(n...
 1.3.14: a. Suppose that 0 < q < p and that F(h) = L + O(h^p). Show that F(h...
 1.3.15: Suppose that as x approaches zero, F1(x) = L1 + O(x) and F2(x) = L2...
 1.3.16: The sequence {F_n} described by F_0 = 1, F_1 = 1, and F_{n+2} = F_n + F_{n+1}, ...
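The recurrence in Exercise 1.3.16 iterates directly. A minimal sketch using the exercise's indexing F_0 = F_1 = 1 (the function name is mine):

```python
def fib(n):
    """Return F_n with F_0 = 1, F_1 = 1, F_{n+2} = F_n + F_{n+1}."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The successive ratios F_{n+1}/F_n converge to the golden ratio (1 + √5)/2, the quantity that appears in the convergence questions of this chapter.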
 1.3.17: The Fibonacci sequence also satisfies the equation F_n ≡ F̃_n = (1/√5)((1 +...
 1.3.18: The harmonic series 1 + 1/2 + 1/3 + 1/4 + ··· diverges, but the sequenc...
Solutions for Chapter 1.3: Algorithms and Convergence
Full solutions for Numerical Analysis, 9th Edition
ISBN: 9780538733519
Chapter 1.3: Algorithms and Convergence includes 18 full step-by-step solutions.

Change of basis matrix M.
The old basis vectors v_j are combinations Σ m_ij w_i of the new basis vectors. The coordinates of c_1 v_1 + ··· + c_n v_n = d_1 w_1 + ··· + d_n w_n are related by d = M c. (For n = 2 set v_1 = m_11 w_1 + m_21 w_2, v_2 = m_12 w_1 + m_22 w_2.)

Cofactor Cij.
Remove row i and column j; multiply the determinant by (−1)^(i+j).
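The cofactor rule gives a direct (if exponentially slow) determinant. A minimal recursive sketch for small matrices stored as lists of rows (the function name is mine):

```python
def det(A):
    """Determinant by cofactor expansion along row 0:
    det A = sum over j of (-1)**j * A[0][j] * det(minor with row 0, column j removed)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))
```

Production codes use LU elimination instead, since cofactor expansion costs O(n!) operations.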

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2) x^T A x − x^T b over growing Krylov subspaces.
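The minimization view above can be sketched in a few lines of plain Python with list-of-lists matrices. All names are mine and this is only an illustration of the iteration; a production code would use a library routine with preconditioning:

```python
def conjugate_gradient(A, b, iters=None):
    """Minimize (1/2) x^T A x - x^T b for symmetric positive definite A.
    A is a list of rows, b a list; returns the approximate solution x."""
    n = len(b)
    iters = iters or n
    x = [0.0] * n
    r = b[:]                          # residual b - Ax with x = 0
    p = r[:]                          # first search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-20:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

In exact arithmetic the iteration reaches the minimizer of the quadratic in at most n steps.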

Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det(B_j) / det(A).
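Cramer's rule translates directly into code. A minimal sketch for small systems (names are mine; the nested det uses the cofactor expansion from the Cofactor entry):

```python
def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_j = det(B_j) / det(A),
    where B_j is A with column j replaced by b."""
    def det(M):
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] *
                   det([row[:j] + row[j + 1:] for row in M[1:]])
                   for j in range(len(M)))
    d = det(A)
    x = []
    for j in range(len(b)):
        Bj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        x.append(det(Bj) / d)
    return x
```

The rule is n+1 determinants, so it is a theoretical tool; elimination is far cheaper for solving.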

Distributive Law
A(B + C) = AB + AC. Add then multiply, or multiply then add.

Ellipse (or ellipsoid) x T Ax = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^−1 y||^2 = y^T (A A^T)^−1 y = 1 displayed by eigshow; axis lengths σ_i.)

Factorization
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers l_ij (and l_ii = 1) brings U back to A.
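The multipliers l_ij can be recorded as elimination proceeds. A minimal Doolittle sketch without row exchanges (names are mine; real codes pivot for stability):

```python
def lu_factor(A):
    """Doolittle LU (no row exchanges): returns L with unit diagonal and
    multipliers l_ij below it, and upper triangular U, with A = L U."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]       # multiplier l_ik
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]  # eliminate below the pivot
    return L, U
```

Multiplying L times U reproduces A, which is the "brings U back to A" statement above.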

Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicular, from Ax = 0, with dimensions r and n − r). Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.

Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = c T(v) + d T(w). Examples: matrix multiplication Av, differentiation and integration in function space.

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(A) = det(A − λI) if no eigenvalues are repeated; always m(A) divides p(A).

Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.

Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^−1. Preserves length and angles: ||Qx|| = ||x|| and (Qx)^T (Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.

Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^−1 and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
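The expansion v = Σ (v^T q_j) q_j is easy to verify numerically. A minimal sketch with basis vectors stored as lists (names are mine):

```python
def expand_in_basis(v, Q):
    """Reconstruct v = sum over j of (v^T q_j) q_j from orthonormal vectors q_j.
    Q is a list of basis vectors, each a list of floats."""
    n = len(v)
    out = [0.0] * n
    for q in Q:
        c = sum(v[i] * q[i] for i in range(n))   # coefficient v^T q_j
        out = [out[i] + c * q[i] for i in range(n)]
    return out
```

With a full orthonormal basis the reconstruction returns v itself, up to roundoff.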

Outer product uv^T.
Column times row = rank-one matrix.

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Rotation matrix
R = [c −s; s c] rotates the plane by θ, and R^−1 = R^T rotates back by −θ. Eigenvalues are e^{iθ} and e^{−iθ}; eigenvectors are (1, ±i). Here c, s = cos θ, sin θ.
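The rotation entry is easy to check numerically. A minimal sketch (function names are mine):

```python
import math

def rotation(theta):
    """R = [[cos t, -sin t], [sin t, cos t]] rotates the plane by theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(M, v):
    """Multiply a 2x2 matrix into a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]
```

Rotating (1, 0) by π/2 gives (0, 1), and applying the transpose (which equals the inverse) rotates it back.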

Simplex method for linear programming.
The minimum-cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Transpose matrix AT.
Entries (A^T)_ij = A_ji. A^T is n by m, A^T A is square, symmetric, and positive semidefinite. The transposes of AB and A^−1 are B^T A^T and (A^T)^−1.

Tridiagonal matrix T: t_ij = 0 if |i − j| > 1.
T^−1 has rank 1 above and below the diagonal.