 2.5.1: The following sequences are linearly convergent. Generate the first...
 2.5.2: Consider the function f(x) = e^(6x) + 3(ln 2)^2 e^(2x) − (ln 8)e^(4x) − (ln 2)^3. Use ...
 2.5.3: Let g(x) = cos(x − 1) and p_0^(0) = 2. Use Steffensen's method to find ...
 2.5.4: Let g(x) = 1 + (sin x)^2 and p_0^(0) = 1. Use Steffensen's method to f...
 2.5.5: Steffensen's method is applied to a function g(x) using p_0^(0) = 1 a...
 2.5.6: Steffensen's method is applied to a function g(x) using p_0^(0) = 1 a...
 2.5.7: Use Steffensen's method to find, to an accuracy of 10^-4, the root of ...
 2.5.8: Use Steffensen's method to find, to an accuracy of 10^-4, the root of ...
 2.5.9: Use Steffensen's method with p_0 = 2 to compute an approximation to 3...
 2.5.10: Use Steffensen's method with p_0 = 3 to compute an approximation to 3...
 2.5.11: Use Steffensen's method to approximate the solutions of the followin...
 2.5.12: Use Steffensen's method to approximate the solutions of the followin...
 2.5.13: The following sequences converge to 0. Use Aitken's Δ² method to gene...
 2.5.14: A sequence {p_n} is said to be superlinearly convergent to p if lim...
 2.5.15: Suppose that {p_n} is superlinearly convergent to p. Show that lim ...
 2.5.16: Prove Theorem 2.14. [Hint: Let δ_n = (p_{n+1} − p)/(p_n − p), and show tha...
 2.5.17: Let P_n(x) be the nth Taylor polynomial for f(x) = e^x expanded abou...
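The exercises above repeatedly apply Steffensen's method, which accelerates fixed-point iteration with Aitken's Δ² extrapolation. As a rough illustration (not the textbook's own code), a minimal Python sketch, applied to g(x) = cos(x − 1) with p_0 = 2 in the spirit of Exercise 2.5.3:

```python
import math

def steffensen(g, p0, tol=1e-4, max_iter=50):
    """Steffensen's method: two fixed-point steps, then Aitken's Delta^2."""
    p = p0
    for _ in range(max_iter):
        p1 = g(p)                        # first fixed-point step
        p2 = g(p1)                       # second fixed-point step
        denom = p2 - 2.0 * p1 + p        # Delta^2 denominator
        if denom == 0.0:                 # iterates already (numerically) equal
            return p2
        p_new = p - (p1 - p) ** 2 / denom
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

# Fixed point of g(x) = cos(x - 1) is x = 1, since cos(0) = 1
root = steffensen(lambda x: math.cos(x - 1.0), 2.0)
```

Each Steffensen step costs two evaluations of g but typically restores quadratic convergence even when plain fixed-point iteration converges only linearly.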
Solutions for Chapter 2.5: Accelerating Convergence
Full solutions for Numerical Analysis, 9th Edition
ISBN: 9780538733519
Chapter 2.5: Accelerating Convergence includes 17 full step-by-step solutions.

Back substitution.
Upper triangular systems are solved in reverse order, x_n back to x_1.
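As a sketch of the idea, back substitution on a small upper triangular system (the example matrix is illustrative):

```python
def back_substitution(U, b):
    """Solve Ux = b for upper triangular U, working from x_n back to x_1."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # Subtract the already-known components, then divide by the pivot
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# x2 = 4/2 = 2, then x1 = (3 - 1*2)/1 = 1
x = back_substitution([[1.0, 1.0], [0.0, 2.0]], [3.0, 4.0])
```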

Big formula for n by n determinants.
Det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A: rows in order 1, ..., n and column order given by a permutation P. Each of the n! P's has a + or − sign.
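The big formula can be coded directly as a loop over all n! permutations; a minimal sketch (exponential cost, for illustration only, never for practical use):

```python
from itertools import permutations

def sign(perm):
    """Parity of a permutation via its inversion count: +1 even, -1 odd."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_big_formula(A):
    """Sum over column orders: sign(P) * a[0][p0] * ... * a[n-1][p(n-1)]."""
    n = len(A)
    total = 0.0
    for perm in permutations(range(n)):
        term = float(sign(perm))
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total

d = det_big_formula([[1.0, 2.0], [3.0, 4.0]])  # 1*4 - 2*3 = -2
```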

Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).

Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).

Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓ_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
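A minimal sketch of elimination that records the multipliers ℓ_ij in L, assuming no row exchanges are needed (all pivots nonzero):

```python
def lu_no_exchanges(A):
    """Elimination A -> U, storing multipliers l_ij in L, so that A = L U.
    Assumes every pivot is nonzero (no row exchanges needed)."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            lij = U[i][j] / U[j][j]      # (entry to eliminate) / (jth pivot)
            L[i][j] = lij
            for k in range(j, n):        # subtract lij * (pivot row j)
                U[i][k] -= lij * U[j][k]
    return L, U

L, U = lu_no_exchanges([[2.0, 1.0], [4.0, 5.0]])
# L = [[1, 0], [2, 1]], U = [[2, 1], [0, 3]]
```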

Exponential e^{At} = I + At + (At)^2/2! + ...
has derivative Ae^{At}; e^{At}u(0) solves u' = Au.
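The truncated Taylor series gives a rough way to approximate e^{At}; a sketch using a fixed number of terms (for illustration, not a robust production method):

```python
def expm_series(A, t, terms=30):
    """Approximate e^{At} by the truncated series I + At + (At)^2/2! + ..."""
    n = len(A)
    At = [[a * t for a in row] for row in A]
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]   # holds (At)^k / k!, starting at I
    for k in range(1, terms):
        # power <- power @ At / k, so power becomes (At)^k / k!
        power = [[sum(power[i][m] * At[m][j] for m in range(n)) / k
                  for j in range(n)] for i in range(n)]
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j]
    return result

# Diagonal check: A = diag(1, 2) gives e^{At} = diag(e^t, e^{2t})
E = expm_series([[1.0, 0.0], [0.0, 2.0]], 1.0)
```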

Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicular, from Ax = 0), with dimensions r and n − r. Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.

Hermitian matrix A^H = Ā^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.

Hypercube matrix P.
Row n + 1 counts corners, edges, faces, ... of a cube in R^n.

Independent vectors v_1, ..., v_k.
No combination c_1 v_1 + ... + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.

Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.

Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If every m_ij > 0, the columns of M^k approach the steady-state eigenvector s with Ms = s > 0.
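Applying M repeatedly to any probability vector drives it toward the steady state s with Ms = s; a minimal sketch of that power iteration (the example matrix is illustrative):

```python
def markov_steady_state(M, steps=200):
    """Iterate p <- M p for a column-stochastic M toward the lambda = 1 eigenvector."""
    n = len(M)
    p = [1.0 / n] * n                    # any probability vector works as a start
    for _ in range(steps):
        p = [sum(M[i][j] * p[j] for j in range(n)) for i in range(n)]
    return p

# Columns sum to 1; the steady state here is s = (0.6, 0.4), since Ms = s
s = markov_steady_state([[0.8, 0.3], [0.2, 0.7]])
```

The iteration preserves the component sum (each column of M sums to 1), so the limit is automatically a probability vector.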

Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate) / (jth pivot).

Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.

Normal equation A^T A x̂ = A^T b.
Gives the least-squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b − Ax̂) = 0.
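A sketch for a two-column A: form the 2×2 normal system A^T A x̂ = A^T b and solve it by Cramer's rule (the straight-line fitting example is illustrative):

```python
def least_squares_2col(A, b):
    """Least-squares solution of Ax = b for an m x 2 matrix A with
    independent columns, via the normal equations A^T A x = A^T b."""
    m = len(A)
    # Build A^T A (2 x 2) and A^T b (2-vector)
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    # Solve the 2 x 2 system by Cramer's rule
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    x0 = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det
    x1 = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det
    return [x0, x1]

# Fit y = c + d*t through (0,1), (1,2), (2,3): the exact line c = 1, d = 1
x = least_squares_2col([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]], [1.0, 2.0, 3.0])
```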

Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^{-1} and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.

Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = +1 or −1) based on the number of row exchanges to reach I.

Schur complement S = D − CA^{-1}B.
Appears in block elimination on the block matrix [A B; C D].

Singular matrix A.
A square matrix that has no inverse: det(A) = 0.

Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^{-1} is also symmetric.