 2.5.1: The following sequences are linearly convergent. Generate the first...
2.5.2: Consider the function f(x) = e^(6x) + 3(ln 2)^2 e^(2x) - (ln 8)e^(4x) - (l...
2.5.3: Let g(x) = cos(x - 1) and p0^(0) = 2. Use Steffensen's method to find p0^(1)...
2.5.4: Let g(x) = 1 + (sin x)^2 and p0^(0) = 1. Use Steffensen's method to f...
2.5.5: Steffensen's method is applied to a function g(x) using p0^(0) = 1 and ...
2.5.6: Steffensen's method is applied to a function g(x) using p0^(0) = 1 an...
2.5.7: Use Steffensen's method to find, to an accuracy of 10^(-4), the root ...
2.5.8: Use Steffensen's method to find, to an accuracy of 10^(-4), the root ...
 2.5.9: Use Steffensen's method with po = 2 to compute an approximation to ...
 2.5.10: Use Steffensen's method with p0 = 3 to compute an approximation to ...
2.5.11: Use Steffensen's method to approximate the solutions of the followin...
2.5.12: Use Steffensen's method to approximate the solutions of the followin...
2.5.13: The following sequences converge to 0. Use Aitken's Δ² method to genera...
2.5.14: A sequence {p_n} is said to be superlinearly convergent to p if ... |p_n...
2.5.15: Suppose that {p_n} is superlinearly convergent to p. Show that ... |p_n...
2.5.16: Prove Theorem 2.14. [Hint: Let δ_n = (p_{n+1} - p)/(p_n - p) - λ and show that lim...
2.5.17: Let Pn(x) be the nth Taylor polynomial for f(x) = e^x expanded abou...
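
The exercises above all revolve around Aitken's Δ² formula and Steffensen's method. A minimal sketch of both in Python, using the fixed-point function g(x) = cos(x - 1) from Exercise 2.5.3 (the tolerance and iteration cap are illustrative choices, not from the text):

```python
import math

def aitken(seq):
    """Aitken's Δ² acceleration of a list of iterates p0, p1, p2, ...:
    p̂_n = p_n - (p_{n+1} - p_n)^2 / (p_{n+2} - 2 p_{n+1} + p_n)."""
    return [p0 - (p1 - p0) ** 2 / (p2 - 2 * p1 + p0)
            for p0, p1, p2 in zip(seq, seq[1:], seq[2:])]

def steffensen(g, p0, tol=1e-8, max_iter=50):
    """Fixed point of g: each pass takes two fixed-point steps from p0
    and replaces p0 with the Aitken-accelerated value."""
    for _ in range(max_iter):
        p1 = g(p0)
        p2 = g(p1)
        denom = p2 - 2 * p1 + p0
        if denom == 0:           # Δ² vanished: iterates have converged
            return p2
        p = p0 - (p1 - p0) ** 2 / denom
        if abs(p - p0) < tol:
            return p
        p0 = p
    return p0

# g(x) = cos(x - 1) has the exact fixed point x = 1, since cos(0) = 1.
root = steffensen(lambda x: math.cos(x - 1), 2.0)
```

For a linearly convergent geometric sequence such as p_n = (1/2)^n, Aitken's formula recovers the limit 0 exactly, which is a handy sanity check.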
Solutions for Chapter 2.5: Accelerating Convergence
Full solutions for Numerical Analysis  10th Edition
ISBN: 9781305253667
Chapter 2.5: Accelerating Convergence includes 17 full step-by-step solutions.

Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
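
A quick illustration of this definition in Python; the three-node path graph is a made-up example:

```python
def adjacency_matrix(n, edges, undirected=False):
    """a[i][j] = 1 when there is an edge from node i to node j, else 0."""
    a = [[0] * n for _ in range(n)]
    for i, j in edges:
        a[i][j] = 1
        if undirected:          # edges go both ways, so A = A^T
            a[j][i] = 1
    return a

# Undirected path 0 - 1 - 2: the matrix comes out symmetric.
A = adjacency_matrix(3, [(0, 1), (1, 2)], undirected=True)
```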

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.

Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^(-1) A S = Λ = eigenvalue matrix.

Diagonalization
Λ = S^(-1) A S. Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^(-1).
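
A numerical check of the diagonalization, assuming NumPy is available; the 2x2 matrix is an arbitrary example with distinct eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])               # eigenvalues 5 and 2 (distinct)
eigvals, S = np.linalg.eig(A)            # columns of S = eigenvectors of A
Lam = np.linalg.inv(S) @ A @ S           # Λ = S^(-1) A S
assert np.allclose(Lam, np.diag(eigvals))
# Powers come for free: A^k = S Λ^k S^(-1)
assert np.allclose(np.linalg.matrix_power(A, 3),
                   S @ np.diag(eigvals ** 3) @ np.linalg.inv(S))
```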

Dot product = Inner product x^T y = x_1 y_1 + ... + x_n y_n.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)_ij = (row i of A)^T (column j of B).

GramSchmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
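
A sketch of classical Gram-Schmidt producing A = QR, assuming NumPy; the 3x2 matrix is an arbitrary full-rank example:

```python
import numpy as np

def gram_schmidt_qr(A):
    """Return Q (orthonormal columns) and R (upper triangular,
    positive diagonal) with A = Q R, via classical Gram-Schmidt."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component along earlier q_i
            v -= R[i, j] * Q[:, i]        # subtract it off
        R[j, j] = np.linalg.norm(v)       # convention: diag(R) > 0
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = gram_schmidt_qr(A)
```

(Modified Gram-Schmidt or Householder reflections are preferred in floating point; this is the textbook version.)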

Hermitian matrix A^H = Ā^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.

Iterative method.
A sequence of steps intended to approach the desired solution.

Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = c T(v) + d T(w). Examples: matrix multiplication Av, differentiation and integration in function space.

Multiplication Ax
= x_1 (column 1) + ... + x_n (column n) = combination of columns.
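
A one-line NumPy check of this column picture (the matrix and vector are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([2.0, -1.0])
# Ax = x_1 (column 1) + x_2 (column 2)
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.allclose(A @ x, combo)
```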

Network.
A directed graph that has constants c_1, ..., c_m associated with the edges.

Normal equation A^T A x = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b - Ax) = 0.
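
A small least-squares fit via the normal equations, assuming NumPy; the four data points are invented for illustration:

```python
import numpy as np

# Fit y ≈ c0 + c1*t through four points (4 equations, 2 unknowns).
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 2.9, 5.1, 7.0])
A = np.column_stack([np.ones_like(t), t])   # independent columns: full rank
x = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations A^T A x = A^T b
# The residual b - Ax is orthogonal to every column of A:
assert np.allclose(A.T @ (b - A @ x), 0.0)
```

(In practice `np.linalg.lstsq`, which uses an orthogonal factorization, is numerically safer than forming A^T A explicitly.)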

Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^(-1). Preserves length and angles: ||Qx|| = ||x|| and (Qx)^T (Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.

Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0 1] for rand and standard normal distribution for randn.

Rayleigh quotient q(x) = x^T A x / x^T x for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
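
A numerical check of these bounds, assuming NumPy; the symmetric 2x2 matrix and the random sample size are arbitrary:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, eigenvalues 1 and 3

def rayleigh(A, x):
    return (x @ A @ x) / (x @ x)

eigvals, V = np.linalg.eigh(A)      # ascending eigenvalues, orthonormal V
rng = np.random.default_rng(0)
for _ in range(100):                # q(x) stays inside [λ_min, λ_max]
    x = rng.standard_normal(2)
    q = rayleigh(A, x)
    assert eigvals[0] - 1e-12 <= q <= eigvals[-1] + 1e-12
# Extremes are attained exactly at the eigenvectors:
assert np.isclose(rayleigh(A, V[:, 0]), eigvals[0])
assert np.isclose(rayleigh(A, V[:, -1]), eigvals[-1])
```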

Rotation matrix
R = [c -s; s c] rotates the plane by θ and R^(-1) = R^T rotates back by -θ. Eigenvalues are e^(iθ) and e^(-iθ), eigenvectors are (1, ±i). c, s = cos θ, sin θ.
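
These properties are easy to verify numerically, assuming NumPy; θ = π/3 is an arbitrary angle:

```python
import numpy as np

theta = np.pi / 3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])
# R^(-1) = R^T: rotating forward then back is the identity.
assert np.allclose(R.T @ R, np.eye(2))
# Eigenvalues are e^(iθ) and e^(-iθ).
w = np.linalg.eigvals(R)
expected = np.array([np.exp(1j * theta), np.exp(-1j * theta)])
assert np.allclose(np.sort(w), np.sort(expected))
```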

Schwarz inequality
|v·w| ≤ ||v|| ||w||. Then |v^T A w|² ≤ (v^T A v)(w^T A w) for positive definite A.

Similar matrices A and B.
Every B = M^(-1) A M has the same eigenvalues as A.

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Vector v in R^n.
Sequence of n real numbers v = (v_1, ..., v_n) = point in R^n.