- 3.2.1: Use Neville's method to obtain the approximations for Lagrange inte...
- 3.2.2: Use Neville's method to obtain the approximations for Lagrange inte...
- 3.2.3: Use Neville's method to approximate √3 with the following function...
- 3.2.4: Let P3(x) be the interpolating polynomial for the data (0, 0), (0....
- 3.2.5: Neville's method is used to approximate f(0.4), giving the followin...
- 3.2.6: Neville's method is used to approximate f(0.5), giving the followin...
- 3.2.7: Suppose xj = j, for j = 0, 1, 2, 3, and it is known that P0,1(x) = ...
- 3.2.8: Suppose xj = j, for j = 0, 1, 2, 3, and it is known that P0,1(x) = ...
- 3.2.9: Neville's Algorithm is used to approximate f(0) using f(2), f(1), f...
- 3.2.10: Neville's Algorithm is used to approximate f(0) using f(2), f(1), f...
- 3.2.11: Construct a sequence of interpolating values y to f(1 + √10), where...
- 3.2.12: Use iterated inverse interpolation to find an approximation to the ...
- 3.2.13: Construct an algorithm that can be used for inverse interpolation.
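All of the exercises above revolve around the same recursive tableau. As a reference point, here is a minimal pure-Python sketch of Neville's method (the function name and test data are my own, not from the text):

```python
def neville(xs, ys, x):
    """Neville's method: build the tableau Q with Q[i][j] = P_{i-j,...,i}(x).

    Recurrence:
      Q[i][j] = ((x - xs[i-j]) * Q[i][j-1] - (x - xs[i]) * Q[i-1][j-1])
                / (xs[i] - xs[i-j])
    Q[n-1][n-1] is the full interpolating polynomial evaluated at x.
    """
    n = len(xs)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][0] = ys[i]          # column 0 holds the data values
    for i in range(1, n):
        for j in range(1, i + 1):
            Q[i][j] = ((x - xs[i - j]) * Q[i][j - 1]
                       - (x - xs[i]) * Q[i - 1][j - 1]) / (xs[i] - xs[i - j])
    return Q

# Interpolating f(x) = x**2 from three nodes recovers f(1.5) = 2.25 exactly.
Q = neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```

The successive diagonal entries Q[i][i] are the higher-and-higher-degree approximations the exercises ask for.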
Solutions for Chapter 3.2: Data Approximation and Neville's Method
Full solutions for Numerical Analysis | 10th Edition
Circulant matrix C.
Constant diagonals wrap around as in cyclic shift S. Every circulant is c0I + c1S + ... + c_{n-1}S^{n-1}. Cx = convolution c * x. Eigenvectors in F.
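The identity Cx = c * x (cyclic convolution) is easy to check numerically; a small pure-Python sketch, with helper names of my own choosing:

```python
def circulant(c):
    """Circulant matrix with first column c: C[i][j] = c[(i - j) mod n],
    i.e. C = c0*I + c1*S + ... + c_{n-1}*S^{n-1}."""
    n = len(c)
    return [[c[(i - j) % n] for j in range(n)] for i in range(n)]

def cyclic_convolve(c, x):
    """Cyclic convolution: (c * x)_i = sum_j c[(i - j) mod n] * x[j]."""
    n = len(c)
    return [sum(c[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]

c, x = [1, 2, 3, 4], [1, 0, 0, 1]
Cx = [sum(row[j] * x[j] for j in range(4)) for row in circulant(c)]
# Cx and cyclic_convolve(c, x) agree entry by entry.
```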
Cramer's Rule for Ax = b.
Bj has b replacing column j of A; xj = det Bj / det A.
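A direct 3-by-3 transcription of the rule (my own helper names, with a hand-picked diagonal system as the example):

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    """Cramer's Rule: x_j = det(B_j) / det(A), where B_j is A with
    column j replaced by the right-hand side b."""
    dA = det3(A)
    xs = []
    for j in range(3):
        Bj = [row[:] for row in A]
        for i in range(3):
            Bj[i][j] = b[i]
        xs.append(det3(Bj) / dA)
    return xs

x = cramer3([[2.0, 0, 0], [0, 3.0, 0], [0, 0, 4.0]], [2.0, 6.0, 8.0])
# x is [1.0, 2.0, 2.0] for this diagonal system.
```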
Cyclic shift S.
Permutation with s21 = 1, s32 = 1, ..., finally s1n = 1. Its eigenvalues are the nth roots e^{2πik/n} of 1; eigenvectors are columns of the Fourier matrix F.
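Since s_{i,i-1} = 1, applying the shift S just rotates the entries of a vector. A sketch (assuming 0-based indexing, n = 4, and my own variable names) verifying that a Fourier column is an eigenvector:

```python
import cmath

n = 4
w = cmath.exp(2j * cmath.pi / n)      # primitive nth root of unity
k = 1
f = [w ** (j * k) for j in range(n)]  # column k of the Fourier matrix F

# s_{i,i-1} = 1 means (S f)_i = f_{i-1 (mod n)}: a cyclic rotation.
Sf = [f[(i - 1) % n] for i in range(n)]

# S f = lam * f with eigenvalue lam = w^{-k}, itself an nth root of 1.
lam = w ** (-k)
```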
Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^{-1}AS = Λ = eigenvalue matrix.
Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^{-1}].
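A compact pure-Python sketch of that row reduction (my own function name; it assumes A is invertible):

```python
def invert(A):
    """Row-reduce the augmented matrix [A | I] to [I | A^{-1}],
    with partial pivoting. Assumes A is square and invertible."""
    n = len(A)
    # Augment each row of A with the matching row of the identity.
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        # Swap up the row with the largest pivot in this column.
        pivot_row = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot_row] = M[pivot_row], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]          # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0:       # clear above AND below
                factor = M[r][col]
                M[r] = [v - factor * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]                 # right half is A^{-1}

Ainv = invert([[4.0, 7.0], [2.0, 6.0]])
# det = 10, so the inverse is [[0.6, -0.7], [-0.2, 0.4]].
```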
Hermitian matrix A^H = Ā^T = A.
Complex analog aji = āij of a symmetric matrix.
Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j .
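A small constructor for that matrix (my own naming; 0-based node numbers), using a directed triangle as the example:

```python
def incidence(edges, n_nodes):
    """m-by-n edge-node incidence matrix: the edge (i, j) puts
    -1 in column i and +1 in column j of its row."""
    A = [[0] * n_nodes for _ in edges]
    for row, (i, j) in enumerate(edges):
        A[row][i] = -1
        A[row][j] = 1
    return A

# Directed edges 0->1, 1->2, 0->2 on three nodes.
A = incidence([(0, 1), (1, 2), (0, 2)], 3)
```

Each row sums to zero (one -1 and one +1), which is why the all-ones vector is always in the nullspace of A^T... of A, acting on node potentials.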
Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^{j-1}b. Numerical methods approximate A^{-1}b by xj with residual b - Axj in this subspace. A good basis for Kj requires only multiplication by A at each step.
|A^{-1}| = 1/|A| and |A^T| = |A|.
The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n - 1, volume of box = |det(A)|.
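The cofactor formula translates directly into a short recursion; this sketch (my own function name) also checks |A^T| = |A| on a sample matrix:

```python
def det(M):
    """Cofactor expansion along row 0: an n-by-n determinant becomes
    n signed determinants of size n-1 (the n! 'big formula' unrolled)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
At = [list(col) for col in zip(*A)]   # transpose
# det(A) = 2*(6-1) - 1*(2-0) + 0 = 8, and det(At) matches it.
```

The recursion costs O(n!), so it is only a definition-checker; elimination (product of pivots) is the practical route.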
Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b - Ax̂ is orthogonal to all columns of A.
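For the classic straight-line fit y ≈ c + d x, the normal equations are 2-by-2 and can be solved in closed form; a sketch with my own names and sample points, including the orthogonality check on the residual:

```python
def lstsq_line(xs, ys):
    """Solve the 2x2 normal equations A^T A [c, d] = A^T b for the fit
    y ≈ c + d*x, where A has columns (1, ..., 1) and (x_1, ..., x_n)."""
    n = len(xs)
    Sx, Sxx = sum(xs), sum(x * x for x in xs)
    Sy, Sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
    det = n * Sxx - Sx * Sx            # det(A^T A)
    c = (Sxx * Sy - Sx * Sxy) / det
    d = (n * Sxy - Sx * Sy) / det
    return c, d

xs, ys = [0.0, 1.0, 2.0], [1.0, 1.0, 3.0]
c, d = lstsq_line(xs, ys)
e = [y - (c + d * x) for x, y in zip(xs, ys)]  # residual e = b - A x̂
# e is orthogonal to both columns of A:
dot_ones = sum(e)                               # vs. the all-ones column
dot_x = sum(x * ei for x, ei in zip(xs, e))     # vs. the x column
```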
Network.
A directed graph that has constants c1, ..., cm associated with the edges.
Pascal matrix Ps = pascal(n) = the symmetric matrix with binomial entries (i+j-2 choose i-1). Ps = PL PU all contain Pascal's triangle with det = 1 (see Pascal in the index).
Rank r(A)
= number of pivots = dimension of column space = dimension of row space.
Reflection matrix (Householder) Q = I - 2uu^T.
Unit vector u is reflected to Qu = -u. All x in the plane mirror u^T x = 0 have Qx = x. Notice Q^T = Q^{-1} = Q.
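Both facts can be checked without ever forming Q, since Qx = x - 2(u·x)u; a sketch with my own names:

```python
def householder_apply(u, x):
    """Apply Q = I - 2 u u^T to x, assuming u is a unit vector:
    Qx = x - 2 (u . x) u."""
    ux = sum(a * b for a, b in zip(u, x))
    return [xi - 2 * ux * ui for ui, xi in zip(u, x)]

u = [1.0, 0.0, 0.0]            # unit vector
Qu = householder_apply(u, u)   # the normal direction flips: Qu = -u
x = [0.0, 2.0, 3.0]            # in the mirror plane, u^T x = 0
Qx = householder_apply(u, x)   # vectors in the plane are unchanged
```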
Schur complement S = D - CA^{-1}B.
Appears in block elimination on [A B; C D].
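With 1-by-1 blocks (plain scalars, my own sample values) the claim is easy to see: one elimination step leaves exactly S in the (2,2) position.

```python
# Block elimination on [[A, B], [C, D]] with scalar blocks:
# subtracting (C / A) times row 1 from row 2 leaves S = D - C A^{-1} B.
A, B, C, D = 4.0, 2.0, 6.0, 5.0
S = D - C * (1.0 / A) * B       # Schur complement, computed directly

# The same elimination carried out explicitly on the 2x2 matrix:
M = [[A, B], [C, D]]
mult = M[1][0] / M[0][0]
M[1] = [M[1][0] - mult * M[0][0], M[1][1] - mult * M[0][1]]
# M[1] is now [0, S]: the pivot that elimination produces is S.
```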
Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
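The identity Tr AB = Tr BA holds even when AB and BA are different matrices; a quick check with my own sample matrices:

```python
def matmul(X, Y):
    """Plain triple-loop matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def trace(M):
    """Sum of diagonal entries."""
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]
# AB = [[10, 5], [20, 11]] and BA = [[3, 4], [11, 18]] differ,
# but both traces equal 21.
```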
Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms ||A + B|| ≤ ||A|| + ||B||.
Unitary matrix U^H = Ū^T = U^{-1}.
Orthonormal columns (complex analog of Q).
Vector addition v + w = (v1 + w1, ..., vn + wn) = diagonal of parallelogram.