- 3.1.1: For the given functions f (x), let x0 = 0, x1 = 0.6, and x2 = 0.9. ...
- 3.1.2: For the given functions f (x), let x0 = 1, x1 = 1.25, and x2 = 1.6....
- 3.1.3: Use Theorem 3.3 to find an error bound for the approximations in Ex...
- 3.1.4: Use Theorem 3.3 to find an error bound for the approximations in Ex...
- 3.1.5: Use appropriate Lagrange interpolating polynomials of degrees one, ...
- 3.1.6: Use appropriate Lagrange interpolating polynomials of degrees one, ...
- 3.1.7: The data for Exercise 5 were generated using the following function...
- 3.1.8: The data for Exercise 6 were generated using the following function...
- 3.1.9: Let P3(x) be the interpolating polynomial for the data (0, 0), (0.5...
- 3.1.10: Let f (x) = x x2 and P2(x) be the interpolation polynomial on x0 = ...
- 3.1.11: Use the following values and four-digit rounding arithmetic to cons...
- 3.1.12: Use the Lagrange interpolating polynomial of degree three or less a...
- 3.1.13: Construct the Lagrange interpolating polynomials for the following ...
- 3.1.14: Let f (x) = e^x, for 0 ≤ x ≤ 2. a. Approximate f (0.25) using linear in...
- 3.1.15: Repeat Exercise 11 using Maple with Digits set to 10.
- 3.1.16: Repeat Exercise 12 using Maple with Digits set to 10.
- 3.1.17: Suppose you need to construct eight-decimal-place tables for the co...
- 3.1.18: a. The introduction to this chapter included a table listing the po...
- 3.1.19: It is suspected that the high amounts of tannin in mature oak leave...
- 3.1.20: In Exercise 26 of Section 1.1 a Maclaurin series was integrated to ...
- 3.1.21: Prove Taylor's Theorem 1.14 by following the procedure in the proof ...
- 3.1.22: Show that max_{x_j ≤ x ≤ x_{j+1}} |g(x)| = h^2/4, where g(x) = (x − jh)(x − (j + 1)h).
- 3.1.23: The Bernstein polynomial of degree n for f ∈ C[0, 1] is given by Bn(x...
Solutions for Chapter 3.1: Interpolation and the Lagrange Polynomial
Full solutions for Numerical Analysis | 9th Edition
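Nearly every exercise above builds on the same construction. A minimal plain-Python sketch of Lagrange interpolation, using the nodes from Exercise 3.1.1 with f(x) = e^x assumed purely for illustration:

```python
import numpy as np

def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # Basis polynomial L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                L *= (x - xj) / (xi - xj)
        total += yi * L
    return total

# Nodes x0 = 0, x1 = 0.6, x2 = 0.9 (as in Exercise 3.1.1); f = exp is an assumption
xs = [0.0, 0.6, 0.9]
ys = [np.exp(x) for x in xs]
approx = lagrange_interpolate(xs, ys, 0.45)
```

The degree-2 polynomial reproduces any quadratic exactly; for f(x) = e^x the Theorem 3.3 error bound max|f'''| / 3! · |(x − x0)(x − x1)(x − x2)| ≈ 0.0125 at x = 0.45.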
Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
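A small sketch of the definition, with an assumed 4-node undirected cycle as the example graph. Entries of A^k count walks of length k:

```python
import numpy as np

# Undirected 4-cycle: edges (0,1), (1,2), (2,3), (3,0)  -- an assumed example graph
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1   # edges go both ways, so A = A^T

# (A^k)[i, j] counts walks of length k from node i to node j;
# the diagonal of A^2 therefore gives each node's degree
walks2 = np.linalg.matrix_power(A, 2)
```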
Cayley-Hamilton Theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.
Companion matrix.
Put c_1, ..., c_n in row n and put n − 1 ones just above the main diagonal. Then det(A − λI) = ±(c_1 + c_2 λ + c_3 λ^2 + ... + c_n λ^(n−1) − λ^n).
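A sketch of the construction; the coefficients below are an assumed example chosen so the characteristic polynomial factors as (λ − 1)(λ − 2)(λ − 3):

```python
import numpy as np

def companion(c):
    """Companion matrix: c_1..c_n in the last row, ones just above the main diagonal."""
    n = len(c)
    A = np.zeros((n, n))
    A[np.arange(n - 1), np.arange(1, n)] = 1.0   # superdiagonal ones
    A[-1, :] = c
    return A

# det(A - lambda*I) = ±(c1 + c2*l + c3*l^2 - l^3); here l^3 - 6l^2 + 11l - 6
c = [6.0, -11.0, 6.0]
A = companion(c)
eigs = np.sort(np.linalg.eigvals(A).real)        # roots of the char. polynomial
```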
Condition number cond(A).
cond(A) = c(A) = ||A|| ||A^−1|| = σ_max/σ_min. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to changes in the input.
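The bound ||δx||/||x|| ≤ cond(A) · ||δb||/||b|| can be checked numerically; the nearly singular matrix below is an assumed example:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])              # nearly singular -> badly conditioned
sigma = np.linalg.svd(A, compute_uv=False)
cond = sigma[0] / sigma[-1]                # sigma_max / sigma_min

b = np.array([2.0, 2.0])
x = np.linalg.solve(A, b)
db = np.array([0.0, 1e-4])                 # tiny change in b...
x2 = np.linalg.solve(A, b + db)            # ...causes a large change in x
rel_x = np.linalg.norm(x2 - x) / np.linalg.norm(x)
rel_b = np.linalg.norm(db) / np.linalg.norm(b)
```

Here cond(A) ≈ 4 · 10^4, so a 0.004% change in b produces roughly a 70% change in x, within the stated bound.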
Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^−1 A S = Λ = eigenvalue matrix.
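A quick check of S^−1 A S = Λ on an assumed 2×2 example with two different eigenvalues (5 and 2):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])               # distinct eigenvalues -> diagonalizable
lam, S = np.linalg.eig(A)                # eigenvectors in the columns of S
Lambda = np.linalg.inv(S) @ A @ S        # S^-1 A S = eigenvalue matrix
assert np.allclose(Lambda, np.diag(lam))
```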
Eigenvalue A and eigenvector x.
Ax = λx with x ≠ 0, so det(A − λI) = 0.
Fast Fourier Transform (FFT).
A factorization of the Fourier matrix F_n into ℓ = log_2 n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^−1 c can be computed with nℓ/2 multiplications. Revolutionary.
Fourier matrix F.
Entries F_jk = e^(2πijk/n) give orthogonal columns: F̄^T F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform y_j = Σ c_k e^(2πijk/n).
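Both properties can be verified directly; note that numpy's `ifft` includes a 1/n factor, so y = Fc equals n times `np.fft.ifft(c)` under this sign convention (an assumption worth checking against your own convention):

```python
import numpy as np

n = 8
idx = np.arange(n)
F = np.exp(2j * np.pi * np.outer(idx, idx) / n)   # F_jk = e^{2*pi*i*j*k/n}

# orthogonal columns: conj(F)^T F = n I
assert np.allclose(F.conj().T @ F, n * np.eye(n))

c = np.arange(n, dtype=float)
y = F @ c                                          # the (inverse) DFT of c
assert np.allclose(y, n * np.fft.ifft(c))          # matches numpy up to the 1/n factor
```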
Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.
Hypercube matrix pl.
Row n + 1 counts corners, edges, faces, ... of a cube in R^n.
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: potential differences (voltage drops) add to zero around any closed loop.
Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^(j−1) b. Numerical methods approximate A^−1 b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
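A sketch of building such a basis with one matrix-vector product per step (a simple Gram-Schmidt / Arnoldi-style loop; the matrix and vector are assumed examples):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Orthonormal basis for K_j(A, b) = span{b, Ab, ..., A^(j-1) b}."""
    n = len(b)
    Q = np.zeros((n, j))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(1, j):
        v = A @ Q[:, k - 1]              # only a multiplication by A is needed
        for i in range(k):               # Gram-Schmidt against earlier vectors
            v -= (Q[:, i] @ v) * Q[:, i]
        Q[:, k] = v / np.linalg.norm(v)
    return Q

A = np.diag([1.0, 2.0, 3.0, 4.0]) + np.eye(4, k=1)   # assumed test matrix
b = np.ones(4)
Q = krylov_basis(A, b, 3)
assert np.allclose(Q.T @ Q, np.eye(3))               # orthonormal columns
```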
Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = c T(v) + d T(w). Examples: matrix multiplication Av, differentiation and integration in function space.
Norm ||A||.
The "ℓ^2 norm" of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x||, ||AB|| ≤ ||A|| ||B||, and ||A + B|| ≤ ||A|| + ||B||. The Frobenius norm has ||A||_F^2 = Σ Σ a_ij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |a_ij|.
Outer product uv T
= column times row = rank one matrix.
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S^⊥. If the columns of A form a basis for S, then P = A(A^T A)^−1 A^T.
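The stated properties checked on an assumed example (projecting b onto the column space of a 3×2 matrix A):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                    # columns = basis for the subspace S
P = A @ np.linalg.inv(A.T @ A) @ A.T          # P = A (A^T A)^-1 A^T
b = np.array([6.0, 0.0, 0.0])
p = P @ b                                     # closest point to b in S
e = b - p                                     # error, perpendicular to S

assert np.allclose(P @ P, P)                  # P^2 = P
assert np.allclose(P, P.T)                    # P = P^T
assert np.allclose(A.T @ e, 0)                # e is orthogonal to every column of A
```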
Projection p = a(a^T b / a^T a) onto the line through a.
P = aa^T / a^T a has rank 1.
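The rank-one case in two lines, with an assumed a and b:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 0.0])
p = a * (a @ b) / (a @ a)            # p = a (a^T b / a^T a)
P = np.outer(a, a) / (a @ a)         # P = a a^T / a^T a, a rank-one matrix
assert np.allclose(P @ b, p)
```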
Rank r (A)
= number of pivots = dimension of column space = dimension of row space.
Singular matrix A.
A square matrix that has no inverse: det(A) = 0.
Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^−1 is also symmetric.