- 4.6.1: Compute the Simpson's rule approximations S(a, b), S(a, (a + b)/2), a...
- 4.6.2: Use Adaptive quadrature to find approximations to within 10^-3 for th...
- 4.6.3: Use Adaptive quadrature to approximate the following integrals to w...
- 4.6.4: Use Adaptive quadrature to approximate the following integrals to w...
- 4.6.5: Use the Composite Simpson's rule with n = 4, 6, 8, ..., until successiv...
- 4.6.6: Sketch the graphs of sin(1/x) and cos(1/x) on [0.1, 2]. Use Adaptiv...
- 4.6.7: The differential equation mu''(t) + ku(t) = F0 cos ωt describes a spri...
- 4.6.8: If the term cu'(t) is added to the left side of the motion equation...
- 4.6.9: Let T(a, b) and T(a, (a + b)/2) + T((a + b)/2, b) be the single and doub...
- 4.6.10: The study of light diffraction at a rectangular aperture involves t...
Solutions for Chapter 4.6: Adaptive Quadrature Methods
Full solutions for Numerical Analysis | 9th Edition
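The exercises above all rest on adaptive quadrature built from Simpson's rule: compare S(a, b) with S(a, (a + b)/2) + S((a + b)/2, b) and subdivide only where the discrepancy is too large. A minimal Python sketch of that idea (the function names are illustrative, not the textbook's Algorithm 4.3):

```python
import math

def simpson(f, a, b):
    """Basic Simpson's rule S(a, b) on a single interval."""
    c = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(c) + f(b))

def adaptive_simpson(f, a, b, tol=1e-3):
    """Recursively halve [a, b] until the two-panel estimate agrees
    with the one-panel estimate to within 15 * tol (error estimate
    based on the fourth-order accuracy of Simpson's rule)."""
    c = (a + b) / 2
    whole = simpson(f, a, b)
    left = simpson(f, a, c)
    right = simpson(f, c, b)
    if abs(left + right - whole) < 15 * tol:
        # accept, with the standard extrapolated correction term
        return left + right + (left + right - whole) / 15
    # otherwise split the tolerance between the two halves and recurse
    return (adaptive_simpson(f, a, c, tol / 2)
            + adaptive_simpson(f, c, b, tol / 2))

# Example: integrate sin x over [0, pi]; the exact value is 2.
approx = adaptive_simpson(math.sin, 0.0, math.pi, tol=1e-3)
```

The tolerance-splitting scheme is what concentrates subintervals where the integrand misbehaves, as in the sin(1/x) and cos(1/x) graphs of Exercise 4.6.6.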
Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Cayley-Hamilton Theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.
Complex conjugate z̄ = a − ib for any complex number z = a + ib. Then z z̄ = |z|^2.
Condition number cond(A) = c(A) = ||A|| ||A^-1|| = σ_max/σ_min. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to changes in the input.
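The bound ||δx||/||x|| ≤ cond(A) · ||δb||/||b|| can be checked numerically. A small sketch using a hypothetical, nearly singular 2 × 2 matrix:

```python
import numpy as np

# Hypothetical nearly singular matrix: large condition number expected.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
cond = s[0] / s[-1]                      # sigma_max / sigma_min

# Solve Ax = b, then perturb b slightly and solve again.
b = np.array([2.0, 2.0])
x = np.linalg.solve(A, b)
db = np.array([0.0, 1e-6])
dx = np.linalg.solve(A, b + db) - x

rel_x = np.linalg.norm(dx) / np.linalg.norm(x)   # relative change in output
rel_b = np.linalg.norm(db) / np.linalg.norm(b)   # relative change in input
# The glossary's bound: rel_x <= cond * rel_b
```

A tiny perturbation of b produces an amplified change in x, but never by more than the factor cond(A).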
Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A − λI) = 0.
Free variable x_i.
Column i has no pivot in elimination. We can give the n - r free variables any values, then Ax = b determines the r pivot variables (if solvable!).
Hermitian matrix A^H = Ā^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.
Hypercube matrix pl.
Row n + 1 counts corners, edges, faces, ... of a cube in R^n.
Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady state eigenvector s with Ms = s > 0.
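The convergence of the columns of M^k to the steady state can be seen directly. A sketch with a hypothetical 2-state Markov matrix:

```python
import numpy as np

# Hypothetical Markov matrix: entries positive, each column sums to 1.
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Largest eigenvalue is 1; its eigenvector is the steady state s with Ms = s.
w, V = np.linalg.eig(M)
i = int(np.argmax(w.real))
s = V[:, i].real
s = s / s.sum()                 # normalize so the components sum to 1

# Raise M to a high power: every column of M^k approaches s.
Mk = np.linalg.matrix_power(M, 50)
```

For this M the steady state is s = (2/3, 1/3), and both columns of M^50 match it to many digits, because the second eigenvalue (0.7) has decayed as 0.7^50.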
Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
Orthogonal subspaces V and W.
Every v in V is orthogonal to every w in W.
Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = 1 or −1) based on the number of row exchanges to reach I.
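Both facts, that PA reorders the rows of A and that det P = ±1, are easy to verify. A sketch with one hypothetical row order for n = 3:

```python
import numpy as np
from itertools import permutations

I = np.eye(3)
order = [2, 0, 1]               # one of the 3! = 6 possible orders
P = I[order]                    # P has the rows of I in that order

A = np.arange(9.0).reshape(3, 3)
PA = P @ A                      # PA has the rows of A in the same order

d = round(np.linalg.det(P))     # +1 or -1, by parity of row exchanges
n_orders = len(list(permutations(range(3))))   # n! = 6
```

The order [2, 0, 1] is a 3-cycle (two row exchanges), so this P is even and det P = +1.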
The diagonal entry (first nonzero) at the time when a row is used in elimination.
Pseudoinverse A+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+A and AA^+ are the projection matrices onto the row space and column space. Rank(A^+) = rank(A).
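The projection property of A^+A can be checked with NumPy's built-in Moore-Penrose pseudoinverse. A sketch on a hypothetical rank-one matrix:

```python
import numpy as np

# Hypothetical rank-one matrix (second row is twice the first).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
Ap = np.linalg.pinv(A)          # Moore-Penrose pseudoinverse A+

# A+ A projects onto the row space; A A+ projects onto the column space.
P_row = Ap @ A
P_col = A @ Ap
```

A projection matrix is symmetric and idempotent, and the pseudoinverse keeps the rank of A.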
Rank one matrix A = uv^T ≠ 0.
Column and row spaces = lines cu and cv.
Rayleigh quotient q(x) = x^T Ax / x^T x for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
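Both claims, that q(x) stays between the extreme eigenvalues and that the eigenvectors attain them, can be sampled numerically. A sketch with a hypothetical symmetric 2 × 2 matrix:

```python
import numpy as np

# Hypothetical symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
w, V = np.linalg.eigh(A)        # eigenvalues ascending, orthonormal eigenvectors

def q(x):
    """Rayleigh quotient q(x) = x^T A x / x^T x."""
    return x @ A @ x / (x @ x)

# Sample random nonzero vectors: every q(x) lies in [lambda_min, lambda_max].
rng = np.random.default_rng(0)
samples = [q(rng.standard_normal(2)) for _ in range(200)]
```

At the eigenvectors themselves, q(x) equals the corresponding eigenvalue, so the bounds are tight.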
Row space C(A^T) = all combinations of rows of A. Column vectors by convention.
Similar matrices A and B.
Every B = M^-1 AM has the same eigenvalues as A.
Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^-1 is also symmetric.
Unitary matrix U^H = Ū^T = U^-1.
Orthonormal columns (complex analog of Q).