- 2.6.1: Find approximations to within 10^-4 to all the real zeros of the...
- 2.6.2: Find approximations to within 10^-5 to all the zeros of each of the ...
- 2.6.3: Repeat Exercise 1 using Muller's method.
- 2.6.4: Repeat Exercise 2 using Muller's method.
- 2.6.5: Use Newton's method to find, within 10^-3, the zeros and critical p...
- 2.6.6: f(x) = 10x^3 - 8.3x^2 + 2.295x - 0.21141 = 0 has a root at x = 0.29. Use...
- 2.6.7: Use each of the following methods to find a solution in [0.1, 1] acc...
- 2.6.8: Two ladders crisscross an alley of width W. Each ladder reaches fro...
- 2.6.9: A can in the shape of a right circular cylinder is to be constructe...
- 2.6.10: In 1224, Leonardo of Pisa, better known as Fibonacci, answered a ma...
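Several of the exercises above ask for zeros via Muller's method. A minimal sketch of the method follows; the function name `muller`, the starting points, and the sample polynomial x^3 - x - 1 are our own illustrative choices, not taken from the text.

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-5, max_iter=100):
    """Muller's method: fit a parabola through three points and take
    the root of the parabola nearest x2 as the next iterate."""
    for _ in range(max_iter):
        h1, h2 = x1 - x0, x2 - x1
        d1 = (f(x1) - f(x0)) / h1
        d2 = (f(x2) - f(x1)) / h2
        a = (d2 - d1) / (h2 + h1)
        b = d2 + h2 * a
        c = f(x2)
        disc = cmath.sqrt(b * b - 4 * a * c)   # may be complex
        # pick the denominator of larger magnitude (numerical stability)
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        dx = -2 * c / denom
        x3 = x2 + dx
        if abs(dx) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# f(x) = x^3 - x - 1 has one real zero near 1.3247
root = muller(lambda x: x**3 - x - 1, 0.5, 1.0, 1.5)
```

Because the discriminant is taken as a complex square root, the same routine can locate complex zeros from real starting points, which is the feature that distinguishes Muller's method from the secant method.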
Solutions for Chapter 2.6: Zeros of Polynomials and Muller's Method
Full solutions for Numerical Analysis | 10th Edition
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
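A quick NumPy check of block multiplication; the 4 by 4 matrices and the 2 by 2 partition are arbitrary choices for illustration.

```python
import numpy as np

# Partition A and B into 2x2 blocks; when the block shapes are
# compatible, the block product reproduces the ordinary product AB.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

blockAB = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
```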
Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
Factorization A = LU.
If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
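The factorization can be sketched directly from the elimination steps; `lu_no_pivot` is our own toy routine (no row exchanges, so it assumes nonzero pivots) and the 3 by 3 matrix is an arbitrary example.

```python
import numpy as np

def lu_no_pivot(A):
    """Elimination without row exchanges: A = LU, with the
    multipliers l_ij stored in L and ones on its diagonal."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # multiplier l_ik
            U[i, :] -= L[i, k] * U[k, :]     # subtract l_ik * row k
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
```

Multiplying L times U undoes the elimination steps, which is exactly the sense in which L "brings U back to A".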
Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^-1].
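A sketch of that row reduction; `gauss_jordan_inverse` is our own name, and the partial pivoting (choosing the largest available pivot) is an added stability measure, not part of the definition.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce the augmented matrix [A | I] to [I | A^-1]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]        # row exchange
        M[col] /= M[col, col]                    # scale pivot row to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]   # clear the column
    return M[:, n:]                              # right half is A^-1

A = np.array([[4.0, 7.0], [2.0, 6.0]])
Ainv = gauss_jordan_inverse(A)
```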
Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column qj of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
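A small classical Gram-Schmidt sketch (our own helper `gram_schmidt_qr`; the 3 by 2 matrix is an arbitrary example with independent columns).

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt: orthonormal columns in Q, upper
    triangular R with positive diagonal, and A = QR."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float)
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component along earlier q_i
            v = v - R[i, j] * Q[:, i]     # subtract it off
        R[j, j] = np.linalg.norm(v)       # convention: diag(R) > 0
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = gram_schmidt_qr(A)
```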
Hermitian matrix A^H = Ā^T = A.
Complex analog aji = āij of a symmetric matrix.
Hypercube matrix PL.
Row n + 1 counts corners, edges, faces, ... of a cube in Rn.
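Those counts follow a binomial pattern: an n-cube has C(n, k) * 2^(n-k) faces of dimension k. A quick check of that formula (the helper `face_counts` is our own):

```python
from math import comb

def face_counts(n):
    """Number of k-dimensional faces of the n-cube, k = 0..n:
    choose the k free directions, fix the other n-k coordinates at 0 or 1."""
    return [comb(n, k) * 2 ** (n - k) for k in range(n + 1)]

# Ordinary 3-cube: 8 corners, 12 edges, 6 faces, 1 solid cell
counts = face_counts(3)
```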
Iterative method.
A sequence of steps intended to approach the desired solution.
Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b - Ax̂ is orthogonal to all columns of A.
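The normal equations in action; fitting a line C + Dt through the three points (0, 6), (1, 0), (2, 0) is our own illustrative example.

```python
import numpy as np

# Fit b ~ C + D*t by solving the normal equations A^T A xhat = A^T b.
t = np.array([0.0, 1.0, 2.0])
b = np.array([6.0, 0.0, 0.0])
A = np.column_stack([np.ones_like(t), t])   # columns: [1, t]

xhat = np.linalg.solve(A.T @ A, A.T @ b)    # best (C, D)
e = b - A @ xhat                            # error vector
```

The residual e is orthogonal to both columns of A, which is the geometric content of the normal equations.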
Left inverse A+.
If A has full column rank n, then A^+ = (A^T A)^-1 A^T has A^+ A = In.
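A direct check of that formula on a small full-column-rank matrix (an arbitrary example of ours):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                  # 3x2, full column rank n = 2

A_plus = np.linalg.inv(A.T @ A) @ A.T       # left inverse: A_plus @ A = I_n
```

Note the one-sided nature: A_plus @ A is the 2 by 2 identity, while A @ A_plus is only a projection, not the 3 by 3 identity.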
Linear combination cv + d w or L C jV j.
Vector addition and scalar multiplication.
Nullspace N(A).
All solutions to Ax = 0. Dimension n - r = (# columns) - rank.
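The dimension count n - r on a small rank-one example (our own choice of matrix):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])             # second row = 2 * first row

r = np.linalg.matrix_rank(A)                # rank = 1
nullity = A.shape[1] - r                    # dim N(A) = n - r = 3 - 1 = 2

x = np.array([1.0, 1.0, -1.0])              # one vector in the nullspace
```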
Pseudoinverse A+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and AA^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).
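Those projection properties can be verified numerically; the rank-one matrix below is an arbitrary example, and `np.linalg.pinv` computes the Moore-Penrose inverse.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1, so A has no ordinary inverse

A_plus = np.linalg.pinv(A)          # Moore-Penrose pseudoinverse

P_row = A_plus @ A                  # projection onto the row space
P_col = A @ A_plus                  # projection onto the column space
```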
Rotation matrix.
R = [c -s; s c] rotates the plane by θ, and R^-1 = R^T rotates back by -θ. Eigenvalues are e^iθ and e^-iθ; eigenvectors are (1, ±i). c, s = cos θ, sin θ.
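A numerical check of the rotation matrix's properties (θ = 0.3 is an arbitrary angle):

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])

# R is orthogonal, so R^-1 = R^T; its eigenvalues are e^{±i theta}
eigvals = np.linalg.eigvals(R)
```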
Saddle point of f(x1, ..., xn).
A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂xi∂xj = Hessian matrix) is indefinite.
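The classic example f(x, y) = x^2 - y^2: the gradient vanishes at the origin, and the Hessian has one positive and one negative eigenvalue (indefinite), so the origin is a saddle.

```python
import numpy as np

# Hessian of f(x, y) = x^2 - y^2 at the origin: constant diag(2, -2)
H = np.array([[2.0,  0.0],
              [0.0, -2.0]])

eig = np.linalg.eigvalsh(H)   # ascending eigenvalues of a symmetric matrix
```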
Standard basis for Rn.
Columns of n by n identity matrix (written i ,j ,k in R3).
Stiffness matrix K.
If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A, where C has spring constants from Hooke's law and Ax = stretching.
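A toy instance of K = A^T C A, assuming a line of two springs fixed at the top with two free nodes; the spring constants 3 and 2 and the difference matrix A are our own illustrative choices.

```python
import numpy as np

# Ax = stretching: spring 1 stretches by x1, spring 2 by x2 - x1
A = np.array([[ 1.0, 0.0],
              [-1.0, 1.0]])
C = np.diag([3.0, 2.0])      # spring constants from Hooke's law

K = A.T @ C @ A              # stiffness matrix: K x = internal forces
```

K inherits symmetry from C (K^T = (A^T C A)^T = A^T C A), and it is positive definite whenever A has independent columns and the spring constants are positive.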
Symmetric matrix A.
The transpose is A^T = A, and aij = aji. A^-1 is also symmetric.
Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
Wavelets wjk(t).
Stretch and shift the time axis to create wjk(t) = w00(2^j t - k).
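A minimal sketch of the stretch-and-shift rule, using the Haar wavelet as the mother function w00 (the helper names `w` and `haar` are our own):

```python
def haar(t):
    """Mother Haar wavelet w00: +1 on [0, 0.5), -1 on [0.5, 1), else 0."""
    if 0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1:
        return -1.0
    return 0.0

def w(j, k, t, w00=haar):
    """Stretched and shifted wavelet w_jk(t) = w00(2^j t - k)."""
    return w00(2**j * t - k)

# w_10 compresses the mother wavelet onto [0, 0.5);
# w_11 is the same compressed wavelet shifted onto [0.5, 1)
val = w(1, 0, 0.2)
```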