 1.1.1: Show that the following equations have at least one solution in the...
 1.1.2: Show that the following equations have at least one solution in the...
 1.1.3: Find intervals containing solutions to the following equations. a. ...
 1.1.4: Find intervals containing solutions to the following equations. a. ...
1.1.5: Find max_{a<=x<=b} |f(x)| for the following functions and intervals. a. ...
1.1.6: Find max_{a<=x<=b} |f(x)| for the following functions and intervals. a. f...
1.1.7: Show that f'(x) is 0 at least once in the given intervals. a. f(x) =...
1.1.8: Suppose f ∈ C[a, b] and f'(x) exists on (a, b). Show that if f'(x) ≠ 0...
1.1.9: Let f(x) = x^3. a. Find the second Taylor polynomial P2(x) about x0 = ...
1.1.10: Find the third Taylor polynomial P3(x) for the function f(x) = √(x + 1) ...
1.1.11: Find the second Taylor polynomial P2(x) for the function f(x) = e...
1.1.12: Repeat Exercise 11 using x0 = π/6.
1.1.13: Find the third Taylor polynomial P3(x) for the function f(x) = (x − 1)...
1.1.14: Let f(x) = 2x cos(2x) − (x − 2)^2 and x0 = 0. a. Find the third Taylor pol...
1.1.15: Find the fourth Taylor polynomial P4(x) for the function f(x) = xe^(x^2)...
 1.1.16: Use the error term of a Taylor polynomial to estimate the error inv...
1.1.17: Use a Taylor polynomial about π/4 to approximate cos 42° to an accu...
1.1.18: Let f(x) = (1 − x)^(−1) and x0 = 0. Find the nth Taylor polynomial Pn(x) ...
1.1.19: Let f(x) = e^x and x0 = 0. Find the nth Taylor polynomial Pn(x) for ...
1.1.20: Find the nth Maclaurin polynomial Pn(x) for f(x) = arctan x.
1.1.21: The polynomial P2(x) = 1 − x^2/2 is to be used to approximate f(x) = cos...
 1.1.22: Use the Intermediate Value Theorem 1.11 and Rolle's Theorem 1.7 to ...
1.1.23: A Maclaurin polynomial for e^x is used to give the approximation 2.5...
1.1.24: The error function defined by erf(x) = (2/√π) ∫_0^x e^(−t^2) dt gives th...
 1.1.25: The nth Taylor polynomial for a function / at xo is sometimes refer...
 1.1.26: Prove the Generalized Rolle's Theorem, Theorem 1.10, by verifying t...
1.1.27: Example 3 stated that for all x we have |sin x| ≤ |x|. Use the foll...
1.1.28: A function f : [a, b] → R is said to satisfy a Lipschitz condition w...
1.1.29: Suppose f ∈ C[a, b] and x1 and x2 are in [a, b]. a. Show that a n...
1.1.30: Let f ∈ C[a, b], and let p be in the open interval (a, b). a. Sup...
Solutions for Chapter 1.1: Review of Calculus
Full solutions for Numerical Analysis  10th Edition
ISBN: 9781305253667

Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
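A quick NumPy check of this solvability test, using made-up matrices: b is in the column space of A exactly when [A b] has the same rank as A.

```python
import numpy as np

# Ax = b is solvable exactly when rank([A b]) == rank(A),
# i.e. b lies in the column space of A.  (Hypothetical 3x2 example.)
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b_in = A @ np.array([1.0, 1.0])      # in the column space: solvable
b_out = np.array([1.0, 0.0, 0.0])    # outside the column space

rank = np.linalg.matrix_rank
print(rank(np.column_stack([A, b_in])) == rank(A))    # True
print(rank(np.column_stack([A, b_out])) == rank(A))   # False
```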

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing ½x^T Ax − x^T b over growing Krylov subspaces.
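A minimal sketch of the method on a small symmetric positive definite system (an illustration, not the book's Chapter 9 code):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Minimal CG sketch; A is assumed symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual = negative gradient of 1/2 x'Ax - x'b
    p = r.copy()                       # first search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # next direction, A-conjugate to the old ones
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n steps, one per dimension of the growing Krylov subspace.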

Cross product u xv in R3:
Vector perpendicular to u and v, length ‖u‖‖v‖|sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
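Both properties can be verified numerically (arbitrary example vectors):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
w = np.cross(u, v)               # the "determinant" expansion of [i j k; u; v]

# w is perpendicular to both u and v
print(w @ u, w @ v)              # 0.0 0.0

# |u x v| = |u||v||sin(theta)| = area of the parallelogram on u and v
cos_t = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
area = np.linalg.norm(u) * np.linalg.norm(v) * np.sqrt(1 - cos_t**2)
print(np.isclose(np.linalg.norm(w), area))   # True
```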

Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_ii.

Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^(−1) A S = Λ = eigenvalue matrix.
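A numerical check with a small symmetric matrix (distinct eigenvalues, so diagonalizable):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # eigenvalues 3 and 1: distinct
lam, S = np.linalg.eig(A)             # eigenvectors in the columns of S

Lambda = np.linalg.inv(S) @ A @ S     # S^-1 A S = eigenvalue matrix
print(np.allclose(Lambda, np.diag(lam)))   # True
```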

Distributive Law
A(B + C) = AB + AC. Add then multiply, or multiply then add.

Fourier matrix F.
Entries F_jk = e^(2πijk/n) give orthogonal columns: conj(F)^T F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform y_j = Σ_k c_k e^(2πijk/n).
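Building F directly and checking both claims (NumPy's `ifft` uses a 1/n normalization, so it is compared up to that factor):

```python
import numpy as np

n = 8
rows, cols = np.indices((n, n))
F = np.exp(2j * np.pi * rows * cols / n)   # F_jk = e^(2*pi*i*jk/n)

# Orthogonal columns: conj(F)^T F = n I
print(np.allclose(F.conj().T @ F, n * np.eye(n)))   # True

# y = F c matches the inverse DFT, up to numpy's 1/n convention
c = np.arange(n, dtype=float)
y = F @ c
print(np.allclose(y, np.fft.ifft(c) * n))   # True
```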

Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicularity from Ax = 0), with dimensions n − r and r. Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.
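The SVD gives orthonormal bases for both spaces, which makes the complement easy to check on a made-up rank-1 example:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # rank r = 1, n = 3

# Rows of Vt[:r] span the row space; rows of Vt[r:] span the nullspace.
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
row_space = Vt[:r]
null_space = Vt[r:]

# Orthogonal complements in R^n with dimensions r and n - r
print(np.allclose(row_space @ null_space.T, 0))   # True
print(r + null_space.shape[0] == A.shape[1])      # True
```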

Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n  1)/2 edges between nodes. A tree has only n  1 edges and no closed loops.

Hermitian matrix A^H = conj(A)^T = A.
Complex analog a_ji = conj(a_ij) of a symmetric matrix.

Hypercube matrix P.
Row n + 1 counts corners, edges, faces, ... of a cube in Rn.

Jordan form J = M^(−1) A M.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J1, ..., Js). The block Jk is λk I + Nk where Nk has 1's on diagonal 1 (the superdiagonal). Each block has one eigenvalue λk and one eigenvector.

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; always m(λ) divides p(λ).

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S^⊥. If the columns of A are a basis for S then P = A (A^T A)^(−1) A^T.
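All four properties can be checked at once on a small example (A's columns are an arbitrary basis for a plane in R^3):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])            # columns = basis for the subspace S
P = A @ np.linalg.inv(A.T @ A) @ A.T  # P = A (A^T A)^-1 A^T

print(np.allclose(P @ P, P))          # True: P^2 = P
print(np.allclose(P, P.T))            # True: P = P^T
print(np.allclose(np.sort(np.linalg.eigvalsh(P)), [0.0, 1.0, 1.0]))  # True

b = np.array([1.0, 2.0, 5.0])
p = P @ b
print(np.allclose(A.T @ (b - p), 0))  # True: error b - p is perpendicular to S
```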

Rank one matrix A = uv^T ≠ 0.
Column and row spaces = lines cu and cv.
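A small check with arbitrary u and v: every column of uv^T is a multiple of u, so the rank is 1.

```python
import numpy as np

u = np.array([[1.0], [2.0], [3.0]])
v = np.array([[4.0], [5.0]])
A = u @ v.T                    # 3x2 rank-one matrix u v^T

print(np.linalg.matrix_rank(A))                       # 1
# Each column is a multiple of u (cross product with u vanishes)
print(np.allclose(np.cross(A[:, 0], u.ravel()), 0))   # True
```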

Right inverse A+.
If A has full row rank m, then A^+ = A^T (AA^T)^(−1) has AA^+ = I_m.
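The formula in action on a 2×3 full-row-rank example:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])          # full row rank m = 2
A_plus = A.T @ np.linalg.inv(A @ A.T)    # A+ = A^T (A A^T)^-1

print(np.allclose(A @ A_plus, np.eye(2)))   # True: A A+ = I_m
```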

Row picture of Ax = b.
Each equation gives a plane in Rn; the planes intersect at x.

Saddle point of f(x1, ..., xn).
A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂xi∂xj = Hessian matrix) is indefinite.
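For example, f(x, y) = x² − y² has a saddle at the origin: the gradient (2x, −2y) vanishes there, and the constant Hessian is indefinite.

```python
import numpy as np

H = np.array([[2.0, 0.0],
              [0.0, -2.0]])             # Hessian of f(x, y) = x^2 - y^2
eigs = np.linalg.eigvalsh(H)            # ascending order
print(eigs[0] < 0 < eigs[-1])           # True: indefinite, so a saddle point
```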

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.