8.6.1: To obtain some idea of the possible dangers of small errors in the ...
8.6.2: Consider the initial value problem y′ = t² + e^y, y(0) = 0. (i) Using th...
8.6.3: Consider again the initial value problem (16) from Example 2. Inves...
8.6.4: Consider the initial value problem y′ = 10y + 2.5t² + 0.5t, y(0) = 4. (...
8.6.5: In each of 5 and 6: (a) Find a formula for the solution of the initi...
8.6.6: In each of 5 and 6: (a) Find a formula for the solution of the initi...
Solutions for Chapter 8.6: More on Errors; Stability
Full solutions for Elementary Differential Equations and Boundary Value Problems  10th Edition
ISBN: 9780470458310
Chapter 8.6: More on Errors; Stability includes 6 full step-by-step solutions for the textbook Elementary Differential Equations and Boundary Value Problems, 10th edition (ISBN 9780470458310).

Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).
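A small numpy sketch of the column picture (the matrix and vectors here are made-up examples): b = Ax is the same vector as the combination x₁·(column 1) + x₂·(column 2).

```python
import numpy as np

# A has columns a1, a2; Ax = b means b = x1*a1 + x2*a2.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([2.0, -1.0])
b = A @ x                               # matrix-vector product

# The same b built explicitly as a combination of the columns of A
combo = 2.0 * A[:, 0] - 1.0 * A[:, 1]
```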

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)xᵀAx − xᵀb over growing Krylov subspaces.
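A minimal sketch of the conjugate gradient iteration for a symmetric positive definite system (the test matrix and right side are made-up examples, not from the text):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Minimize (1/2) x^T A x - x^T b over growing Krylov subspaces (A SPD)."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x                  # residual = negative gradient
    p = r.copy()                   # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # keeps directions A-conjugate
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic the method finishes in at most n steps, since the Krylov subspace stops growing.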

Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓ_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
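A sketch of elimination without row exchanges, storing the multipliers ℓ_ij in L so that A = LU (the 2 by 2 matrix is a made-up example):

```python
import numpy as np

def lu_no_pivot(A):
    """Row elimination A -> U, saving multipliers l_ij in L (no row exchanges)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]   # multiplier l_ij = entry / jth pivot
            U[i, :] -= L[i, j] * U[j, :]  # subtract l_ij times pivot row j
    return L, U

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])
L, U = lu_no_pivot(A)
```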

Exponential e^(At) = I + At + (At)²/2! + ...
has derivative Ae^(At); e^(At) u(0) solves u′ = Au.
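A sketch of the series definition in numpy (the rotation-generator matrix is a made-up example; for this A the exact exponential is a rotation, which the test checks):

```python
import numpy as np

def expm_series(A, terms=30):
    """Partial sum I + A + A^2/2! + ... of the matrix exponential e^A."""
    n = A.shape[0]
    E = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ A / k        # term is now A^k / k!
        E = E + term
    return E

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])        # e^(At) is rotation by angle t
t = 1.0
u0 = np.array([1.0, 0.0])
u = expm_series(A * t) @ u0        # e^(At) u(0) solves u' = Au
```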

Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^(j−1)b. Numerical methods approximate A^(−1)b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
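A sketch of building the Krylov basis, one multiplication by A per step (the matrix and vector are made-up examples):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Columns b, Ab, ..., A^(j-1) b; one multiply by A per new column."""
    cols = [b]
    for _ in range(j - 1):
        cols.append(A @ cols[-1])   # multiply the latest column by A
    return np.column_stack(cols)

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([0.0, 1.0])
K = krylov_basis(A, b, 2)           # columns b and Ab
```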

Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate) / (jth pivot).

Norm ||A||.
The "ℓ² norm" of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x|| and ||AB|| ≤ ||A|| ||B|| and ||A + B|| ≤ ||A|| + ||B||. Frobenius norm: ||A||_F² = Σ Σ a_ij². The ℓ¹ and ℓ∞ norms are the largest column and row sums of |a_ij|.
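These norms can be checked directly with numpy (the test matrices are made-up examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

l2   = np.linalg.norm(A, 2)        # sigma_max, the largest singular value
fro  = np.linalg.norm(A, 'fro')    # sqrt of the sum of all a_ij^2
l1   = np.linalg.norm(A, 1)        # largest column sum of |a_ij|
linf = np.linalg.norm(A, np.inf)   # largest row sum of |a_ij|
```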

Pascal matrix
P_S = pascal(n) = the symmetric matrix with binomial entries C(i + j − 2, i − 1). P_S = P_L P_U; all contain Pascal's triangle with det = 1 (see Pascal in the index).
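A numpy sketch of the symmetric Pascal matrix built from its binomial entries (n = 4 is an arbitrary example size):

```python
import numpy as np
from math import comb

n = 4
# Symmetric Pascal matrix: entry (i, j) is C(i + j - 2, i - 1), 1-based
PS = np.array([[comb(i + j - 2, i - 1) for j in range(1, n + 1)]
               for i in range(1, n + 1)], dtype=float)
det = np.linalg.det(PS)   # Pascal matrices have determinant 1
```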

Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal for randn.

Rank one matrix A = uvᵀ ≠ 0.
Column and row spaces = lines cu and cv.

Saddle point of f(x₁, ..., x_n).
A point where the first derivatives of f are zero and the second-derivative matrix (∂²f/∂x_i∂x_j = Hessian matrix) is indefinite.
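The classic example is f(x, y) = x² − y², sketched here in numpy (the function choice is illustrative, not from the text):

```python
import numpy as np

# f(x, y) = x^2 - y^2: gradient (2x, -2y) vanishes at the origin,
# and the (constant) Hessian has one positive and one negative eigenvalue.
H = np.array([[2.0,  0.0],
              [0.0, -2.0]])
eigs = np.linalg.eigvalsh(H)
indefinite = eigs.min() < 0 < eigs.max()   # saddle, not a max or min
```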

Schwarz inequality
|v · w| ≤ ||v|| ||w||. Then |vᵀAw|² ≤ (vᵀAv)(wᵀAw) for positive definite A.

Similar matrices A and B.
Every B = M^(−1)AM has the same eigenvalues as A.
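A quick numerical check of similarity preserving eigenvalues (A and M are made-up; M just needs to be invertible):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])                 # invertible change of basis
B = np.linalg.inv(M) @ A @ M               # similar to A

eigs_A = np.sort(np.linalg.eigvals(A))
eigs_B = np.sort(np.linalg.eigvals(B))
```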

Spectral Theorem A = QΛQᵀ.
Real symmetric A has real λ's and orthonormal q's.
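numpy's eigh computes exactly this factorization for a symmetric matrix (the matrix is a made-up example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # real symmetric
lam, Q = np.linalg.eigh(A)                 # real eigenvalues, orthonormal columns
reconstructed = Q @ np.diag(lam) @ Q.T     # A = Q Lambda Q^T
```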

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.

Symmetric factorizations A = LDLᵀ and A = QΛQᵀ.
Signs in Λ = signs in D.

Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
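Both identities are easy to verify numerically (the matrices are made-up examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [5.0, 6.0]])

tr_A = np.trace(A)                          # sum of diagonal entries
eig_sum = np.linalg.eigvals(A).sum().real   # equals the trace
```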

Transpose matrix Aᵀ.
Entries (Aᵀ)_ij = A_ji. Aᵀ is n by m, AᵀA is square, symmetric, positive semidefinite. The transposes of AB and A^(−1) are BᵀAᵀ and (Aᵀ)^(−1).

Tridiagonal matrix T: t_ij = 0 if |i − j| > 1.
T^(−1) has rank 1 above and below the diagonal.

Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c₀ + ... + c_(n−1)x^(n−1) with p(x_i) = b_i. V_ij = (x_i)^(j−1) and det V = product of (x_k − x_i) for k > i.
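A numpy sketch of polynomial interpolation through the Vandermonde matrix (the points and values are made-up; note np.vander's increasing=True matches the ordering V_ij = x_i^(j−1)):

```python
import numpy as np

x_pts  = np.array([0.0, 1.0, 2.0])     # interpolation points x_i
b_vals = np.array([1.0, 3.0, 7.0])     # required values p(x_i) = b_i

V = np.vander(x_pts, increasing=True)  # V_ij = x_i^(j-1)
c = np.linalg.solve(V, b_vals)         # coefficients of c0 + c1 x + c2 x^2

det_V = np.linalg.det(V)
# det V = product of (x_k - x_i) over k > i
prod = ((x_pts[1] - x_pts[0]) * (x_pts[2] - x_pts[0])
        * (x_pts[2] - x_pts[1]))
```

Here the data lie on p(x) = 1 + x + x², so the solve recovers those coefficients.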