 5.11.1: Solve the following stiff initial-value problems using Euler's method ...
 5.11.2: Solve the following stiff initial-value problems using Euler's method ...
 5.11.3: Repeat Exercise 1 using the Runge-Kutta fourth-order method.
 5.11.4: Repeat Exercise 2 using the Runge-Kutta fourth-order method.
 5.11.5: Repeat Exercise 1 using the Adams fourth-order predictor-corrector ...
 5.11.6: Repeat Exercise 2 using the Adams fourth-order predictor-corrector ...
 5.11.7: Repeat Exercise 1 using the Trapezoidal Algorithm with TOL = 10^-5.
 5.11.8: Repeat Exercise 2 using the Trapezoidal Algorithm with TOL = 10^-5.
 5.11.9: Solve the following stiff initial-value problem using the Runge-Kut...
 5.11.10: Show that the fourth-order Runge-Kutta method, k1 = hf(ti, wi), k2 = ...
 5.11.11: The Backward Euler one-step method is defined by wi+1 = wi + hf(ti+1, ...
 5.11.12: Apply the Backward Euler method to the differential equations given...
 5.11.13: Apply the Backward Euler method to the differential equations given...
 5.11.14: a. Show that the Implicit Trapezoidal method is A-stable. b. Show t...
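The stability issue behind the Backward Euler exercises (11-13) shows up clearly on the linear test equation y' = λy. A minimal sketch (not the textbook's algorithm), with λ = -50 chosen as an illustrative stiff value: for this equation the implicit update wi+1 = wi + hλ·wi+1 solves in closed form to wi+1 = wi/(1 - hλ).

```python
# Forward vs. Backward Euler on y' = lam*y, y(0) = 1, lam = -50 (stiff).
lam = -50.0
h = 0.1          # forward Euler is unstable here: |1 + h*lam| = 4 > 1
w_fwd, w_bwd = 1.0, 1.0
for _ in range(20):
    w_fwd = w_fwd * (1 + h * lam)    # explicit update: magnitude grows by 4 each step
    w_bwd = w_bwd / (1 - h * lam)    # implicit update: decays, like the true solution
print(w_fwd, w_bwd)   # forward Euler has blown up; backward Euler is near zero
```

The explicit iterate grows like 4^20 while the implicit one decays like (1/6)^20, which is why implicit methods are preferred for stiff problems even at step sizes where explicit methods fail.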
Solutions for Chapter 5.11: Stiff Differential Equations
Full solutions for Numerical Analysis, 10th Edition
ISBN: 9781305253667
Chapter 5.11: Stiff Differential Equations includes 14 full step-by-step solutions. This guide was created for the textbook Numerical Analysis, edition 10 (ISBN: 9781305253667). More than 13887 students have viewed full step-by-step solutions from this chapter.

Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
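The block rule can be checked directly; a minimal NumPy sketch (illustrative matrices, not from the text) partitioning two 4x4 matrices into 2x2 blocks:

```python
import numpy as np

A = np.arange(16.0).reshape(4, 4)
B = np.arange(16.0).reshape(4, 4)[::-1]
# One cut between rows and one between columns gives four 2x2 blocks each.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]
# Blockwise product: (AB)_11 = A11 B11 + A12 B21, and so on.
top = np.hstack([A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22])
bot = np.hstack([A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22])
assert np.allclose(np.vstack([top, bot]), A @ B)
```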

Cayley-Hamilton Theorem.
p(λ) = det(A - λI) has p(A) = zero matrix.
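A quick numerical check of the theorem; a sketch with an arbitrary 2x2 example (the matrix is mine, not from the text), evaluating the characteristic polynomial at A by Horner's rule:

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
coeffs = np.poly(A)        # monic characteristic polynomial coefficients
# Horner's rule on matrices: p(A) = A^2 - (tr A) A + (det A) I here.
pA = np.zeros_like(A)
for c in coeffs:
    pA = pA @ A + c * np.eye(2)
assert np.allclose(pA, 0)  # A satisfies its own characteristic equation
```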

Circulant matrix C.
Constant diagonals wrap around as in cyclic shift S. Every circulant is c_0 I + c_1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors in the Fourier matrix F.
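Both properties can be verified with the FFT; a sketch with a 4x4 circulant built from an arbitrary first column c (my example data):

```python
import numpy as np

c = np.array([4.0, 1.0, 2.0, 3.0])                   # first column of C
n = len(c)
C = np.array([np.roll(c, k) for k in range(n)]).T    # columns are cyclic shifts
x = np.array([1.0, 0.0, 0.0, 1.0])

# C @ x equals circular convolution c * x (computed via the FFT).
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
assert np.allclose(C @ x, conv)

# The k-th Fourier vector is an eigenvector, with eigenvalue fft(c)[k].
k = 1
v = np.exp(2j * np.pi * k * np.arange(n) / n)
assert np.allclose(C @ v, np.fft.fft(c)[k] * v)
```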

Cofactor C_ij.
Remove row i and column j; multiply the determinant by (-1)^(i+j).
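The definition gives the cofactor expansion of the determinant along a row; a sketch with an example 3x3 matrix of my choosing:

```python
import numpy as np

def cofactor(A, i, j):
    # Delete row i and column j, then apply the sign (-1)^(i+j).
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
# Expand det A along row 0: sum of entries times their cofactors.
det_by_cofactors = sum(A[0, j] * cofactor(A, 0, j) for j in range(3))
assert np.isclose(det_by_cofactors, np.linalg.det(A))
```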

Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2) x^T A x - x^T b over growing Krylov subspaces.

Diagonalization Λ = S^{-1} A S.
Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^{-1}.
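The power formula is easy to confirm numerically; a sketch with a symmetric 2x2 example of my choosing:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, S = np.linalg.eig(A)                    # eigenvalues and eigenvector matrix
Lam = np.diag(lam)
assert np.allclose(S @ Lam @ np.linalg.inv(S), A)       # A = S Lam S^{-1}
A5 = S @ np.diag(lam ** 5) @ np.linalg.inv(S)           # powers via eigenvalues
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```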

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

Free variable x_i.
Column i has no pivot in elimination. We can give the n - r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j - 1) = integral from 0 to 1 of x^{i-1} x^{j-1} dx. Positive definite but with extremely small λ_min and large condition number: H is ill-conditioned.
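The ill-conditioning appears even at modest sizes; a sketch building hilb(n) directly from the entry formula:

```python
import numpy as np

def hilb(n):
    # H_ij = 1/(i + j - 1) with 1-based indices.
    i, j = np.indices((n, n)) + 1
    return 1.0 / (i + j - 1)

H = hilb(8)
assert np.all(np.linalg.eigvalsh(H) > 0)   # positive definite...
assert np.linalg.cond(H) > 1e9             # ...but badly ill-conditioned
```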

Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.

Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j.
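A concrete sketch for a small directed graph of my choosing (3 nodes, edges 0->1, 1->2, 0->2); each row carries -1 at the tail node and +1 at the head node:

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]
m, n = len(edges), 3
A = np.zeros((m, n))
for row, (i, j) in enumerate(edges):
    A[row, i], A[row, j] = -1.0, 1.0
# Every row sums to zero, so constant vectors lie in the nullspace:
# potential differences are unchanged by shifting all potentials equally.
assert np.allclose(A @ np.ones(n), 0)
```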

Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^{j-1}b. Numerical methods approximate A^{-1}b by x_j with residual b - Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
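A sketch building the Krylov basis for a small example (my matrix and vector); here the 3x3 system has full Krylov dimension, so A^{-1}b already lies in K_3:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 0.0, 0.0])
j = 3
# Columns b, Ab, A^2 b span K_3(A, b); QR gives an orthonormal basis.
K = np.column_stack([np.linalg.matrix_power(A, p) @ b for p in range(j)])
Q, _ = np.linalg.qr(K)
# The exact solution of Ax = b is expressible in this basis.
x = np.linalg.solve(A, b)
coeffs, *_ = np.linalg.lstsq(Q, x, rcond=None)
assert np.allclose(Q @ coeffs, x)
```

In practice the basis is built one multiplication by A at a time (Arnoldi/Lanczos) rather than by forming matrix powers.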

Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).

Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.

Plane (or hyperplane) in Rn.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.

Row picture of Ax = b.
Each equation gives a plane in Rn; the planes intersect at x.

Singular Value Decomposition (SVD).
A = UΣV^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with Av_i = σ_i u_i and singular value σ_i > 0. The last columns are orthonormal bases of the nullspaces.
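The relation Av_i = σ_i u_i can be checked column by column; a sketch with an example 3x2 matrix of my choosing:

```python
import numpy as np

A = np.array([[3.0, 0.0], [4.0, 5.0], [0.0, 0.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
# Row i of Vt is v_i; A sends it to sigma_i times column i of U.
for i, sigma in enumerate(s):
    assert np.allclose(A @ Vt[i], sigma * U[:, i])
# The three factors reassemble A exactly.
assert np.allclose(U @ np.diag(s) @ Vt, A)
```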

Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.

Transpose matrix AT.
Entries (A^T)_ij = A_ji. A^T is n by m; A^T A is square, symmetric, and positive semidefinite. The transposes of AB and A^{-1} are B^T A^T and (A^T)^{-1}.