- 5.11.1: Solve the following stiff initial-value problems using Euler's meth...
- 5.11.2: Solve the following stiff initial-value problems using Euler's meth...
- 5.11.3: Repeat Exercise 1 using the Runge-Kutta fourth-order method.
- 5.11.4: Repeat Exercise 2 using the Runge-Kutta fourth-order method.
- 5.11.5: Repeat Exercise 1 using the Adams fourth-order predictor-corrector ...
- 5.11.6: Repeat Exercise 2 using the Adams fourth-order predictor-corrector ...
- 5.11.7: Repeat Exercise 1 using the Trapezoidal Algorithm with TOL = 10⁻⁵.
- 5.11.8: Repeat Exercise 2 using the Trapezoidal Algorithm with TOL = 10⁻⁵.
- 5.11.9: Solve the following stiff initial-value problem using the Runge-Kut...
- 5.11.10: Show that the fourth-order Runge-Kutta method, k_1 = hf(t_i, w_i), k_2 = ...
- 5.11.11: The Backward Euler one-step method is defined by w_{i+1} = w_i + hf(t_{i+1}, ...
- 5.11.12: Apply the Backward Euler method to the differential equations given...
- 5.11.13: Apply the Backward Euler method to the differential equations given...
- 5.11.14: a. Show that the Implicit Trapezoidal method is A-stable. b. Show t...
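Several of the exercises above turn on why implicit methods succeed on stiff problems where explicit ones fail. A minimal sketch of the contrast, assuming the standard stiff test problem y' = λy with an illustrative λ and step size (not taken from the exercises):

```python
# Sketch (not from the exercises): forward vs. Backward Euler on the stiff
# test problem y' = lam*y, y(0) = 1, whose true solution e^(lam*t) decays.
# lam, h, and the step count are illustrative choices.

lam = -50.0   # stiff decay rate (assumed for illustration)
h = 0.1       # step size large enough to expose instability
n = 20        # number of steps

# Forward Euler: w_{i+1} = w_i + h*lam*w_i = (1 + h*lam) * w_i
w_fe = 1.0
for _ in range(n):
    w_fe = (1.0 + h * lam) * w_fe

# Backward Euler: w_{i+1} = w_i + h*lam*w_{i+1}  =>  w_{i+1} = w_i / (1 - h*lam)
w_be = 1.0
for _ in range(n):
    w_be = w_be / (1.0 - h * lam)

print(w_fe)  # grows like 4^20: the explicit method is unstable at this h
print(w_be)  # decays toward 0, like the true solution
```

Here 1 + hλ = -4, so forward Euler's iterates alternate in sign and grow, while Backward Euler multiplies by 1/6 each step for any h > 0.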
Solutions for Chapter 5.11: Stiff Differential Equations
Full solutions for Numerical Analysis | 10th Edition
Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
Cayley-Hamilton theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.
Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c_0 I + c_1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors in the Fourier matrix F.
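The identity Cx = c * x can be checked numerically. A sketch, assuming an illustrative 4-by-4 circulant built from its first column, compared against the FFT-based circular convolution:

```python
# Sketch: a circulant from cyclic shifts of its first column, and the fact
# that C @ x equals the circular convolution c * x (values are illustrative).
import numpy as np

c = np.array([2.0, 1.0, 0.0, 3.0])          # first column c_0, c_1, c_2, c_3
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

x = np.array([1.0, -1.0, 2.0, 0.5])
Cx = C @ x

# Circular convolution via the FFT (the eigenvectors of C are Fourier vectors,
# so the FFT diagonalizes C)
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
print(np.allclose(Cx, conv))  # True
```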
Cofactor C_{ij}.
Remove row i and column j; multiply the determinant by (−1)^{i+j}.
Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T Ax − x^T b over growing Krylov subspaces.
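A minimal sketch of the method, assuming a small symmetric positive definite system (matrix and right-hand side are illustrative choices):

```python
# Conjugate gradient sketch for positive definite Ax = b.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                  # residual, also the first search direction
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)  # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # new direction, A-conjugate to old ones
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```

In exact arithmetic the iterates minimize (1/2)x^T Ax − x^T b over the growing Krylov subspaces K_j(A, b), so an n-by-n system converges in at most n steps.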
Diagonalization Λ = S^{-1}AS.
Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = SΛ^k S^{-1}.
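The identity A^k = SΛ^k S^{-1} can be checked directly; the matrix below is an illustrative choice with distinct eigenvalues, so it is diagonalizable:

```python
# Sketch: powers of a diagonalizable matrix via A^k = S Λ^k S^{-1}.
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])   # eigenvalues 2 and 3 (distinct)
lam, S = np.linalg.eig(A)                # Λ's diagonal and eigenvector matrix S

k = 5
Ak_via_eig = S @ np.diag(lam**k) @ np.linalg.inv(S)
print(np.allclose(Ak_via_eig, np.linalg.matrix_power(A, k)))  # True
```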
Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
Free variable x_i.
Column i has no pivot in elimination. We can give the n - r free variables any values, then Ax = b determines the r pivot variables (if solvable!).
Hilbert matrix hilb(n).
Entries H_{ij} = 1/(i + j − 1) = ∫₀¹ x^{i−1} x^{j−1} dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
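A short sketch (sizes chosen only for illustration) that builds hilb(n) from the entry formula and prints its condition number, which grows explosively with n:

```python
# Sketch: the Hilbert matrix H_ij = 1/(i + j - 1) and its condition number.
import numpy as np

def hilb(n):
    i = np.arange(1, n + 1)
    return 1.0 / (i[:, None] + i[None, :] - 1)   # H_ij = 1/(i + j - 1)

for n in (3, 6, 9):
    print(n, np.linalg.cond(hilb(n)))   # condition number grows rapidly with n
```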
Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries −1 and +1 in columns i and j.
Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^{j−1}b. Numerical methods approximate A^{−1}b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
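A sketch (matrix, vector, and j are illustrative) that forms the Krylov matrix [b, Ab, ..., A^{j−1}b] and extracts an orthonormal basis via QR:

```python
# Sketch: a Krylov matrix and an orthonormal basis for K_j(A, b) via QR.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 0.0])
j = 2

# Columns b, Ab, ..., A^{j-1} b
K = np.column_stack([np.linalg.matrix_power(A, p) @ b for p in range(j)])
Q, _ = np.linalg.qr(K)        # columns of Q: orthonormal basis for K_j(A, b)
print(np.allclose(Q.T @ Q, np.eye(j)))  # True
```

In practice each new basis vector is produced by one multiplication by A (as in Arnoldi or Lanczos), never by forming matrix powers explicitly; the powers here are only for a small illustration.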
Length II x II.
Square root of x^T x (Pythagoras in n dimensions).
Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.
Plane (or hyperplane) in Rn.
Vectors x with a^T x = 0. Plane is perpendicular to a ≠ 0.
Row picture of Ax = b.
Each equation gives a plane in Rn; the planes intersect at x.
Singular Value Decomposition (SVD).
A = UΣV^T = (orthogonal)(diagonal)(orthogonal). First r columns of U and V are orthonormal bases of C(A) and C(A^T), with Av_i = σ_i u_i and singular value σ_i > 0. Last columns are orthonormal bases of the nullspaces.
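The relation Av_i = σ_i u_i can be verified with a library SVD; the matrix below is an illustrative choice (note that numpy returns V^T, so v_i is a row of Vt):

```python
# Sketch: checking A v_i = sigma_i u_i from a computed SVD.
import numpy as np

A = np.array([[3.0, 0.0], [4.0, 5.0]])
U, s, Vt = np.linalg.svd(A)    # A = U @ diag(s) @ Vt

for i in range(len(s)):
    # v_i is row i of Vt; u_i is column i of U
    print(np.allclose(A @ Vt[i], s[i] * U[:, i]))  # True for each singular pair
```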
Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.
Transpose matrix AT.
Entries (A^T)_{ij} = A_{ji}. A^T is n by m, A^T A is square, symmetric, positive semidefinite. The transposes of AB and A^{−1} are B^T A^T and (A^T)^{−1}.