LAB 7.1.1: Use Euler's method to approximate the solution of the initial-value ...
LAB 7.1.2: Use Euler's method, improved Euler's method, and Runge-Kutta to appro...
LAB 7.1.3: Repeat Part 2 for the initial-value problem dy/dt = y + t^3/6, y(0)...
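The three methods named in these labs can be sketched side by side. The labs' exact equations are truncated above, so the right-hand side dy/dt = y with y(0) = 1 below is an illustrative assumption chosen because its exact solution e^t makes the errors easy to compare:

```python
# Euler, improved Euler (Heun), and classical Runge-Kutta (RK4) on an
# illustrative test problem dy/dt = y, y(0) = 1 (an assumption -- the
# labs' own equations are truncated in the text above).
import math

def f(t, y):
    return y  # assumed right-hand side; exact solution is e^t

def euler(f, t0, y0, h, n):
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)          # one first-order step
        t += h
    return y

def improved_euler(f, t0, y0, h, n):
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)  # slope at the predicted endpoint
        y += h * (k1 + k2) / 2     # average the two slopes
        t += h
    return y

def rk4(f, t0, y0, h, n):
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h*k1/2)
        k3 = f(t + h/2, y + h*k2/2)
        k4 = f(t + h,   y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return y

exact = math.e  # y(1) for this test problem
for method in (euler, improved_euler, rk4):
    approx = method(f, 0.0, 1.0, 0.1, 10)
    print(method.__name__, abs(approx - exact))
```

Running this with h = 0.1 over [0, 1] shows the expected ordering: Euler's error is largest, improved Euler's is roughly 30 times smaller, and RK4's is smaller still by several orders of magnitude.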
Solutions for Chapter LAB 7.1: Errors of Numerical Approximations
Full solutions for Differential Equations 00, 4th Edition
ISBN: 9780495561989
Chapter LAB 7.1: Errors of Numerical Approximations includes 3 full step-by-step solutions for the textbook Differential Equations 00, edition 4 (ISBN 9780495561989).

Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1v1 + ... + cdvd. V has many bases, and each basis gives unique c's.

Cayley-Hamilton Theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.
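For a 2 by 2 matrix this is easy to verify by hand or by code: p(λ) = λ² − trace(A)λ + det(A), so A² − trace(A)A + det(A)I must be the zero matrix. A minimal pure-Python check (the function name is my own):

```python
# Verify Cayley-Hamilton for a 2x2 matrix A = [[a, b], [c, d]]:
# p(lambda) = lambda^2 - trace(A)*lambda + det(A), so
# A^2 - trace(A)*A + det(A)*I should be the zero matrix.
def cayley_hamilton_residual(a, b, c, d):
    tr, det = a + d, a * d - b * c
    A = [[a, b], [c, d]]
    A2 = [[a*a + b*c, a*b + b*d],          # entries of A^2
          [c*a + d*c, c*b + d*d]]
    I = [[1, 0], [0, 1]]
    return [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)]
            for i in range(2)]

print(cayley_hamilton_residual(1, 2, 3, 4))  # [[0, 0], [0, 0]]
```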

Characteristic equation det(A − λI) = 0.
The n roots are the eigenvalues of A.

Condition number
cond(A) = c(A) = ||A|| ||A⁻¹|| = σmax/σmin. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to changes in the input.
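For a 2 by 2 matrix the singular values can be found by hand from the eigenvalues of AᵀA, since σ² are exactly those eigenvalues. A small sketch (pure Python; the helper name is my own):

```python
# Condition number in the 2-norm, cond(A) = sigma_max / sigma_min,
# for A = [[a, b], [c, d]], using the fact that the squared singular
# values are the eigenvalues of A^T A.
import math

def cond_2x2(a, b, c, d):
    t = a*a + b*b + c*c + d*d      # trace of A^T A
    det = (a*d - b*c) ** 2         # det(A^T A) = det(A)^2
    disc = math.sqrt(t*t - 4*det)
    lam_max = (t + disc) / 2       # largest eigenvalue of A^T A
    lam_min = (t - disc) / 2       # smallest eigenvalue of A^T A
    return math.sqrt(lam_max / lam_min)

print(cond_2x2(1, 2, 3, 4))   # about 15: moderately sensitive
print(cond_2x2(1, 0, 0, 1))   # identity matrix: cond = 1, perfectly conditioned
```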

Eigenvalue A and eigenvector x.
Ax = λx with x ≠ 0, so det(A − λI) = 0.

Free columns of A.
Columns without pivots; these are combinations of earlier columns.

Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A⁻¹].
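The row-operation recipe translates directly into code. A minimal sketch (no safeguards against tiny pivots; see partial pivoting):

```python
# Gauss-Jordan inversion: row-reduce the augmented matrix [A | I]
# until the left half is I; the right half is then A^-1.
def invert(A):
    n = len(A)
    # Build [A | I].
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        # Find a nonzero pivot in this column and swap it up.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]            # scale pivot row: pivot = 1
        for r in range(n):
            if r != col and M[r][col] != 0:         # clear above AND below
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

print(invert([[2.0, 1.0], [1.0, 1.0]]))  # [[1.0, -1.0], [-1.0, 2.0]]
```

Clearing entries above the pivot as well as below is what distinguishes Gauss-Jordan from plain Gaussian elimination.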

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column qj of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
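The classical Gram-Schmidt recipe, sketched in pure Python (columns are plain lists; the function name is my own):

```python
# Classical Gram-Schmidt on a list of columns of A, producing
# orthonormal columns q_j and an upper-triangular R with A = QR.
import math

def gram_schmidt(cols):
    Q, R = [], [[0.0] * len(cols) for _ in cols]
    for j, a in enumerate(cols):
        v = a[:]
        for i, q in enumerate(Q):
            R[i][j] = sum(qk * ak for qk, ak in zip(q, a))  # projection onto q_i
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)] # subtract it off
        R[j][j] = math.sqrt(sum(vk * vk for vk in v))       # convention: diag(R) > 0
        Q.append([vk / R[j][j] for vk in v])                # normalize
    return Q, R  # Q as a list of columns, R as a list of rows
```

Each new q_j uses only the first j columns of A, which is exactly why R comes out upper triangular.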

Hilbert matrix hilb(n).
Entries Hij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but extremely small λmin and large condition number: H is ill-conditioned.
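One symptom of this ill-conditioning is how fast det(hilb(n)) collapses toward zero as n grows. A small sketch (pure Python; the determinant here is the product of elimination pivots, with no pivoting safeguards):

```python
# Build hilb(n) with entries 1/(i + j - 1) and watch its determinant
# collapse -- one symptom of the Hilbert matrix's ill-conditioning.
def hilb(n):
    # i, j run from 0 here, so the entry is 1/(i + j + 1).
    return [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]

def det(A):
    # Determinant via elimination: product of the pivots.
    A = [row[:] for row in A]
    n, d = len(A), 1.0
    for col in range(n):
        p = A[col][col]
        d *= p
        for r in range(col + 1, n):
            factor = A[r][col] / p
            A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    return d

for n in (2, 4, 6):
    print(n, det(hilb(n)))  # roughly 8.3e-2, 1.7e-7, 5.4e-18
```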

Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries −1 and 1 in columns i and j.

Linear combination cv + dw or Σ cjvj.
Vector addition and scalar multiplication.

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; always m(λ) divides p(λ).

Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
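The strictly triangular example is easy to check directly: each multiplication pushes the nonzero entries one diagonal further out, so a 3 by 3 strictly upper-triangular N satisfies N³ = 0:

```python
# A strictly upper-triangular 3x3 matrix is nilpotent: N^3 = 0.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

N = [[0, 1, 2],
     [0, 0, 3],
     [0, 0, 0]]
N2 = matmul(N, N)   # nonzeros pushed one diagonal further out
N3 = matmul(N2, N)  # zero matrix
print(N2)  # [[0, 0, 3], [0, 0, 0], [0, 0, 0]]
print(N3)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```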

Normal matrix.
If N Nᵀ = Nᵀ N, then N has orthonormal (complex) eigenvectors.

Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers have |ℓij| ≤ 1. See condition number.
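A classic 2 by 2 example shows why the pivot's size matters: eliminating with a tiny pivot produces a huge multiplier and wipes out the answer, while swapping rows first keeps every multiplier at most 1. A sketch (the function name and the specific numbers are my own):

```python
# Why partial pivoting matters: eliminating with the tiny pivot 1e-20
# produces the multiplier 1e20 and destroys x1; swapping rows so the
# larger entry is the pivot keeps |l| <= 1 and recovers the answer.
def solve2(a11, a12, a21, a22, b1, b2, pivot=True):
    # Solve [[a11, a12], [a21, a22]] x = [b1, b2] by elimination.
    if pivot and abs(a21) > abs(a11):
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    l = a21 / a11           # the multiplier
    a22 -= l * a12
    b2 -= l * b1
    x2 = b2 / a22
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

# True solution of 1e-20*x1 + x2 = 1, x1 + x2 = 2 is close to (1, 1).
print(solve2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=False))  # x1 lost to roundoff
print(solve2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=True))   # close to (1.0, 1.0)
```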

Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.

Plane (or hyperplane) in Rn.
Vectors x with aᵀx = 0. The plane is perpendicular to a ≠ 0.

Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.

Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms, ||A + B|| ≤ ||A|| + ||B||.

Vector addition.
v + w = (v1 + w1, ..., vn + wn) = diagonal of parallelogram.