 5.3.1: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.2: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.3: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.4: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.5: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.6: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.7: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.8: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.9: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.10: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.11: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.12: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.13: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.14: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.15: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.16: For each of the systems in Problems 1 through 16 of Section 5.2, categorize ...
 5.3.17: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.18: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.19: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.20: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.21: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.22: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.23: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.24: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.25: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.26: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.27: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.28: The phase portraits in Problems 17 through 28 correspond to linear systems of t...
 5.3.29: We can give a simpler description of the general solution x(t) = c...
 5.3.30: Use the chain rule for vector-valued functions to verify the ...
 5.3.31: Use the definitions of eigenvalue and eigenvector (Section 5.2) to ...
 5.3.32: Show that the system x' = Ax has constant solutions other than x(t) ...
 5.3.33: (a) Show that if A has the repeated eigenvalue with two linearly in...
 5.3.34: Verify Eq. (53) by substituting the expressions for x1(t) and x2(t...
 5.3.35: The system in Example 11 can be rewritten in scalar form as x1' = ...
 5.3.36: In analytic geometry it is shown that the general quadratic equatio...
 5.3.37: It can be further shown that Eq. (65) represents in general a conic...
 5.3.38: Let v = [3 + 5i, 4]^T be the complex eigenvector found in Example 11 a...
 5.3.39: Let A denote the 2 × 2 matrix A = [a b; c d]. (a) Show that the charact...
 5.3.40: Use the eigenvalue/eigenvector method to confirm the solution in Eq...
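Several of the exercises above center on solving x' = Ax by the eigenvalue/eigenvector method: the general solution is a combination of terms e^(λt) v built from the eigenpairs of A. A minimal sketch of that method in NumPy; the matrix A and initial condition below are made-up examples, not taken from the text:

```python
import numpy as np

# Hypothetical example system (not from the text): A has eigenvalues 3 and -2.
A = np.array([[1.0, 2.0],
              [3.0, 0.0]])
x0 = np.array([1.0, 1.0])

lam, V = np.linalg.eig(A)      # columns of V are eigenvectors of A
c = np.linalg.solve(V, x0)     # x(0) = V c fixes the constants c1, c2

def x(t):
    # x(t) = c1 * exp(lam1 * t) * v1 + c2 * exp(lam2 * t) * v2
    return (V * np.exp(lam * t)) @ c

# Sanity check: the claimed solution should satisfy x'(t) = A x(t).
t, h = 0.5, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)   # central-difference derivative
print(np.allclose(deriv, A @ x(t), atol=1e-4))
```

Solving V c = x0 for the constants is the same linear-combination step the eigenvalue/eigenvector method prescribes by hand.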
Solutions for Chapter 5.3: A Gallery of Solution Curves of Linear Systems
Full solutions for Differential Equations and Boundary Value Problems: Computing and Modeling, 5th Edition
ISBN: 9780321796981
Chapter 5.3: A Gallery of Solution Curves of Linear Systems includes 40 full step-by-step solutions for the textbook Differential Equations and Boundary Value Problems: Computing and Modeling, 5th edition (ISBN 9780321796981).

Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.

Condition number
cond(A) = c(A) = ||A|| ||A^-1|| = σ_max/σ_min. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to changes in the input.
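A quick numerical illustration of this bound; the nearly singular matrix below is a made-up example chosen so the condition number is large:

```python
import numpy as np

# Hypothetical nearly singular matrix -> large condition number.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)

cond = np.linalg.cond(A)              # 2-norm condition number
s = np.linalg.svd(A, compute_uv=False)

db = np.array([0.0, 1e-4])            # small perturbation of b
dx = np.linalg.solve(A, b + db) - x

rel_out = np.linalg.norm(dx) / np.linalg.norm(x)
rel_in = np.linalg.norm(db) / np.linalg.norm(b)

print(np.isclose(cond, s[0] / s[-1]))   # cond(A) = sigma_max / sigma_min
print(rel_out <= cond * rel_in)         # the relative-change bound holds
```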

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

Fast Fourier Transform (FFT).
A factorization of the Fourier matrix F_n into l = log2 n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^-1 c can be computed with nl/2 multiplications. Revolutionary.
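A small check that the FFT computes the same product F_n x as the dense Fourier matrix, just in O(n log n) operations instead of O(n^2); the vector x is arbitrary test data:

```python
import numpy as np

n = 8
x = np.random.default_rng(0).random(n)   # arbitrary test vector

# Dense Fourier matrix: F[j, k] = w^(jk) with w = exp(-2*pi*i/n).
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n)

# The FFT gives the same result without ever forming F.
print(np.allclose(F @ x, np.fft.fft(x)))
```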

Gram–Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
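A hedged sketch of classical Gram–Schmidt producing this A = QR factorization; the matrix A below is a made-up example with independent columns:

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt: orthonormalize the columns of A,
    recording coefficients in an upper-triangular R with diag(R) > 0."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):                  # subtract components along q_1..q_{j-1}
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)         # convention: positive diagonal
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = gram_schmidt(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(2)))
```

(In practice Householder reflections, as in np.linalg.qr, are preferred for numerical stability; this sketch just mirrors the definition.)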

Hermitian matrix A^H = Ā^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.

Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.

Independent vectors v_1, ..., v_k.
No combination c_1 v_1 + ... + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.

Lucas numbers
L_n = 2, 1, 3, 4, ... satisfy L_n = L_{n-1} + L_{n-2} = λ1^n + λ2^n, with λ1, λ2 = (1 ± √5)/2 from the Fibonacci matrix [1 1; 1 0]. Compare L_0 = 2 with F_0 = 0.
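A short check that the recurrence and the closed form λ1^n + λ2^n agree:

```python
import math

def lucas(n):
    """Lucas numbers by the recurrence, starting L0 = 2, L1 = 1."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

lam1 = (1 + math.sqrt(5)) / 2   # eigenvalues of the Fibonacci matrix
lam2 = (1 - math.sqrt(5)) / 2

print([lucas(n) for n in range(6)])   # [2, 1, 3, 4, 7, 11]
print(all(round(lam1**n + lam2**n) == lucas(n) for n in range(10)))
```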

Nullspace matrix N.
The columns of N are the n − r special solutions to As = 0.

Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^-1 and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
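A tiny verification of both properties, using a made-up 2 × 2 orthonormal basis:

```python
import numpy as np

# Hypothetical square Q with orthonormal columns.
Q = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

v = np.array([3.0, 4.0])
# Expand v in the orthonormal basis: v = sum of (v . q_j) q_j.
expansion = sum((v @ Q[:, j]) * Q[:, j] for j in range(2))

print(np.allclose(Q.T @ Q, np.eye(2)), np.allclose(expansion, v))
```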

Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers have |l_ij| ≤ 1. See condition number.

Rotation matrix
R = [c −s; s c] rotates the plane by θ, and R^-1 = R^T rotates back by −θ. Eigenvalues are e^{iθ} and e^{−iθ}; eigenvectors are (1, ±i). Here c, s = cos θ, sin θ.
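A numerical check of both claims for one arbitrary angle (θ = 0.7 is a made-up value):

```python
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])

lam = np.linalg.eigvals(R)
print(np.allclose(R.T @ R, np.eye(2)))   # R^-1 = R^T (rotation is orthogonal)
print(np.allclose(np.sort_complex(lam),
                  np.sort_complex(np.array([np.exp(1j * theta),
                                            np.exp(-1j * theta)]))))
```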

Row picture of Ax = b.
Each equation gives a plane in Rn; the planes intersect at x.

Row space C (AT) = all combinations of rows of A.
Column vectors by convention.

Saddle point of f(x_1, ..., x_n).
A point where the first derivatives of f are zero and the second-derivative matrix (∂²f/∂x_i∂x_j = Hessian matrix) is indefinite.
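The standard example is f(x, y) = x² − y², which has a saddle at the origin: the gradient vanishes there and the Hessian has one positive and one negative eigenvalue.

```python
import numpy as np

# Hessian of f(x, y) = x^2 - y^2 (constant, evaluated anywhere).
H = np.array([[2.0,  0.0],
              [0.0, -2.0]])

eigs = np.linalg.eigvalsh(H)
print(eigs.min() < 0 < eigs.max())   # indefinite Hessian -> saddle point
```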

Schwarz inequality
|v · w| ≤ ||v|| ||w||. Then |v^T A w|² ≤ (v^T A v)(w^T A w) for positive definite A.

Tridiagonal matrix T: t_ij = 0 if |i − j| > 1.
T^-1 has rank 1 above and below the diagonal.

Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c_0 + ... + c_{n-1}x^{n-1} with p(x_i) = b_i. V_ij = (x_i)^{j-1} and det V = product of (x_k − x_i) for k > i.
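A worked instance of both facts, with made-up interpolation points x_i = 0, 1, 2 and values chosen so the answer is p(x) = 1 + x²:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 5.0])      # target values p(x_i) = b_i

V = np.vander(x, increasing=True)  # columns 1, x, x^2, so V_ij = x_i^j
c = np.linalg.solve(V, b)

print(np.allclose(c, [1.0, 0.0, 1.0]))   # p(x) = 1 + x^2 fits the data
# det V = product of (x_k - x_i) for k > i = (1-0)(2-0)(2-1) = 2
print(np.isclose(np.linalg.det(V), 2.0))
```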

Volume of box.
The rows (or the columns) of A generate a box with volume |det(A)|.
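A 2 × 2 case where the box (here a parallelogram) has an area easy to check by hand; the matrix is a made-up example:

```python
import numpy as np

# Rows (3, 0) and (1, 2) span a parallelogram with base 3 and height 2.
A = np.array([[3.0, 0.0],
              [1.0, 2.0]])

print(np.isclose(abs(np.linalg.det(A)), 6.0))   # area = |det A| = 6
```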