- 10.10.1.119: The second-order differential equation x″ + f(x′) + g(x) = 0 can be written...
- 10.10.1.120: If X = X(t) is a solution to a plane autonomous system and X(t1) = X(t2...
- 10.10.1.121: If the trace of the matrix A is 0 and det A ≠ 0, then the critical po...
- 10.10.1.122: If the critical point (0, 0) of the linear system X′ = AX is a stable ...
- 10.10.1.123: If the critical point (0, 0) of the linear system X′ = AX is a saddle ...
- 10.10.1.124: If the Jacobian matrix A = g′(X1) at a critical point of a plane auton...
- 10.10.1.125: It is possible to show, using linearization, that a nonlinear plane...
- 10.10.1.126: All solutions to the pendulum equation are periodic.
- 10.10.1.127: For what value(s) of a does the plane autonomous system possess per...
- 10.10.1.128: For what values of n is x = nπ an asymptotically stable critical poin...
- 10.10.1.129: Solve the nonlinear plane autonomous system by switching to polar c...
- 10.10.1.130: Discuss the geometric nature of the solutions to the linear system ...
- 10.10.1.131: Classify the critical point (0, 0) of the given linear system by co...
- 10.10.1.132: Find and classify (if possible) the critical points of the plane au...
- 10.10.1.133: Determine the value(s) of a for which (0, 0) is a stable critical p...
- 10.10.1.134: Classify the critical point (0, 0) of the plane autonomous system c...
- 10.10.1.135: Without solving explicitly, classify (if possible) the critical poi...
- 10.10.1.136: Use the phase-plane method to show that the solutions to the nonlin...
- 10.10.1.137: In Section 5.1 we assumed that the restoring force F of the spring ...
- 10.10.1.138: The rod of a pendulum is attached to a movable joint at a point P a...
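Several of the review problems above (121–123) classify the critical point (0, 0) of X′ = AX by the trace and determinant of A. A minimal sketch of that test, assuming NumPy; the helper name is hypothetical and borderline/degenerate cases are grouped only coarsely:

```python
import numpy as np

def classify_critical_point(A):
    """Classify (0, 0) for X' = AX by the trace-determinant criteria.
    A sketch: centers and degenerate cases are handled only coarsely."""
    tau = np.trace(A)
    delta = np.linalg.det(A)
    if delta < 0:
        return "saddle point"
    if delta > 0:
        if tau == 0:
            return "center"           # pure imaginary eigenvalues
        kind = "node" if tau**2 - 4 * delta >= 0 else "spiral"
        stability = "stable" if tau < 0 else "unstable"
        return f"{stability} {kind}"
    return "degenerate (det A = 0)"

# tau = -2 < 0, delta = 2 > 0, tau^2 - 4*delta < 0: a stable spiral
print(classify_critical_point(np.array([[-1.0, 1.0], [-1.0, -1.0]])))
```

With trace 0 and det A > 0 the test reports a center, which is why problem 121 hinges on det A being nonzero.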
Solutions for Chapter 10: Plane Autonomous Systems
Full solutions for Differential Equations with Boundary-Value Problems, 8th Edition
Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1v1 + ... + cdvd. V has many bases; each basis gives unique c's.
Companion matrix.
Put c1, ..., cn in row n and put n − 1 ones just above the main diagonal. Then det(A − λI) = ±(c1 + c2λ + c3λ² + ... + cnλ^(n−1) − λⁿ).
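The eigenvalues of this matrix are the roots of the polynomial built from its last row. A minimal sketch, assuming NumPy; the function name is hypothetical:

```python
import numpy as np

def companion(c):
    """Coefficients c1, ..., cn in row n, with n - 1 ones just above
    the main diagonal, so det(A - lam*I) = +/-(c1 + c2*lam + ... - lam**n)."""
    n = len(c)
    A = np.zeros((n, n))
    A[np.arange(n - 1), np.arange(1, n)] = 1.0  # ones above the diagonal
    A[n - 1, :] = c                              # coefficients in row n
    return A

# lam**2 - lam - 2 has roots 2 and -1; here (c1, c2) = (2, 1)
A = companion([2.0, 1.0])
print(sorted(np.linalg.eigvals(A).real))
```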
Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra −ℓij in the i, j entry (i ≠ j). Then Eij A subtracts ℓij times row j of A from row i.
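The row operation above can be checked directly. A minimal sketch, assuming NumPy; the helper name is hypothetical:

```python
import numpy as np

def elimination_matrix(n, i, j, l):
    """Identity matrix with an extra -l in entry (i, j), so that
    E @ A subtracts l times row j of A from row i."""
    E = np.eye(n)
    E[i, j] = -l
    return E

A = np.array([[2.0, 1.0], [4.0, 5.0]])
E = elimination_matrix(2, 1, 0, 2.0)  # subtract 2 * row 0 from row 1
print(E @ A)                           # row 1 becomes [0, 3]
```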
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
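The factorization can be built by recording the multipliers during elimination. A minimal Doolittle-style sketch, assuming NumPy and that no row exchanges are needed; the function name is hypothetical:

```python
import numpy as np

def lu_no_exchanges(A):
    """LU factorization (a sketch): assumes every pivot is nonzero so
    no row exchanges occur. L stores the multipliers l_ij below a unit
    diagonal; then L @ U brings U back to A."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for j in range(n):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]   # multiplier l_ij
            U[i, :] -= L[i, j] * U[j, :]  # subtract l_ij * row j
    return L, U

A = np.array([[2.0, 1.0], [6.0, 8.0]])
L, U = lu_no_exchanges(A)
print(np.allclose(L @ U, A))   # L brings U back to A
```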
Free variable xi.
Column i has no pivot in elimination. We can give the n - r free variables any values, then Ax = b determines the r pivot variables (if solvable!).
Hypercube matrix P.
Row n + 1 counts corners, edges, faces, ... of a cube in Rn.
Inverse matrix A⁻¹.
Square matrix with A⁻¹A = I and AA⁻¹ = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and Aᵀ are B⁻¹A⁻¹ and (A⁻¹)ᵀ. Cofactor formula: (A⁻¹)ij = Cji / det A.
Iterative method.
A sequence of steps intended to approach the desired solution.
Jordan form J = M⁻¹AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J1, ..., Js). The block Jk is λkIk + Nk where Nk has 1's on diagonal 1. Each block has one eigenvalue λk and one eigenvector.
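A Jordan form can be computed symbolically. A minimal sketch, assuming SymPy; the matrix below has a repeated eigenvalue 2 with only one eigenvector, so J is a single 2×2 block:

```python
from sympy import Matrix

# Jordan form J = M^{-1} A M via SymPy's jordan_form, which returns
# (M, J) with A = M * J * M**-1. This A has eigenvalue 2 twice but
# only one eigenvector, so J is one block: 2's on the diagonal, a 1
# on diagonal 1.
A = Matrix([[2, 1], [0, 2]])
M, J = A.jordan_form()
print(J)   # Matrix([[2, 1], [0, 2]])
```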
Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^(j−1)b. Numerical methods approximate A⁻¹b by xj with residual b − Axj in this subspace. A good basis for Kj requires only multiplication by A at each step.
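The defining property — one multiplication by A per step — shows up directly when the basis is built. A minimal sketch, assuming NumPy; the function name is hypothetical (practical methods orthogonalize these columns, e.g. by Arnoldi, since raw powers become nearly dependent):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Columns b, Ab, ..., A^(j-1) b of the Krylov subspace K_j(A, b).
    A sketch: each new column costs one multiplication by A."""
    cols = [b]
    for _ in range(j - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 0.0])
K = krylov_basis(A, b, 2)
print(K)   # columns b and Ab
```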
Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; always m(λ) divides p(λ).
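A case where m(λ) is strictly smaller than p(λ): a rank-1 projection P of size 3 has eigenvalues 1, 0, 0, so p(λ) has degree 3, but P² = P already, so m(λ) = λ² − λ. A minimal numerical check, assuming NumPy:

```python
import numpy as np

# Rank-1 projection: eigenvalues 1, 0, 0, so p(lam) = -lam**2 (lam - 1),
# while the minimal polynomial is m(lam) = lam**2 - lam since P @ P = P.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
m_of_P = P @ P - P            # m(P) = P^2 - P
print(np.allclose(m_of_P, 0))  # True: m(A) = zero matrix
```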
Multiplicities AM and G M.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
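The two multiplicities can differ. A minimal sketch, assuming NumPy, for a matrix with AM = 2 but GM = 1 at λ = 3:

```python
import numpy as np

# Eigenvalue 3 is a double root of det(A - lam*I) = 0 (AM = 2), but
# the eigenspace null(A - 3I) is only one-dimensional (GM = 1).
A = np.array([[3.0, 1.0], [0.0, 3.0]])
eigvals = np.linalg.eigvals(A)
AM = int(np.sum(np.isclose(eigvals, 3.0)))
GM = 2 - np.linalg.matrix_rank(A - 3.0 * np.eye(2))  # dim of eigenspace
print(AM, GM)
```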
Nullspace matrix N.
The columns of N are the n - r special solutions to As = O.
Rayleigh quotient q(x) = xᵀAx / xᵀx for symmetric A: λmin ≤ q(x) ≤ λmax.
Those extremes are reached at the eigenvectors x for λmin(A) and λmax(A).
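The bounds are easy to probe numerically. A minimal sketch, assuming NumPy; the symmetric matrix below has eigenvalues 1 and 3, and random vectors never take q(x) outside that range:

```python
import numpy as np

# For symmetric A, q(x) = x.T A x / x.T x stays between lambda_min
# and lambda_max for every nonzero x; the extremes occur at eigenvectors.
rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
lam = np.linalg.eigvalsh(A)              # sorted: [1, 3]
for _ in range(100):
    x = rng.standard_normal(2)
    q = (x @ A @ x) / (x @ x)
    assert lam[0] - 1e-12 <= q <= lam[1] + 1e-12

# The eigenvector (1, 1) for lambda_max = 3 attains the upper bound.
x = np.array([1.0, 1.0])
print((x @ A @ x) / (x @ x))
```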
Right inverse A⁺.
If A has full row rank m, then A⁺ = Aᵀ(AAᵀ)⁻¹ has AA⁺ = Im.
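The formula is a one-liner to verify. A minimal sketch, assuming NumPy, for a 2×3 matrix of full row rank:

```python
import numpy as np

# A has full row rank m = 2, so A_plus = A.T @ inv(A @ A.T)
# is a right inverse: A @ A_plus = I_m.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
A_plus = A.T @ np.linalg.inv(A @ A.T)
print(np.allclose(A @ A_plus, np.eye(2)))   # True
```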
Similar matrices A and B.
Every B = M⁻¹AM has the same eigenvalues as A.
Singular Value Decomposition (SVD).
A = UΣVᵀ = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(Aᵀ), with Avᵢ = σᵢuᵢ and singular value σᵢ > 0. The last columns are orthonormal bases of the nullspaces.
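Both the factorization and the relation Avᵢ = σᵢuᵢ can be checked with a library SVD. A minimal sketch, assuming NumPy (whose `svd` returns Vᵀ, so the rows of `Vt` are the vᵢ):

```python
import numpy as np

# A = U Sigma V^T with orthogonal U, V and singular values s >= 0;
# each A @ v_i equals s_i * u_i.
A = np.array([[3.0, 0.0], [4.0, 5.0]])
U, s, Vt = np.linalg.svd(A)
print(np.allclose(A, U @ np.diag(s) @ Vt))       # A = U Sigma V^T
print(np.allclose(A @ Vt[0], s[0] * U[:, 0]))    # A v_1 = sigma_1 u_1
```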
Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
Volume of box.
The rows (or the columns) of A generate a box with volume |det(A)|.
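In two dimensions the box is a parallelogram and the volume is its area. A minimal check, assuming NumPy:

```python
import numpy as np

# Rows (3, 0) and (1, 2) span a parallelogram of area |3*2 - 0*1| = 6,
# which is |det(A)|.
A = np.array([[3.0, 0.0], [1.0, 2.0]])
print(abs(np.linalg.det(A)))   # 6.0
```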