 Chapter 1.1: Some Basic Mathematical Models; Direction Fields
 Chapter 1.2: Solutions of Some Differential Equations
 Chapter 1.3: Classification of Differential Equations
 Chapter 2: First Order Differential Equations
 Chapter 2.1: Linear Equations; Method of Integrating Factors
 Chapter 2.2: Separable Equations
 Chapter 2.3: Modeling with First Order Equations
 Chapter 2.4: Differences Between Linear and Nonlinear Equations
 Chapter 2.5: Autonomous Equations and Population Dynamics
 Chapter 2.6: Exact Equations and Integrating Factors
 Chapter 2.7: Numerical Approximations: Euler's Method
 Chapter 2.8: The Existence and Uniqueness Theorem
 Chapter 2.9: First Order Difference Equations
 Chapter 3.1: Homogeneous Equations with Constant Coefficients
 Chapter 3.2: Solutions of Linear Homogeneous Equations; the Wronskian
 Chapter 3.3: Complex Roots of the Characteristic Equation
 Chapter 3.4: Repeated Roots; Reduction of Order
 Chapter 3.5: Nonhomogeneous Equations; Method of Undetermined Coefficients
 Chapter 3.6: Variation of Parameters
 Chapter 3.7: Mechanical and Electrical Vibrations
 Chapter 3.8: Forced Vibrations
 Chapter 4.1: General Theory of nth Order Linear Equations
 Chapter 4.2: Homogeneous Equations with Constant Coefficients
 Chapter 4.3: The Method of Undetermined Coefficients
 Chapter 4.4: The Method of Variation of Parameters
 Chapter 5.1: Review of Power Series
 Chapter 5.2: Series Solutions Near an Ordinary Point, Part I
 Chapter 5.3: Series Solutions Near an Ordinary Point, Part II
 Chapter 5.4: Euler Equations; Regular Singular Points
 Chapter 5.5: Series Solutions Near a Regular Singular Point, Part I
 Chapter 5.6: Series Solutions Near a Regular Singular Point, Part II
 Chapter 5.7: Bessel's Equation
 Chapter 6.1: Definition of the Laplace Transform
 Chapter 6.2: Solution of Initial Value Problems
 Chapter 6.3: Step Functions
 Chapter 6.4: Differential Equations with Discontinuous Forcing Functions
 Chapter 6.5: Impulse Functions
 Chapter 6.6: The Convolution Integral
 Chapter 7.1: Introduction
 Chapter 7.2: Review of Matrices
 Chapter 7.3: Systems of Linear Algebraic Equations; Linear Independence, Eigenvalues, Eigenvectors
 Chapter 7.4: Basic Theory of Systems of First Order Linear Equations
 Chapter 7.5: Homogeneous Linear Systems with Constant Coefficients
 Chapter 7.6: Complex Eigenvalues
 Chapter 7.7: Fundamental Matrices
 Chapter 7.8: Repeated Eigenvalues
 Chapter 7.9: Nonhomogeneous Linear Systems
 Chapter 8.1: The Euler or Tangent Line Method
 Chapter 8.2: Improvements on the Euler Method
 Chapter 8.3: The Runge-Kutta Method
 Chapter 8.4: Multistep Methods
 Chapter 8.5: Systems of First Order Equations
 Chapter 8.6: More on Errors; Stability
 Chapter 9.1: The Phase Plane: Linear Systems
 Chapter 9.2: Autonomous Systems and Stability
 Chapter 9.3: Locally Linear Systems
 Chapter 9.4: Competing Species
 Chapter 9.5: Predator-Prey Equations
 Chapter 9.6: Liapunov's Second Method
 Chapter 9.7: Periodic Solutions and Limit Cycles
 Chapter 9.8: Chaos and Strange Attractors: The Lorenz Equations
Elementary Differential Equations, 10th Edition: Solutions by Chapter
Full solutions for Elementary Differential Equations, 10th Edition
ISBN: 9780470458327
The full step-by-step solutions to the problems in Elementary Differential Equations were answered by our top Math solution expert on 03/13/18, 08:19PM. This textbook survival guide covers 61 chapters, and more than 8781 students have viewed full step-by-step answers. Elementary Differential Equations is associated with ISBN 9780470458327. This survival guide was created for the textbook Elementary Differential Equations, edition 10.

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.

Back substitution.
Upper triangular systems are solved in reverse order, x_n back to x_1.
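As a concrete illustration, back substitution can be sketched in a few lines of Python. The function name and the example system are illustrative, not from the text:

```python
def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, working from x_n back to x_1."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the terms U[i][j] * x[j] already known from later rows
        known = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - known) / U[i][i]
    return x

# 2x + y = 5 and 3y = 6: the last equation gives y = 2, then x = 1.5
print(back_substitute([[2.0, 1.0], [0.0, 3.0]], [5.0, 6.0]))
```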

Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_ii.

Dimension of vector space
dim(V) = number of vectors in any basis for V.

Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra −ℓ_ij in the (i, j) entry (i ≠ j). Then E_ij A subtracts ℓ_ij times row j of A from row i.
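A minimal sketch of this action (helper names are illustrative): the off-diagonal entry is stored with a minus sign so that multiplying by E subtracts a multiple of one row from another.

```python
def elimination_matrix(n, i, j, mult):
    """n x n identity with -mult in entry (i, j): E A subtracts mult times row j from row i."""
    E = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    E[i][j] = -mult
    return E

def matmul(A, B):
    """Plain (row i of A) . (column j of B) matrix multiplication."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2.0, 1.0], [4.0, 5.0]]
E = elimination_matrix(2, 1, 0, 2.0)   # subtract 2 * row 0 from row 1
EA = matmul(E, A)                      # rows become [2, 1] and [0, 3]
```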

Factorization
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.
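The factorization can be sketched directly from elimination, assuming no row exchanges are needed (function name is illustrative):

```python
def lu_factor(A):
    """Elimination without row exchanges: returns L (unit diagonal, multipliers
    l_ij below it) and upper triangular U with A = L U."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            L[i][j] = U[i][j] / U[j][j]          # multiplier l_ij
            for k in range(j, n):
                U[i][k] -= L[i][j] * U[j][k]     # row i minus l_ij times row j
    return L, U

L, U = lu_factor([[2.0, 1.0], [4.0, 5.0]])
# L = [[1, 0], [2, 1]] and U = [[2, 1], [0, 3]]; multiplying L times U recovers A
```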

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
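A sketch of the classical Gram-Schmidt version of this factorization, working column by column (names are illustrative; columns are stored as Python lists):

```python
import math

def gram_schmidt_qr(a_cols):
    """Classical Gram-Schmidt on the independent columns of A.
    Returns orthonormal columns q_1..q_n and upper triangular R with A = QR."""
    q_cols, n = [], len(a_cols)
    R = [[0.0] * n for _ in range(n)]
    for j, a in enumerate(a_cols):
        v = a[:]
        for i, q in enumerate(q_cols):
            R[i][j] = sum(qk * ak for qk, ak in zip(q, a))   # component of a along q_i
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]  # subtract that component
        R[j][j] = math.sqrt(sum(vk * vk for vk in v))        # convention: diag(R) > 0
        q_cols.append([vk / R[j][j] for vk in v])
    return q_cols, R

Q, R = gram_schmidt_qr([[1.0, 0.0], [1.0, 1.0]])
```

Because each q_j only ever subtracts components along earlier q_i, R comes out upper triangular automatically.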

Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.

Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and −).

Independent vectors v_1, ..., v_k.
No combination c_1 v_1 + ... + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.

Least squares solution x̂.
The vector x̂ that minimizes the error ‖e‖² solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
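A small sketch of solving the normal equations for a matrix with two columns, e.g. fitting a best line c + dt. The 2×2 solve uses Cramer's rule for brevity; function and variable names are illustrative:

```python
def least_squares_2col(A, b):
    """Solve the normal equations A^T A x = A^T b for a matrix A with 2 columns."""
    m = len(A)
    # form A^T A (2 x 2) and A^T b (2-vector)
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)] for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    # Cramer's rule on the 2 x 2 system
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return [(atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det,
            (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det]

# best line through (0, 1), (1, 2), (2, 4): columns are ones and the t-values
x_hat = least_squares_2col([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]], [1.0, 2.0, 4.0])
```

The residual e = b − A x̂ for this example sums to zero and is orthogonal to the t-column, as the definition requires.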

Matrix multiplication AB.
The (i, j) entry of AB is (row i of A)·(column j of B) = Σ_k a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that (AB)x equals A(Bx).
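Two of these equivalent definitions can be checked against each other in a short sketch (function names are illustrative):

```python
def matmul_by_entries(A, B):
    """Entry (i, j) of AB = (row i of A) . (column j of B) = sum_k a_ik * b_kj."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matmul_cols_times_rows(A, B):
    """AB built as the sum over k of the rank-one pieces (column k of A)(row k of B)."""
    m, n, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(m)]
    for k in range(n):
        for i in range(m):
            for j in range(p):
                C[i][j] += A[i][k] * B[k][j]   # contribution of rank-one piece k
    return C

A, B = [[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]
# both definitions produce the same product matrix
```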

Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.

Nullspace matrix N.
The columns of N are the n − r special solutions to As = 0.

Outer product uv^T
= column times row = rank one matrix.

Pascal matrix
P_S = pascal(n) = the symmetric matrix with binomial entries C(i + j − 2, i − 1). P_S = P_L P_U; all contain Pascal's triangle with det = 1 (see Pascal in the index).

Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.

Spanning set.
Combinations of v_1, ..., v_m fill the space. The columns of A span C(A)!

Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.

Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c_0 + ... + c_(n−1) x^(n−1) with p(x_i) = b_i. V_ij = (x_i)^(j−1) and det V = product of (x_k − x_i) for k > i.
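A minimal sketch of this definition (names are illustrative): building V from the points x_i and checking that Vc evaluates the polynomial at each point.

```python
def vandermonde(xs):
    """V_ij = x_i ** (j-1): row i evaluates the monomials 1, x, ..., x^(n-1) at x_i."""
    n = len(xs)
    return [[x ** j for j in range(n)] for x in xs]

def polyval(c, x):
    """p(x) = c_0 + c_1 x + ... + c_(n-1) x^(n-1)."""
    return sum(ck * x ** k for k, ck in enumerate(c))

xs, c = [0.0, 1.0, 2.0], [1.0, 2.0, 3.0]
V = vandermonde(xs)
b = [sum(vj * cj for vj, cj in zip(row, c)) for row in V]   # b = Vc
# each b_i equals p(x_i), so solving Vc = b recovers the interpolating coefficients
```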