 Chapter 1.1: Some Basic Mathematical Models; Direction Fields
 Chapter 1.2: Solutions of Some Differential Equations
 Chapter 1.3: Classification of Differential Equations
 Chapter 10.1: Two-Point Boundary Value Problems
 Chapter 10.2: Fourier Series
 Chapter 10.3: The Fourier Convergence Theorem
 Chapter 10.4: Even and Odd Functions
 Chapter 10.5: Separation of Variables; Heat Conduction in a Rod
 Chapter 10.6: Other Heat Conduction Problems
 Chapter 10.7: The Wave Equation: Vibrations of an Elastic String
 Chapter 10.8: Laplace's Equation
 Chapter 11.1: The Occurrence of Two-Point Boundary Value Problems
 Chapter 11.2: Sturm-Liouville Boundary Value Problems
 Chapter 11.3: Nonhomogeneous Boundary Value Problems
 Chapter 11.4: Singular Sturm-Liouville Problems
 Chapter 11.5: Further Remarks on the Method of Separation of Variables: A Bessel Series Expansion
 Chapter 11.6: Series of Orthogonal Functions: Mean Convergence
 Chapter 2: First-Order Differential Equations
 Chapter 2.1: Linear Differential Equations; Method of Integrating Factors
 Chapter 2.2: Separable Differential Equations
 Chapter 2.3: Modeling with First-Order Differential Equations
 Chapter 2.4: Differences Between Linear and Nonlinear Differential Equations
 Chapter 2.5: Autonomous Differential Equations and Population Dynamics
 Chapter 2.6: Exact Differential Equations and Integrating Factors
 Chapter 2.7: Numerical Approximations: Euler's Method
 Chapter 2.8: The Existence and Uniqueness Theorem
 Chapter 2.9: First-Order Difference Equations
 Chapter 3.1: Homogeneous Differential Equations with Constant Coefficients
 Chapter 3.2: Solutions of Linear Homogeneous Equations; the Wronskian
 Chapter 3.3: Complex Roots of the Characteristic Equation
 Chapter 3.4: Repeated Roots; Reduction of Order
 Chapter 3.5: Nonhomogeneous Equations; Method of Undetermined Coefficients
 Chapter 3.6: Variation of Parameters
 Chapter 3.7: Mechanical and Electrical Vibrations
 Chapter 3.8: Forced Periodic Vibrations
 Chapter 4.1: General Theory of nth Order
 Chapter 4.2: Homogeneous Differential Equations with Constant Coefficients
 Chapter 4.3: The Method of Undetermined Coefficients
 Chapter 4.4: The Method of Variation of Parameters
 Chapter 5.1: Review of Power Series
 Chapter 5.2: Series Solutions Near an Ordinary Point, Part I
 Chapter 5.3: Series Solutions Near an Ordinary Point, Part II
 Chapter 5.4: Euler Equations; Regular Singular Points
 Chapter 5.5: Series Solutions Near a Regular Singular Point, Part I
 Chapter 5.6: Series Solutions Near a Regular Singular Point, Part II
 Chapter 5.7: Bessel's Equation
 Chapter 6.1: Definition of the Laplace Transform
 Chapter 6.2: Solution of Initial Value Problems
 Chapter 6.3: Step Functions
 Chapter 6.4: Differential Equations with Discontinuous Forcing Functions
 Chapter 6.5: Impulse Functions
 Chapter 6.6: The Convolution Integral
 Chapter 7.1: Introduction
 Chapter 7.2: Matrices
 Chapter 7.3: Systems of Linear Algebraic Equations; Linear Independence, Eigenvalues, Eigenvectors
 Chapter 7.4: Basic Theory of Systems of First-Order Linear Equations
 Chapter 7.5: Homogeneous Linear Systems with Constant Coefficients
 Chapter 7.6: Complex-Valued Eigenvalues
 Chapter 7.7: Fundamental Matrices
 Chapter 7.8: Repeated Eigenvalues
 Chapter 7.9: Nonhomogeneous Linear Systems
 Chapter 8.1: The Euler or Tangent Line Method
 Chapter 8.2: Improvements on the Euler Method
 Chapter 8.3: The Runge-Kutta Method
 Chapter 8.4: Multistep Methods
 Chapter 8.5: Systems of First-Order Equations
 Chapter 8.6: More on Errors; Stability
 Chapter 9.1: The Phase Plane: Linear Systems
 Chapter 9.2: Autonomous Systems and Stability
 Chapter 9.3: Locally Linear Systems
 Chapter 9.4: Competing Species
 Chapter 9.5: Predator-Prey Equations
 Chapter 9.6: Liapunov's Second Method
 Chapter 9.7: Periodic Solutions and Limit Cycles
 Chapter 9.8: Chaos and Strange Attractors: The Lorenz Equations
Elementary Differential Equations and Boundary Value Problems, 11th Edition: Solutions by Chapter
Full solutions for Elementary Differential Equations and Boundary Value Problems, 11th Edition
ISBN: 9781119256007
This textbook survival guide covers all 75 chapters of Elementary Differential Equations and Boundary Value Problems, 11th edition (ISBN: 9781119256007). The full step-by-step solutions were answered on 03/13/18, 08:17 PM, and more than 29,566 students have viewed step-by-step answers since.

Column space C(A).
The space of all combinations of the columns of A.

Covariance matrix Σ.
When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x − x̄)(x − x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.
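As a quick numerical illustration (not part of the original glossary; NumPy and the randomly generated data are my own choices), the covariance matrix built directly as the mean of (x − x̄)(x − x̄)^T agrees with np.cov and is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 1000))              # 3 random variables, 1000 samples each
xbar = X.mean(axis=1, keepdims=True)            # the means x̄_i
Sigma = (X - xbar) @ (X - xbar).T / X.shape[1]  # mean of (x − x̄)(x − x̄)^T
assert np.allclose(Sigma, np.cov(X, bias=True))          # matches NumPy's covariance
assert np.all(np.linalg.eigvalsh(Sigma) >= -1e-12)       # positive semidefinite
```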

Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓ_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
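A minimal sketch of elimination in NumPy (assuming nonzero pivots so no row exchanges are needed; the 3 by 3 matrix is an arbitrary example), recording the multipliers in L and verifying A = LU:

```python
import numpy as np

def lu_no_pivot(A):
    """Row-reduce A to upper triangular U, storing multipliers in L (A = L U)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # multiplier l_ik (assumes pivot != 0)
            U[i, :] -= L[i, k] * U[k, :]  # subtract l_ik times the pivot row
    return L, U

A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)         # elimination recovers A = L U
assert np.allclose(U, np.triu(U))    # U is upper triangular
```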

Free variable x_i.
Column i has no pivot in elimination. We can give the n − r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicularity comes from Ax = 0), with dimensions n − r and r. Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.
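A short numerical check of the orthogonality (illustrative only; the rank-1 matrix is an arbitrary example, and the SVD is used here just to extract bases for the row space and nullspace):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])       # rank r = 1, so dim N(A) = n - r = 2
_, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))         # numerical rank
row_space = Vt[:r]                 # orthonormal basis for C(A^T)
null_space = Vt[r:]                # orthonormal basis for N(A)
assert r == 1 and null_space.shape[0] == A.shape[1] - r
assert np.allclose(A @ null_space.T, 0)          # nullspace vectors solve Ax = 0
assert np.allclose(row_space @ null_space.T, 0)  # row space is perpendicular to nullspace
```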

Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries −1 and 1 in columns i and j.

Kronecker product (tensor product) A ⊗ B.
Blocks a_ij B; eigenvalues λ_p(A)λ_q(B).
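A quick check of the eigenvalue rule with np.kron (the 2 by 2 matrices are arbitrary examples chosen so every eigenvalue is real):

```python
import numpy as np

A = np.array([[2., 0.], [0., 3.]])   # eigenvalues 2, 3
B = np.array([[1., 1.], [0., 4.]])   # triangular: eigenvalues 1, 4
K = np.kron(A, B)                    # 4 x 4 matrix of blocks a_ij * B
lam = np.sort(np.linalg.eigvals(K).real)
products = np.sort([p * q for p in np.linalg.eigvals(A).real
                          for q in np.linalg.eigvals(B).real])
assert np.allclose(lam, products)    # eigenvalues of the Kronecker product are the products
```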

Least squares solution x̂.
The vector x̂ that minimizes the error ‖e‖² solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
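To illustrate with the normal equations (the 3 by 2 system is an arbitrary overdetermined example):

```python
import numpy as np

A = np.array([[1., 0.], [1., 1.], [1., 2.]])  # full column rank, 3 equations, 2 unknowns
b = np.array([6., 0., 0.])
x_hat = np.linalg.solve(A.T @ A, A.T @ b)     # solve the normal equations A^T A x = A^T b
e = b - A @ x_hat                             # the least-squares error
assert np.allclose(x_hat, [5., -3.])
assert np.allclose(A.T @ e, 0)                # e is orthogonal to every column of A
```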

Left nullspace N(A^T).
Nullspace of A^T = "left nullspace" of A, because y^T A = 0^T.

Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.

Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.

Lucas numbers L_n.
The numbers L_n = 2, 1, 3, 4, ... satisfy L_n = L_{n−1} + L_{n−2} = λ_1^n + λ_2^n, with λ_1, λ_2 = (1 ± √5)/2 from the Fibonacci matrix [1 1; 1 0]. Compare L_0 = 2 with F_0 = 0.
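The recurrence and the closed form can be checked side by side (a small sketch; the cutoff at n = 10 is arbitrary):

```python
import numpy as np

lam1, lam2 = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2  # roots of x^2 = x + 1

L = [2, 1]                         # L_0 = 2, L_1 = 1
for n in range(2, 10):
    L.append(L[-1] + L[-2])        # recurrence L_n = L_{n-1} + L_{n-2}

closed = [lam1**n + lam2**n for n in range(10)]
assert np.allclose(L, closed)      # closed form L_n = lam1^n + lam2^n
assert L[:6] == [2, 1, 3, 4, 7, 11]
```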

Outer product uv^T.
Column times row = rank one matrix.

Positive definite matrix A.
A symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
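A numerical check on a standard example (the −1, 2, −1 second-difference matrix; the test vector is arbitrary):

```python
import numpy as np

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])               # symmetric second-difference matrix
assert np.all(np.linalg.eigvalsh(A) > 0)      # all eigenvalues are positive
np.linalg.cholesky(A)                         # succeeds only when A is positive definite
x = np.array([1., 1., 1.])
assert x @ A @ x > 0                          # x^T A x > 0 for this nonzero x
```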

Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
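SymPy computes R and the pivot columns directly (an illustrative example matrix; exact rational arithmetic avoids round-off):

```python
from sympy import Matrix

A = Matrix([[1, 2, 2, 4],
            [1, 2, 3, 5]])
R, pivot_cols = A.rref()   # reduced row echelon form and pivot column indices
assert pivot_cols == (0, 2)
assert R == Matrix([[1, 2, 0, 2],
                    [0, 0, 1, 1]])   # the 2 nonzero rows span the row space of A
```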

Right inverse A^+.
If A has full row rank m, then A^+ = A^T(AA^T)^{−1} has AA^+ = I_m.
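The formula can be verified numerically (the 2 by 3 matrix is an arbitrary full-row-rank example; for such matrices the right inverse coincides with NumPy's pseudoinverse):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])                   # full row rank, m = 2
A_plus = A.T @ np.linalg.inv(A @ A.T)          # right inverse A^+ = A^T (A A^T)^{-1}
assert np.allclose(A @ A_plus, np.eye(2))      # A A^+ = I_m
assert np.allclose(A_plus, np.linalg.pinv(A))  # agrees with the pseudoinverse here
```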

Row picture of Ax = b.
Each equation gives a plane in R^n; the planes intersect at x.

Spectrum of A = the set of eigenvalues {λ_1, ..., λ_n}.
Spectral radius = max of |λ_i|.
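A one-line computation in NumPy (the matrix is an arbitrary example with eigenvalues −1 and −2):

```python
import numpy as np

A = np.array([[0., 1.], [-2., -3.]])  # characteristic polynomial: x^2 + 3x + 2
eigenvalues = np.linalg.eigvals(A)    # the spectrum of A
rho = np.max(np.abs(eigenvalues))     # spectral radius = max |lambda_i|
assert np.allclose(np.sort(eigenvalues.real), [-2., -1.])
assert np.isclose(rho, 2.0)
```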

Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^{−1} is also symmetric (when A is invertible).

Vector space V.
Set of vectors such that all combinations cv + d w remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.