 Chapter 1: First-order differential equations
 Chapter 1.2: First-order linear differential equations
 Chapter 1.3: The Van Meegeren art forgeries
 Chapter 1.4: Separable equations
 Chapter 1.5: Population models
 Chapter 1.6: The spread of technological innovations
 Chapter 1.7: An atomic waste disposal problem
 Chapter 1.8: The dynamics of tumor growth, mixing problems and orthogonal trajectories
 Chapter 1.9: Exact equations, and why we cannot solve very many differential equations
 Chapter 1.10: The existence-uniqueness theorem; Picard iteration
 Chapter 1.11: Finding roots of equations by iteration
 Chapter 1.12: Difference equations, and how to compute the interest due on your student loans
 Chapter 1.13: Numerical approximations; Euler's method
 Chapter 1.14: The three-term Taylor series method
 Chapter 1.15: An improved Euler method
 Chapter 1.16: The Runge-Kutta method
 Chapter 1.17: What to do in practice
 Chapter 2: Second-order linear differential equations
 Chapter 2.1: Algebraic properties of solutions
 Chapter 2.2: Linear equations with constant coefficients
 Chapter 2.3: The nonhomogeneous equation
 Chapter 2.4: The method of variation of parameters
 Chapter 2.5: The method of judicious guessing
 Chapter 2.6: Mechanical vibrations
 Chapter 2.7: A model for the detection of diabetes
 Chapter 2.8: Series solutions
 Chapter 2.9: The method of Laplace transforms
 Chapter 2.10: Some useful properties of Laplace transforms
 Chapter 2.11: Differential equations with discontinuous right-hand sides
 Chapter 2.12: The Dirac delta function
 Chapter 2.13: The convolution integral
 Chapter 2.14: The method of elimination for systems
 Chapter 2.15: Higher-order equations
 Chapter 3.1: Algebraic properties of solutions of linear systems
 Chapter 3.2: Vector spaces
 Chapter 3.3: Dimension of a vector space
 Chapter 3.4: Applications of linear algebra to differential equations
 Chapter 3.5: The theory of determinants
 Chapter 3.6: Solutions of simultaneous linear equations
 Chapter 3.7: Linear transformations
 Chapter 3.8: The eigenvalue-eigenvector method of finding solutions
 Chapter 3.9: Complex roots
 Chapter 3.10: Equal roots
 Chapter 3.11: Fundamental matrix solutions; e^At
 Chapter 3.12: The nonhomogeneous equation; variation of parameters
 Chapter 3.13: Solving systems by Laplace transforms
 Chapter 4.1: Introduction
 Chapter 4.2: Stability of linear systems
 Chapter 4.3: Stability of equilibrium solutions
 Chapter 4.4: The phase-plane
 Chapter 4.5: Mathematical theories of war
 Chapter 4.6: Qualitative properties of orbits
 Chapter 4.7: Phase portraits of linear systems
 Chapter 4.8: Long time behavior of solutions; the Poincaré-Bendixson Theorem
 Chapter 4.9: Introduction to bifurcation theory
 Chapter 4.11: The principle of competitive exclusion in population biology
 Chapter 4.12: The Threshold Theorem of epidemiology
 Chapter 4.13: A model for the spread of gonorrhea
 Chapter 5.1: Two-point boundary-value problems
 Chapter 5.3: Introduction to partial differential equations
 Chapter 5.4: Fourier series
 Chapter 5.5: Even and odd functions
 Chapter 5.6: Return to the heat equation
 Chapter 5.7: The wave equation
 Chapter 5.8: Laplace's equation
Differential Equations and Their Applications: An Introduction to Applied Mathematics, 3rd Edition - Solutions by Chapter
Full solutions for Differential Equations and Their Applications: An Introduction to Applied Mathematics, 3rd Edition
ISBN: 9780387908069
This survival guide covers all 65 chapters of Differential Equations and Their Applications: An Introduction to Applied Mathematics, 3rd Edition (ISBN 9780387908069), with full step-by-step solutions completed on 03/13/18.

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
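As a quick numerical sketch of this rank test (assuming NumPy is available; not part of the original glossary):

```python
import numpy as np

# Ax = b is solvable exactly when rank([A b]) == rank(A).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1: row 2 is twice row 1
b_in  = np.array([[3.0], [6.0]])    # 3 * (column 1), so b is in C(A)
b_out = np.array([[3.0], [7.0]])    # not a combination of the columns

rank_A = np.linalg.matrix_rank(A)
solvable   = np.linalg.matrix_rank(np.hstack([A, b_in]))  == rank_A
unsolvable = np.linalg.matrix_rank(np.hstack([A, b_out])) != rank_A
```

Appending a compatible b leaves the rank at 1; an incompatible b raises it to 2.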

Cayley-Hamilton Theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.
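For a 2 by 2 matrix the characteristic polynomial is p(λ) = λ² − tr(A)λ + det(A), so the theorem can be checked directly (a NumPy sketch, not from the text):

```python
import numpy as np

# Cayley-Hamilton for 2x2: p(A) = A^2 - tr(A) A + det(A) I = zero matrix.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
pA = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
```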

Cholesky factorization
A = CᵀC = (L√D)(L√D)ᵀ for positive definite A.
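NumPy's `cholesky` returns the lower-triangular factor L with A = LLᵀ, i.e. C = Lᵀ in the convention above (a sketch assuming NumPy is available):

```python
import numpy as np

# Cholesky factor of a symmetric positive definite matrix: A = L L^T.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(A)      # lower triangular
reconstructed = L @ L.T
```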

Column space C(A).
Space of all combinations of the columns of A.

Complex conjugate
z̄ = a − ib for any complex number z = a + ib. Then zz̄ = |z|².
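Python's built-in complex type makes this identity easy to verify (a small illustrative check, not from the text):

```python
# z * conj(z) = |z|^2 for z = 3 + 4i.
z = 3 + 4j
product = z * z.conjugate()      # (3+4j)(3-4j) = 9 + 16 = 25
magnitude_squared = abs(z) ** 2  # |z|^2 = 5^2 = 25
```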

Condition number
cond(A) = c(A) = ‖A‖ ‖A⁻¹‖ = σ_max/σ_min. In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to change in the input.
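The singular-value ratio and NumPy's built-in `cond` agree, as a quick sketch (NumPy assumed available):

```python
import numpy as np

# 2-norm condition number as the ratio of extreme singular values.
A = np.array([[1.0, 0.0],
              [0.0, 0.01]])
sigmas = np.linalg.svd(A, compute_uv=False)
cond_from_svd = sigmas.max() / sigmas.min()   # 1 / 0.01 = 100
cond_direct = np.linalg.cond(A)               # default is the 2-norm
```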

Covariance matrix Σ.
When random variables xᵢ have mean = average value = 0, their covariances Σᵢⱼ are the averages of xᵢxⱼ. With means x̄ᵢ, the matrix Σ = mean of (x − x̄)(x − x̄)ᵀ is positive (semi)definite; Σ is diagonal if the xᵢ are independent.
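A sketch of the "mean of outer products" construction on synthetic data (NumPy assumed; note `np.cov` divides by n − 1 rather than n by default):

```python
import numpy as np

# Sample covariance: average of (x - xbar)(x - xbar)^T over observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))            # 500 samples of a 2-vector
centered = X - X.mean(axis=0)
Sigma = (centered.T @ centered) / len(X)     # mean of outer products

eigenvalues = np.linalg.eigvalsh(Sigma)      # PSD => all eigenvalues >= 0
```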

Full column rank r = n.
Independent columns, N(A) = {0}, no free variables.

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column qⱼ of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
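The classical algorithm can be sketched in a few lines (NumPy assumed; this is an illustration, not a numerically robust QR routine):

```python
import numpy as np

# Classical Gram-Schmidt: A = Q R with orthonormal Q, upper-triangular R,
# and diag(R) > 0 by construction.
def gram_schmidt(A):
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # coefficient of q_i in column j
            v -= R[i, j] * Q[:, i]        # subtract the projection
        R[j, j] = np.linalg.norm(v)       # positive diagonal
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])
Q, R = gram_schmidt(A)
```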

Independent vectors v₁, ..., vₖ.
No combination c₁v₁ + ⋯ + cₖvₖ = zero vector unless all cᵢ = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.

Determinant |A| = det(A).
|A⁻¹| = 1/|A| and |Aᵀ| = |A|. The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n − 1, volume of box = |det(A)|.
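The two determinant identities check out numerically (a NumPy sketch, not from the text):

```python
import numpy as np

# |A^-1| = 1/|A| and |A^T| = |A| for an invertible A.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
dA = np.linalg.det(A)                      # 2*3 - 1*1 = 5
det_inv = np.linalg.det(np.linalg.inv(A))  # should equal 1/5
det_T = np.linalg.det(A.T)                 # should equal 5
```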

Linearly dependent v₁, ..., vₙ.
A combination other than all cᵢ = 0 gives Σ cᵢvᵢ = 0.

Matrix multiplication AB.
The i, j entry of AB is (row i of A)·(column j of B) = Σₖ aᵢₖbₖⱼ. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
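The equivalent views of AB can be compared directly (a NumPy sketch, not from the text):

```python
import numpy as np

# Three of the equivalent definitions of AB, checked against A @ B.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Entry formula: (AB)_ij = sum_k a_ik b_kj.
by_entries = np.array([[sum(A[i, k] * B[k, j] for k in range(2))
                        for j in range(2)] for i in range(2)])
# By columns: column j of AB = A times column j of B.
by_columns = np.column_stack([A @ B[:, j] for j in range(2)])
# Columns times rows: sum of outer products (column k of A)(row k of B).
by_outer = sum(np.outer(A[:, k], B[k, :]) for k in range(2))

product = A @ B
```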

Nullspace N(A)
= All solutions to Ax = 0. Dimension n − r = (# columns) − rank.
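One standard numerical route to a nullspace basis uses the SVD: the right singular vectors belonging to zero singular values span N(A) (a NumPy sketch, not from the text):

```python
import numpy as np

# Nullspace basis from the SVD; dimension is n - rank.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # rank 1, n = 3
u, s, vh = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = vh[rank:].T                 # n x (n - rank), orthonormal columns
```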

Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
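The polar factors follow from the SVD: if A = UΣVᵀ then Q = UVᵀ and H = VΣVᵀ (a NumPy sketch, not from the text):

```python
import numpy as np

# Polar decomposition A = Q H via the SVD.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                      # orthogonal
H = Vt.T @ np.diag(s) @ Vt      # symmetric positive (semi)definite
```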

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Special solutions to As = 0.
One free variable is sᵢ = 1, other free variables = 0.

Standard basis for Rⁿ.
Columns of the n by n identity matrix (written i, j, k in R³).

Symmetric factorizations A = LDLᵀ and A = QΛQᵀ.
Signs in Λ = signs in D.
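The eigenvalue factorization A = QΛQᵀ of a symmetric matrix is what `numpy.linalg.eigh` computes (a sketch assuming NumPy; here both eigenvalues are positive, matching the positive pivots of a positive definite A):

```python
import numpy as np

# A = Q Lambda Q^T for symmetric A, via eigh (eigenvalues in ascending order).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, Q = np.linalg.eigh(A)               # eigenvalues 1 and 3, orthonormal Q
reconstructed = Q @ np.diag(lam) @ Q.T
```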

Vector addition.
v + w = (v₁ + w₁, ..., vₙ + wₙ) = diagonal of parallelogram.