Differential Equations and Their Applications: An Introduction to Applied Mathematics, 3rd Edition - Solutions by Chapter

- Chapter 1: First-order differential equations
- Chapter 1.2: First-order linear differential equations
- Chapter 1.3: The Van Meegeren art forgeries
- Chapter 1.4: Separable equations
- Chapter 1.5: Population models
- Chapter 1.6: The spread of technological innovations
- Chapter 1.7: An atomic waste disposal problem
- Chapter 1.8: The dynamics of tumor growth, mixing problems and orthogonal trajectories
- Chapter 1.9: Exact equations, and why we cannot solve very many differential equations
- Chapter 1.10: The existence-uniqueness theorem; Picard iteration
- Chapter 1.11: Finding roots of equations by iteration
- Chapter 1.12: Difference equations, and how to compute the interest due on your student loans
- Chapter 1.13: Numerical approximations; Euler's method
- Chapter 1.14: The three term Taylor series method
- Chapter 1.15: An improved Euler method
- Chapter 1.16: The Runge-Kutta method
- Chapter 1.17: What to do in practice
- Chapter 2: Second-order linear differential equations
- Chapter 2.1: Algebraic properties of solutions
- Chapter 2.2: Linear equations with constant coefficients
- Chapter 2.3: The nonhomogeneous equation
- Chapter 2.4: The method of variation of parameters
- Chapter 2.5: The method of judicious guessing
- Chapter 2.6: Mechanical vibrations
- Chapter 2.7: A model for the detection of diabetes
- Chapter 2.8: Series solutions
- Chapter 2.9: The method of Laplace transforms
- Chapter 2.10: Some useful properties of Laplace transforms
- Chapter 2.11: Differential equations with discontinuous right-hand sides
- Chapter 2.12: The Dirac delta function
- Chapter 2.13: The convolution integral
- Chapter 2.14: The method of elimination for systems
- Chapter 2.15: Higher-order equations
- Chapter 3: Systems of differential equations
- Chapter 3.1: Algebraic properties of solutions of linear systems
- Chapter 3.2: Vector spaces
- Chapter 3.3: Dimension of a vector space
- Chapter 3.4: Applications of linear algebra to differential equations
- Chapter 3.5: The theory of determinants
- Chapter 3.6: Solutions of simultaneous linear equations
- Chapter 3.7: Linear transformations
- Chapter 3.8: The eigenvalue-eigenvector method of finding solutions
- Chapter 3.9: Complex roots
- Chapter 3.10: Equal roots
- Chapter 3.11: Fundamental matrix solutions; e^{At}
- Chapter 3.12: The nonhomogeneous equation; variation of parameters
- Chapter 3.13: Solving systems by Laplace transforms
- Chapter 4: Qualitative theory of differential equations
- Chapter 4.1: Introduction
- Chapter 4.2: Stability of linear systems
- Chapter 4.3: Stability of equilibrium solutions
- Chapter 4.4: The phase-plane
- Chapter 4.5: Mathematical theories of war
- Chapter 4.6: Qualitative properties of orbits
- Chapter 4.7: Phase portraits of linear systems
- Chapter 4.8: Long time behavior of solutions; the Poincaré-Bendixson Theorem
- Chapter 4.9: Introduction to bifurcation theory
- Chapter 4.11: The principle of competitive exclusion in population biology
- Chapter 4.12: The Threshold Theorem of epidemiology
- Chapter 4.13: A model for the spread of gonorrhea
- Chapter 5: Separation of variables and Fourier series
- Chapter 5.1: Two point boundary-value problems
- Chapter 5.3: Introduction to partial differential equations
- Chapter 5.4: Fourier series
- Chapter 5.5: Even and odd functions
- Chapter 5.6: Return to the heat equation
- Chapter 5.7: The wave equation
- Chapter 5.8: Laplace's equation
Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = Aᵀ when edges go both ways (undirected).
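A quick way to see this definition in action is to build the matrix from an edge list. A minimal NumPy sketch, assuming 0-based node labels and a small made-up edge list:

```python
import numpy as np

# Made-up directed graph on 4 nodes: (i, j) means an edge from node i to node j.
edges = [(0, 1), (1, 2), (2, 0), (3, 1)]
n = 4

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1                      # a_ij = 1 exactly when edge i -> j exists

print(A)
print(np.array_equal(A, A.T))        # False: edges do not all go both ways

# Adding the reverse of every edge makes the graph undirected, so A = A^T.
U = ((A + A.T) > 0).astype(int)
print(np.array_equal(U, U.T))        # True
```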
Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1v1 + ... + cdvd. A vector space has many bases; each basis gives unique c's.
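Since the columns of an invertible matrix form a basis, the unique coordinates c come from solving a linear system. A small NumPy sketch, assuming two hand-picked bases for R²:

```python
import numpy as np

v = np.array([3.0, 5.0])

# Columns of each matrix are a basis for R^2 (independent vectors).
B1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])          # standard basis
B2 = np.array([[1.0, 1.0],
               [1.0, -1.0]])         # a different basis

# Unique coordinates: solve B c = v for each basis.
c1 = np.linalg.solve(B1, v)          # [3. 5.]
c2 = np.linalg.solve(B2, v)          # [4. -1.]
print(c1, c2)
print(B2 @ c2)                       # both reconstruct the same v
```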
Big formula for n by n determinants.
det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A, with rows in order 1, ..., n and the column order given by a permutation P. Each of the n! P's has a + or - sign.
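The big formula can be transcribed directly: loop over all n! permutations, attach the sign from the inversion count, and sum the signed products. A Python sketch (the 3 by 3 test matrix is made up for illustration):

```python
import numpy as np
from itertools import permutations

def det_big_formula(A):
    """det(A) as the sum of n! signed products, one entry per row and column."""
    n = A.shape[0]
    total = 0.0
    for P in permutations(range(n)):
        # Sign of P: +1 for an even number of inversions, -1 for odd.
        inversions = sum(P[i] > P[j] for i in range(n) for j in range(i + 1, n))
        sign = 1 if inversions % 2 == 0 else -1
        product = 1.0
        for i in range(n):
            product *= A[i, P[i]]    # one entry from row i, column P(i)
        total += sign * product
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_big_formula(A))            # 8.0
print(np.linalg.det(A))              # agrees up to roundoff
```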
Characteristic equation det(A - λI) = 0.
The n roots are the eigenvalues of A.
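Numerically, the roots of det(A - λI) = 0 can be compared with the eigenvalues a library returns. A NumPy sketch with a made-up 2 by 2 example (np.poly applied to a square matrix gives the coefficients of its characteristic polynomial):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

coeffs = np.poly(A)                   # coefficients of det(A - lambda*I)
print(coeffs)                         # [1. -7. 10.]: lambda^2 - 7*lambda + 10
print(np.sort(np.roots(coeffs)))      # roots [2. 5.] ...
print(np.sort(np.linalg.eigvals(A)))  # ... are exactly the eigenvalues of A
```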
Column space C(A) = space of all combinations of the columns of A.
Condition number cond(A) = c(A) = ‖A‖ ‖A⁻¹‖ = σmax/σmin.
In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to changes in the input.
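To see the sensitivity bound numerically, compare ‖δx‖/‖x‖ with cond(A)·‖δb‖/‖b‖ on a nearly singular matrix. A NumPy sketch with a made-up example (2-norm throughout):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])         # nearly singular, so badly conditioned

sigma = np.linalg.svd(A, compute_uv=False)
cond = sigma.max() / sigma.min()      # = ||A|| * ||A^-1|| in the 2-norm
print(cond, np.linalg.cond(A))        # both computations agree

b, db = np.array([2.0, 2.0]), np.array([0.0, 1e-4])
x = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x

print(np.linalg.norm(dx) / np.linalg.norm(x))           # actual relative change in x
print(cond * np.linalg.norm(db) / np.linalg.norm(b))    # the glossary's upper bound
```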
Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_ii.
Exponential e^{At} = I + At + (At)²/2! + ···
has derivative Ae^{At}; e^{At}u(0) solves u' = Au.
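The series can be summed term by term and checked against a library matrix exponential. A sketch using NumPy and scipy.linalg.expm, with a made-up rotation generator as A:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])           # made-up example: rotation generator
t = 1.0

# Partial sums of I + At + (At)^2/2! + ...
S, term = np.eye(2), np.eye(2)
for k in range(1, 25):
    term = term @ (A * t) / k         # term = (At)^k / k!
    S = S + term

print(S)
print(expm(A * t))                    # library value agrees: [[cos t, sin t], [-sin t, cos t]]

u0 = np.array([1.0, 0.0])
print(expm(A * t) @ u0)               # u(t) = e^{At} u(0) solves u' = Au
```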
Four Fundamental Subspaces C(A), N(A), C(Aᵀ), N(Aᵀ).
Use Aᴴ (the conjugate transpose) for complex A.
Matrix multiplication AB.
The i, j entry of AB is (row i of A)·(column j of B) = Σ_k a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k of A)(row k of B). All these equivalent definitions come from the rule that AB times x equals A times Bx.
Multiplication Ax = x1(column 1) + ... + xn(column n) = combination of columns.
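All four descriptions of AB, and Ax as a combination of columns, are easy to verify numerically. A NumPy sketch with made-up 2 by 2 matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Entries: (AB)_ij = (row i of A) . (column j of B).
by_entry = np.array([[A[i] @ B[:, j] for j in range(2)] for i in range(2)])

# By columns: column j of AB = A times column j of B.
by_col = np.column_stack([A @ B[:, j] for j in range(2)])

# Columns times rows: AB = sum of (column k of A)(row k of B).
by_outer = sum(np.outer(A[:, k], B[k]) for k in range(2))

print(np.allclose(by_entry, A @ B), np.allclose(by_col, A @ B), np.allclose(by_outer, A @ B))

# Ax as a combination of the columns of A.
x = np.array([10.0, 20.0])
print(x[0] * A[:, 0] + x[1] * A[:, 1], A @ x)   # same vector
```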
Network.
A directed graph that has constants c1, ..., cm associated with the edges.
Normal equation AᵀAx̂ = Aᵀb.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b - Ax̂) = 0.
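A line-fitting example shows the normal equation at work and checks the orthogonality statement. A NumPy sketch with made-up data points:

```python
import numpy as np

# Made-up data: fit b ~ c + d*t by least squares.
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 2.0, 4.0])
A = np.column_stack([np.ones_like(t), t])       # full rank n = 2

# Normal equation: A^T A x̂ = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)
print(np.linalg.lstsq(A, b, rcond=None)[0])     # library least squares agrees

# (columns of A) . (b - A x̂) = 0: the residual is orthogonal to C(A).
print(A.T @ (b - A @ x_hat))                    # ~ [0, 0]
```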
Row space C(Aᵀ) = all combinations of rows of A.
Column vectors by convention.
Schwarz inequality |v·w| ≤ ‖v‖ ‖w‖.
Then |vᵀAw|² ≤ (vᵀAv)(wᵀAw) for positive definite A.
Symmetric factorizations A = LDLᵀ and A = QΛQᵀ.
Signs in Λ = signs in D.
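The matching signs (Sylvester's law of inertia) can be checked numerically: factor A = LDLᵀ and compare the signs of D's block eigenvalues with the signs of Λ. A sketch using scipy.linalg.ldl on a made-up indefinite matrix:

```python
import numpy as np
from scipy.linalg import ldl

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 0.0, 3.0],
              [0.0, 3.0, 1.0]])      # made-up symmetric indefinite matrix

L, D, perm = ldl(A)                  # A = L D L^T; D may contain 2x2 blocks
print(np.allclose(L @ D @ L.T, A))   # factorization check

signs_D = np.sign(np.sort(np.linalg.eigvalsh(D)))
signs_Lambda = np.sign(np.sort(np.linalg.eigvalsh(A)))
print(signs_D, signs_Lambda)         # same pattern of +'s and -'s (law of inertia)
```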
Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
Trace of A = sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
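Both identities are one-liners to verify. A NumPy sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

print(np.trace(A), np.diag(A).sum())        # trace = sum of diagonal entries
print(np.sum(np.linalg.eigvals(A)).real)    # = sum of eigenvalues, up to roundoff
print(np.trace(A @ B), np.trace(B @ A))     # Tr AB = Tr BA, even though AB != BA
```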
Tridiagonal matrix T: t_ij = 0 if |i - j| > 1.
T⁻¹ has rank 1 above and below the diagonal.
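The rank-1 claim means every off-diagonal block of T⁻¹ has rank 1. A NumPy sketch using the second-difference matrix as a standard tridiagonal example:

```python
import numpy as np

n = 6
# Second-difference matrix: 2 on the diagonal, -1 on the two neighbors.
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Tinv = np.linalg.inv(T)

# "Rank 1 above and below the diagonal": every off-diagonal block of T^{-1}
# (first k rows against last n-k columns) has rank 1, for every split k.
for k in range(1, n):
    print(np.linalg.matrix_rank(Tinv[:k, k:]), end=" ")   # 1 1 1 1 1
```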
Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w00(2^j t - k).
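With the Haar function as w00 (an assumption; the entry leaves the mother wavelet unspecified), the stretch-and-shift rule reads directly as code. A NumPy sketch:

```python
import numpy as np

def w00(t):
    """Assumed mother wavelet: the Haar function, +1 on [0, 1/2), -1 on [1/2, 1)."""
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def w(j, k, t):
    """Stretch and shift the time axis: w_jk(t) = w00(2**j * t - k)."""
    return w00(2.0**j * np.asarray(t, dtype=float) - k)

t = np.linspace(0.0, 1.0, 8, endpoint=False)
print(w(1, 0, t))    # supported on [0, 1/2)
print(w(1, 1, t))    # supported on [1/2, 1)
```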