- Chapter 1: First-Order Differential Equations
- Chapter 1.1: First-Order Differential Equations
- Chapter 1.2: Integrals as General and Particular Solutions
- Chapter 1.3: Slope Fields and Solution Curves
- Chapter 1.4: Separable Equations and Applications
- Chapter 1.5: Linear First-Order Equations
- Chapter 1.6: Substitution Methods and Exact Equations
- Chapter 10.1: Sturm-Liouville Problems and Eigenfunction Expansions
- Chapter 10.2: Applications of Eigenfunction Series
- Chapter 10.3: Steady Periodic Solutions and Natural Frequencies
- Chapter 10.4: Cylindrical Coordinate Problems
- Chapter 2.1: Population Models
- Chapter 2.2: Equilibrium Solutions and Stability
- Chapter 2.3: Acceleration-Velocity Models
- Chapter 2.4: Numerical Approximation: Euler's Method
- Chapter 2.5: A Closer Look at the Euler Method
- Chapter 2.6: The Runge-Kutta Method
- Chapter 3.1: Introduction: Second-Order Linear Equations
- Chapter 3.2: General Solutions of Linear Equations
- Chapter 3.3: Homogeneous Equations with Constant Coefficients
- Chapter 3.4: Mechanical Vibrations
- Chapter 3.5: Nonhomogeneous Equations and Undetermined Coefficients
- Chapter 3.6: Forced Oscillations and Resonance
- Chapter 3.7: Electrical Circuits
- Chapter 3.8: Mathematical Models and Numerical Methods
- Chapter 4.1: First-Order Systems and Applications
- Chapter 4.2: The Method of Elimination
- Chapter 4.3: Numerical Methods for Systems
- Chapter 5.1: Matrices and Linear Systems
- Chapter 5.2: The Eigenvalue Method for Homogeneous Systems
- Chapter 5.3: A Gallery of Solution Curves of Linear Systems
- Chapter 5.4: Second-Order Systems and Mechanical Applications
- Chapter 5.5: Multiple Eigenvalue Solutions
- Chapter 5.6: Matrix Exponentials and Linear Systems
- Chapter 5.7: Nonhomogeneous Linear Systems
- Chapter 6.1: Stability and the Phase Plane
- Chapter 6.2: Linear and Almost Linear Systems
- Chapter 6.3: Ecological Models: Predators and Competitors
- Chapter 6.4: Nonlinear Mechanical Systems
- Chapter 7.1: Laplace Transforms and Inverse Transforms
- Chapter 7.2: Transformation of Initial Value Problems
- Chapter 7.3: Translation and Partial Fractions
- Chapter 7.4: Derivatives, Integrals, and Products of Transforms
- Chapter 7.5: Periodic and Piecewise Continuous Input Functions
- Chapter 7.6: Laplace Transform Methods
- Chapter 8.1: Introduction and Review of Power Series
- Chapter 8.2: Series Solutions Near Ordinary Points
- Chapter 8.3: Regular Singular Points
- Chapter 8.4: Method of Frobenius: The Exceptional Cases
- Chapter 8.5: Bessel's Equation
- Chapter 8.6: Applications of Bessel Functions
- Chapter 9.1: Periodic Functions and Trigonometric Series
- Chapter 9.2: General Fourier Series and Convergence
- Chapter 9.3: Fourier Sine and Cosine Series
- Chapter 9.4: Applications of Fourier Series
- Chapter 9.5: Heat Conduction and Separation of Variables
- Chapter 9.6: Vibrating Strings and the One-Dimensional Wave Equation
- Chapter 9.7: Fourier Series Methods and Partial Differential Equations
Differential Equations and Boundary Value Problems: Computing and Modeling 5th Edition - Solutions by Chapter
Big formula for n by n determinants.
det(A) is a sum of n! terms, one for each permutation P. Each term multiplies one entry from every row and every column of A: rows in order 1, …, n, columns in the order given by P. Each of the n! permutations P carries a + or − sign.
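The n!-term "big formula" can be sketched directly in Python; the function names `perm_sign` and `big_formula_det` are my own:

```python
from itertools import permutations

def perm_sign(p):
    """+1 for an even permutation, -1 for odd (count inversions)."""
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def big_formula_det(A):
    """Sum over all n! permutations P: sign(P) times the product of
    entries A[i][P[i]] -- one entry from each row and each column."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = perm_sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total
```

For n beyond about 10 the n! terms make this far slower than elimination; the formula is conceptual, not computational.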
Covariance matrix Σ.
When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x − x̄)(x − x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.
Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
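The two echelon conditions translate into a short check; `is_echelon` is an illustrative name of my own:

```python
def is_echelon(U):
    """Check the two echelon conditions: each pivot (first nonzero in its
    row) lies in a strictly later column than the pivot above it, and
    all-zero rows come last."""
    last_pivot_col = -1
    seen_zero_row = False
    for row in U:
        nonzero_cols = [j for j, x in enumerate(row) if x != 0]
        if not nonzero_cols:
            seen_zero_row = True
            continue
        if seen_zero_row:                 # a nonzero row after a zero row
            return False
        if nonzero_cols[0] <= last_pivot_col:   # pivot not in a later column
            return False
        last_pivot_col = nonzero_cols[0]
    return True
```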
Elimination matrix = Elementary matrix E_ij.
The identity matrix with an extra −ℓ_ij in the i, j entry (i ≠ j). Then E_ij A subtracts ℓ_ij times row j of A from row i.
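Building E_ij and multiplying confirms the row operation; the helper names here (`identity`, `matmul`, `elimination_matrix`) are my own:

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def elimination_matrix(n, i, j, l):
    """Identity with -l placed in entry (i, j); multiplying E @ A
    subtracts l times row j of A from row i."""
    E = identity(n)
    E[i][j] = -l
    return E
```

For example, `elimination_matrix(2, 1, 0, 3)` applied to `[[1, 2], [3, 4]]` subtracts 3 times row 0 from row 1, producing the pivot pattern `[[1, 2], [0, -2]]`.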
Fibonacci numbers.
0, 1, 1, 2, 3, 5, … satisfy F_n = F_{n-1} + F_{n-2} = (λ_1^n − λ_2^n)/(λ_1 − λ_2). Growth rate λ_1 = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [[1, 1], [1, 0]].
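The recurrence and the eigenvalue (Binet) formula can be checked against each other; function names are my own:

```python
from math import sqrt

def fib_recurrence(n):
    """F_0 = 0, F_1 = 1, then F_n = F_{n-1} + F_{n-2}."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    """(lam1^n - lam2^n)/(lam1 - lam2), where lam1 and lam2 are the
    eigenvalues of the Fibonacci matrix [[1, 1], [1, 0]]."""
    lam1 = (1 + sqrt(5)) / 2
    lam2 = (1 - sqrt(5)) / 2
    return round((lam1 ** n - lam2 ** n) / (lam1 - lam2))
```

Since |λ_2| < 1, the λ_2^n term dies out and F_n grows like λ_1^n, which is the stated growth rate.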
Free variable x_i.
Column i has no pivot in elimination. We can give the n − r free variables any values; then Ax = b determines the r pivot variables (if solvable!).
Independent vectors v_1, …, v_k.
No combination c_1 v_1 + ⋯ + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: potential differences (voltage drops) add to zero around any closed loop.
Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.
Markov matrix M.
All m_ij > 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady-state eigenvector s, which satisfies Ms = s > 0.
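The column convergence is easy to watch numerically; the particular matrix M and the `matmul` helper below are my own choices:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[0.8, 0.3],
     [0.2, 0.7]]          # m_ij > 0, each column sums to 1

P = M
for _ in range(60):       # P becomes M^61
    P = matmul(P, M)
# Both columns of M^k approach the steady state s = (0.6, 0.4),
# the eigenvector with Ms = s (eigenvalue 1).
```

Convergence is geometric at the rate of the second eigenvalue (here trace − 1 = 0.5), so 60 multiplications leave the columns equal to s to machine precision.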
Normal matrix N.
If N N^T = N^T N, then N has orthonormal (complex) eigenvectors.
Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^{-1}. Preserves lengths and angles: ‖Qx‖ = ‖x‖ and (Qx)^T(Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
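A rotation matrix is the standard example; this sketch (helper names mine) checks that lengths and inner products survive:

```python
from math import cos, sin, pi, hypot, isclose

def rotation(theta):
    """2x2 rotation: orthonormal columns, so Q^T = Q^{-1}."""
    return [[cos(theta), -sin(theta)],
            [sin(theta),  cos(theta)]]

def apply(Q, x):
    return [Q[0][0] * x[0] + Q[0][1] * x[1],
            Q[1][0] * x[0] + Q[1][1] * x[1]]

Q = rotation(pi / 3)
x = [3.0, 4.0]
y = [1.0, 0.0]
Qx, Qy = apply(Q, x), apply(Q, y)
# ||Qx|| = ||x|| = 5 and (Qx).(Qy) = x.y = 3, up to rounding.
```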
Outer product uv^T.
Column times row = rank-one matrix.
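In code the rank-one structure is visible immediately: every row of uv^T is a multiple of v. The function name `outer` is mine:

```python
def outer(u, v):
    """Column u times row v^T: entry (i, j) is u[i] * v[j]."""
    return [[ui * vj for vj in v] for ui in u]

A = outer([1, 2, 3], [4, 5])
# Rank one: row i of A is u[i] times the single row v = [4, 5].
```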
Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.
Right inverse A^+.
If A has full row rank m, then A^+ = A^T(AA^T)^{-1} has AA^+ = I_m.
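A small check of the formula A^+ = A^T(AA^T)^{-1} on a 2×3 matrix of full row rank; the matrix A and all helper names are my own:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(M):
    """Inverse of a 2x2 matrix by the cofactor formula."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 0, 1],
     [0, 1, 1]]                      # full row rank m = 2
Aplus = matmul(transpose(A), inv2(matmul(A, transpose(A))))
I2 = matmul(A, Aplus)                # should be the 2x2 identity
```

Note AA^+ = I_2 holds, but A^+A is only a 3×3 projection, not the identity: a right inverse, not a two-sided one.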
Saddle point of f(x_1, …, x_n).
A point where the first derivatives of f are zero and the second-derivative matrix (∂²f/∂x_i∂x_j = Hessian matrix) is indefinite.
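A concrete case makes "indefinite" visible: for f(x, y) = x² − y² (my example), the origin has zero first derivatives, and x^T H x takes both signs. The helper name `quad_form` is mine:

```python
def quad_form(H, x):
    """x^T H x for a 2x2 matrix H."""
    return sum(x[i] * H[i][j] * x[j] for i in range(2) for j in range(2))

# Hessian of f(x, y) = x^2 - y^2 at the origin: diag(2, -2), indefinite.
H = [[2, 0],
     [0, -2]]
up   = quad_form(H, [1, 0])   # positive along the x-axis: f curves up
down = quad_form(H, [0, 1])   # negative along the y-axis: f curves down
```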
Semidefinite matrix A.
(Positive) semidefinite: all x^T Ax ≥ 0, all λ ≥ 0; A = any R^T R.
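The R^T R characterization can be tested directly: x^T(R^T R)x = ‖Rx‖² is never negative, and it is zero exactly when Rx = 0. The matrix R and helper names below are my own:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def quad_form(A, x):
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

R = [[1, 2],
     [0, 0]]                 # rank 1, so R^T R is semidefinite, not definite
A = matmul(transpose(R), R)  # A = [[1, 2], [2, 4]]
# quad_form(A, x) = ||Rx||^2 >= 0 for every x; it equals 0 when Rx = 0,
# e.g. x = (2, -1) in the nullspace of R.
```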
Singular matrix A.
A square matrix that has no inverse: det(A) = 0.
Stiffness matrix K.
If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A, where C has spring constants from Hooke's Law and Ax gives the stretching.
Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.