 Chapter 1.1: Some Basic Mathematical Models; Direction Fields
 Chapter 1.2: Solutions of Some Differential Equations
 Chapter 1.3: Classification of Differential Equations
 Chapter 2: First Order Differential Equations
 Chapter 2.1: Linear Equations; Method of Integrating Factors
 Chapter 2.2: Separable Equations
 Chapter 2.3: Modeling with First Order Equations
 Chapter 2.4: Differences Between Linear and Nonlinear Equations
 Chapter 2.5: Autonomous Equations and Population Dynamics
 Chapter 2.6: Exact Equations and Integrating Factors
Chapter 2.7: Numerical Approximations: Euler's Method
 Chapter 2.8: The Existence and Uniqueness Theorem
 Chapter 2.9: First Order Difference Equations
 Chapter 3.1: Homogeneous Equations with Constant Coefficients
 Chapter 3.2: Solutions of Linear Homogeneous Equations; the Wronskian
 Chapter 3.3: Complex Roots of the Characteristic Equation
 Chapter 3.4: Repeated Roots; Reduction of Order
 Chapter 3.5: Nonhomogeneous Equations; Method of Undetermined Coefficients
 Chapter 3.6: Variation of Parameters
 Chapter 3.7: Mechanical and Electrical Vibrations
 Chapter 3.8: Forced Vibrations
 Chapter 4.1: General Theory of nth Order Linear Equations
 Chapter 4.2: Homogeneous Equations with Constant Coefficients
 Chapter 4.3: The Method of Undetermined Coefficients
 Chapter 4.4: The Method of Variation of Parameters
Chapter 5.1: Review of Power Series
 Chapter 5.2: Series Solutions Near an Ordinary Point, Part I
 Chapter 5.3: Series Solutions Near an Ordinary Point, Part II
 Chapter 5.4: Euler Equations; Regular Singular Points
 Chapter 5.5: Series Solutions Near a Regular Singular Point, Part I
 Chapter 5.6: Series Solutions Near a Regular Singular Point, Part II
Chapter 5.7: Bessel's Equation
 Chapter 6.1: Definition of the Laplace Transform
 Chapter 6.2: Solution of Initial Value Problems
 Chapter 6.3: Step Functions
 Chapter 6.4: Differential Equations with Discontinuous Forcing Functions
 Chapter 6.5: Impulse Functions
 Chapter 6.6: The Convolution Integral
 Chapter 7.1: Introduction
 Chapter 7.2: Review of Matrices
 Chapter 7.3: Systems of Linear Algebraic Equations; Linear Independence, Eigenvalues, Eigenvectors
Chapter 7.4: Basic Theory of Systems of First Order Linear Equations
 Chapter 7.5: Homogeneous Linear Systems with Constant Coefficients
 Chapter 7.6: Complex Eigenvalues
 Chapter 7.7: Fundamental Matrices
 Chapter 7.8: Repeated Eigenvalues
 Chapter 7.9: Nonhomogeneous Linear Systems
 Chapter 8.1: The Euler or Tangent Line Method
 Chapter 8.2: Improvements on the Euler Method
Chapter 8.3: The Runge-Kutta Method
 Chapter 8.4: Multistep Methods
 Chapter 8.5: Systems of First Order Equations
 Chapter 8.6: More on Errors; Stability
 Chapter 9.1: The Phase Plane: Linear Systems
 Chapter 9.2: Autonomous Systems and Stability
 Chapter 9.3: Locally Linear Systems
 Chapter 9.4: Competing Species
Chapter 9.5: Predator-Prey Equations
Chapter 9.6: Liapunov's Second Method
 Chapter 9.7: Periodic Solutions and Limit Cycles
 Chapter 9.8: Chaos and Strange Attractors: The Lorenz Equations
Elementary Differential Equations, 10th Edition: Solutions by Chapter
Full solutions for Elementary Differential Equations, 10th Edition
ISBN: 9780470458327

Affine transformation
T(v) = Av + v0 = linear transformation plus shift.
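A quick numerical illustration of the shift term (a sketch in NumPy; the matrix A and shift v0 are made-up examples):

```python
import numpy as np

# Affine map T(v) = A v + v0: a linear transformation followed by a shift.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])   # example linear part (assumed for illustration)
v0 = np.array([1.0, -1.0])   # example shift vector

def T(v):
    return A @ v + v0

v = np.array([1.0, 1.0])
Tv = T(v)

# Because of the shift, T(0) = v0 != 0, so T itself is not linear.
T_zero = T(np.zeros(2))
```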

Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1v1 + ... + cdvd. V has many bases; each basis gives unique c's.

Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
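A sketch of blockwise multiplication for a 2 x 2 partition (random 4 x 4 matrices, assumed only for demonstration):

```python
import numpy as np

# Partition A and B into 2x2 blocks and multiply blockwise:
# [A11 A12][B11 B12]   [A11 B11 + A12 B21   A11 B12 + A12 B22]
# [A21 A22][B21 B22] = [A21 B11 + A22 B21   A21 B12 + A22 B22]
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

blockwise = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
# blockwise equals the ordinary product A @ B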

Change of basis matrix M.
The old basis vectors vj are combinations Σ mij wi of the new basis vectors. The coordinates of c1v1 + ... + cnvn = d1w1 + ... + dnwn are related by d = Mc. (For n = 2: v1 = m11w1 + m21w2, v2 = m12w1 + m22w2.)
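A numerical check of d = Mc (the bases W and matrix M here are made-up examples; the columns of V are built as vj = Σ mij wi):

```python
import numpy as np

# Columns of W are the new basis w1, w2 (assumed example basis).
W = np.array([[1.0, 1.0],
              [0.0, 2.0]])
# Change of basis matrix M; the old basis satisfies V = W @ M,
# i.e. v_j = m1j*w1 + m2j*w2.
M = np.array([[2.0, 1.0],
              [1.0, 1.0]])
V = W @ M                    # columns v1, v2 (old basis)

c = np.array([3.0, -1.0])    # coordinates in the old basis
x = V @ c                    # the actual vector
d = M @ c                    # coordinates in the new basis: d = M c
# Check: W @ d rebuilds the same vector x.
```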

Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).

Condition number
cond(A) = c(A) = ||A|| ||A^-1|| = σmax/σmin. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to changes in the input.
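A sketch of both claims: cond(A) as a singular-value ratio, and the sensitivity bound on a nearly singular example matrix (chosen here only for illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])        # nearly singular, so badly conditioned

# cond(A) = sigma_max / sigma_min (2-norm condition number).
sigma = np.linalg.svd(A, compute_uv=False)
cond_from_svd = sigma[0] / sigma[-1]
cond_builtin = np.linalg.cond(A, 2)  # same number via NumPy

# Sensitivity: relative change in x is at most cond(A) times
# the relative change in b.
b  = np.array([2.0, 2.0])
db = np.array([0.0, 1e-4])           # tiny perturbation of b
x  = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x
rel_out = np.linalg.norm(dx) / np.linalg.norm(x)
rel_in  = np.linalg.norm(db) / np.linalg.norm(b)
# rel_out is large here, but never exceeds cond(A) * rel_in.
```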

Cross product u × v in R3.
Vector perpendicular to u and v, with length ||u|| ||v|| |sin θ| = area of the parallelogram; u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
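A sketch verifying both properties on arbitrary example vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])

# Formal determinant of [i j k; u1 u2 u3; v1 v2 v3].
w = np.cross(u, v)

# w is perpendicular to both u and v ...
perp_u, perp_v = w @ u, w @ v

# ... and its length equals ||u|| ||v|| |sin(theta)|, the parallelogram area.
cos_t = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
area = np.linalg.norm(u) * np.linalg.norm(v) * np.sqrt(1 - cos_t**2)
```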

Distributive Law
A(B + C) = AB + AC. Add then multiply, or multiply then add.

Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicularity comes from Ax = 0), with dimensions r and n − r. Applied to A^T: the column space C(A) is the orthogonal complement of N(A^T) in R^m.
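A numerical sketch on a rank-1 example matrix, using the SVD to get orthonormal bases for the row space and nullspace (the matrix is made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank 1, so r = 1 and n - r = 2

# Rows of Vt: the first r span the row space C(A^T),
# the remaining n - r span the nullspace N(A).
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
row_space  = Vt[:r].T                # basis of C(A^T), as columns
null_space = Vt[r:].T                # basis of N(A), as columns

# Orthogonal complements in R^n: every row-space vector is perpendicular
# to every nullspace vector, and the dimensions add to n.
gram = row_space.T @ null_space
```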

Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^-1].
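A minimal sketch of the [A I] → [I A^-1] elimination (with partial pivoting added for numerical stability; the example matrix is arbitrary):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row operations on [A I] until it becomes [I A^-1]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # augmented matrix [A I]
    for col in range(n):
        # Partial pivoting: swap the largest remaining pivot into place.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                    # scale pivot row to 1
        for row in range(n):                     # clear the rest of the column
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                              # right half is now A^-1

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])                       # det = 1
A_inv = gauss_jordan_inverse(A)
```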

Linear combination cv + dw or Σ cj vj.
Vector addition and scalar multiplication.

Linearly dependent v1, ..., vn.
A combination other than all ci = 0 gives Σ ci vi = 0.
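A sketch of a dependence check via matrix rank (the vectors are constructed so that v3 = v1 + 2 v2):

```python
import numpy as np

# v3 = v1 + 2*v2, so the three vectors are dependent: rank < count.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2

V = np.column_stack([v1, v2, v3])
rank = np.linalg.matrix_rank(V)      # 2 < 3  ->  linearly dependent

# A nonzero combination giving zero: 1*v1 + 2*v2 - 1*v3 = 0.
combo = 1 * v1 + 2 * v2 - 1 * v3
```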

Multiplication Ax
= x1 (column 1) + ... + xn (column n) = combination of columns.
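The column picture of Ax, checked numerically on an arbitrary example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([10.0, -1.0])

# Ax as a combination of the columns of A: x1*(column 1) + x2*(column 2).
by_columns = x[0] * A[:, 0] + x[1] * A[:, 1]
# This equals the usual matrix-vector product A @ x.
```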

Pascal matrix
PS = pascal(n) = the symmetric matrix with binomial entries C(i+j−2, i−1). PS = PL PU; all contain Pascal's triangle, with det = 1 (see Pascal in the index).
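A sketch building the symmetric Pascal matrix and its triangular factors directly from binomial coefficients, and checking PS = PL PU and det = 1 (pascal(n) is MATLAB notation; here the matrices are built by hand):

```python
import numpy as np
from math import comb

n = 5

# Symmetric Pascal matrix: entry (i, j) is C(i+j-2, i-1), 1-based indices.
PS = np.array([[comb(i + j - 2, i - 1) for j in range(1, n + 1)]
               for i in range(1, n + 1)], dtype=float)

# Lower triangular Pascal factor PL has entries C(i-1, j-1); PU = PL^T.
PL = np.array([[comb(i - 1, j - 1) for j in range(1, n + 1)]
               for i in range(1, n + 1)], dtype=float)
PU = PL.T

det_PS = np.linalg.det(PS)           # exactly 1 (PL, PU have unit diagonals)
```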

Plane (or hyperplane) in R^n.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.

Projection p = a(a^T b / a^T a) onto the line through a.
P = a a^T / a^T a has rank 1.
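A sketch of both formulas on example vectors, checking that P has rank 1, that P b = p, and that the error b − p is perpendicular to a:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 3.0])

# Projection of b onto the line through a: p = a * (a.b / a.a).
p = a * (a @ b) / (a @ a)

# Projection matrix P = a a^T / a^T a: rank 1, idempotent, P b = p.
P = np.outer(a, a) / (a @ a)
rank_P = np.linalg.matrix_rank(P)
```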

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Spanning set.
Combinations of v1, ..., vm fill the space. The columns of A span C(A)!

Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.
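A sketch of the factorization on a small symmetric example, using NumPy's symmetric eigensolver:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # real symmetric example

# eigh returns real eigenvalues (ascending) and orthonormal
# eigenvectors as the columns of Q.
lam, Q = np.linalg.eigh(A)
Lambda = np.diag(lam)

# Spectral theorem: A = Q Lambda Q^T with Q orthogonal.
reconstructed = Q @ Lambda @ Q.T
```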

Transpose matrix A^T.
Entries (A^T)ij = Aji. A^T is n by m; A^T A is square, symmetric, and positive semidefinite. The transposes of AB and A^-1 are B^T A^T and (A^T)^-1.
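A sketch checking these properties on random example matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))      # A is m by n, so A^T is n by m
B = rng.standard_normal((2, 4))

# A^T A is square, symmetric, and positive semidefinite.
G = A.T @ A
eigs = np.linalg.eigvalsh(G)

# Reverse-order rules: (AB)^T = B^T A^T ...
lhs = (A @ B).T
rhs = B.T @ A.T

# ... and (A^-1)^T = (A^T)^-1 (on an invertible example S).
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])
inv_then_T = np.linalg.inv(S).T
T_then_inv = np.linalg.inv(S.T)
```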