 2.4.1: In each of 1 through 6, determine (without solving the problem) an ...
 2.4.2: In each of 1 through 6, determine (without solving the problem) an ...
 2.4.3: In each of 1 through 6, determine (without solving the problem) an ...
 2.4.4: In each of 1 through 6, determine (without solving the problem) an ...
 2.4.5: In each of 1 through 6, determine (without solving the problem) an ...
 2.4.6: In each of 1 through 6, determine (without solving the problem) an ...
 2.4.7: In each of 7 through 12, state where in the ty-plane the hypotheses...
 2.4.8: In each of 7 through 12, state where in the ty-plane the hypotheses...
 2.4.9: In each of 7 through 12, state where in the ty-plane the hypotheses...
 2.4.10: In each of 7 through 12, state where in the ty-plane the hypotheses...
 2.4.11: In each of 7 through 12, state where in the ty-plane the hypotheses...
 2.4.12: In each of 7 through 12, state where in the ty-plane the hypotheses...
 2.4.13: In each of 13 through 16, solve the given initial value problem and...
 2.4.14: In each of 13 through 16, solve the given initial value problem and...
 2.4.15: In each of 13 through 16, solve the given initial value problem and...
 2.4.16: In each of 13 through 16, solve the given initial value problem and...
 2.4.17: In each of 17 through 20, draw a direction field and plot (or sketc...
 2.4.18: In each of 17 through 20, draw a direction field and plot (or sketc...
 2.4.19: In each of 17 through 20, draw a direction field and plot (or sketc...
 2.4.20: In each of 17 through 20, draw a direction field and plot (or sketc...
 2.4.21: Consider the initial value problem y' = y^(1/3), y(0) = 0 from Example 3...
 2.4.22: (a) Verify that both y1(t) = 1 − t and y2(t) = −t^2/4 are solutions of ...
 2.4.23: (a) Show that φ(t) = e^(2t) is a solution of y' − 2y = 0 and that y = cφ(t)...
 2.4.24: Show that if y = φ(t) is a solution of y' + p(t)y = 0, then y = cφ(t) i...
 2.4.25: Let y = y1(t) be a solution of y' + p(t)y = 0, (i) and let y = y2(t) be...
 2.4.26: (a) Show that the solution (7) of the general linear equation (1) c...
 2.4.27: (a) Show that the solution (7) of the general linear equation (1) c...
 2.4.28: In each of 28 through 31, the given equation is a Bernoulli equatio...
 2.4.29: In each of 28 through 31, the given equation is a Bernoulli equatio...
 2.4.30: In each of 28 through 31, the given equation is a Bernoulli equatio...
 2.4.31: In each of 28 through 31, the given equation is a Bernoulli equatio...
 2.4.32: Discontinuous Coefficients. Linear differential equations sometimes...
 2.4.33: Discontinuous Coefficients. Linear differential equations sometimes...
Solutions for Chapter 2.4: Differences Between Linear and Nonlinear Equations
Full solutions for Elementary Differential Equations and Boundary Value Problems, 10th Edition
ISBN: 9780470458310
Since 33 problems in Chapter 2.4: Differences Between Linear and Nonlinear Equations have been answered, more than 18,064 students have viewed full step-by-step solutions from this chapter. This textbook survival guide was created for Elementary Differential Equations and Boundary Value Problems, 10th edition (ISBN 9780470458310), and covers its chapters and their solutions. Chapter 2.4 includes 33 full step-by-step solutions.

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.

Back substitution.
Upper triangular systems are solved in reverse order, x_n to x_1.
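As an illustration, a minimal back-substitution routine in Python (the function name `back_substitute` is my own, not from the text):

```python
def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, working from x_n down to x_1."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# 2x + 3y = 8, 4y = 4  ->  y = 1, x = 2.5
print(back_substitute([[2, 3], [0, 4]], [8, 4]))  # [2.5, 1.0]
```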

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.

Factorization
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.
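A sketch of how the elimination multipliers become the entries of L (Doolittle-style elimination without row exchanges; the helper name `lu` is illustrative):

```python
def lu(A):
    """Factor A = LU: elimination multipliers go into L, which has 1s on its diagonal."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]      # multiplier that eliminates U[i][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

A = [[2.0, 1.0], [6.0, 8.0]]
L, U = lu(A)
# Multiplying L times U reproduces A
prod = [[sum(L[i][k] * U[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)  # [[2.0, 1.0], [6.0, 8.0]]
```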

Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^(-1)].
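The same idea as a short Python sketch (no pivoting, so it assumes nonzero pivots appear on the diagonal; `invert` is a name of my choosing):

```python
def invert(A):
    """Row-reduce the augmented matrix [A | I] until it reads [I | A^-1]."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = M[k][k]
        M[k] = [v / p for v in M[k]]          # scale pivot row to make pivot 1
        for i in range(n):
            if i != k:
                f = M[i][k]
                M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    return [row[n:] for row in M]             # right half is the inverse

print(invert([[4.0, 7.0], [2.0, 6.0]]))  # ≈ [[0.6, -0.7], [-0.2, 0.4]]
```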

Hypercube matrix P_L.
Row n + 1 counts corners, edges, faces, ... of a cube in Rn.

Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries −1 and +1 in columns i and j.
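A small Python sketch of building such a matrix for a directed graph (the helper name `incidence` and the triangle example are my own):

```python
def incidence(n_nodes, edges):
    """Edge-node incidence matrix: one row per edge, -1 at the start node, +1 at the end node."""
    A = [[0] * n_nodes for _ in edges]
    for r, (i, j) in enumerate(edges):
        A[r][i] = -1   # edge leaves node i
        A[r][j] = 1    # edge enters node j
    return A

# Triangle graph with directed edges 0->1, 1->2, 0->2
print(incidence(3, [(0, 1), (1, 2), (0, 2)]))
# [[-1, 1, 0], [0, -1, 1], [-1, 0, 1]]
```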

Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and −).

Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady-state eigenvector s with Ms = s > 0.
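Repeated multiplication makes the convergence visible; a minimal power-iteration sketch in Python (the 2-by-2 example matrix is my own, chosen so the steady state works out to (0.6, 0.4)):

```python
def steady_state(M, steps=200):
    """Apply the column-stochastic matrix M repeatedly; the result
    approaches the eigenvector s with Ms = s (eigenvalue 1)."""
    n = len(M)
    s = [1.0 / n] * n
    for _ in range(steps):
        s = [sum(M[i][j] * s[j] for j in range(n)) for i in range(n)]
    return s

M = [[0.8, 0.3],
     [0.2, 0.7]]          # all entries positive, columns sum to 1
s = steady_state(M)
print(s)                   # ≈ [0.6, 0.4], and M s = s
```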

Multiplication Ax
= x_1(column 1) + ... + x_n(column n) = combination of columns.
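The column-at-a-time view translates directly into code (a sketch; `matvec` is an illustrative name):

```python
def matvec(A, x):
    """Ax built as x_1*(column 1) + ... + x_n*(column n)."""
    m, n = len(A), len(A[0])
    y = [0.0] * m
    for j in range(n):             # walk the columns, not the rows
        for i in range(m):
            y[i] += x[j] * A[i][j]
    return y

print(matvec([[1, 2], [3, 4]], [5, 6]))  # [17.0, 39.0]
```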

Normal equation A^T A x = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b − Ax) = 0.
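A worked sketch: fitting a line y = c0 + c1*t through three points by forming and solving A^T A x = A^T b (the data points are invented for illustration):

```python
# Columns of A: all ones (intercept) and the t-values
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 4.0]

# Form the normal equations A^T A x = A^T b
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 system by Cramer's rule
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
c0 = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det
c1 = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det
print(c0, c1)  # ≈ 0.833 1.5 — best-fit intercept and slope
```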

Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.

Rank r(A)
= number of pivots = dimension of column space = dimension of row space.
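Counting pivots during elimination gives the rank directly; a sketch with partial pivoting (the function name `rank` and tolerance are my own choices):

```python
def rank(A, tol=1e-12):
    """Rank = number of pivots found by Gaussian elimination."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    r, col = 0, 0
    while r < m and col < n:
        p = max(range(r, m), key=lambda i: abs(A[i][col]))  # partial pivoting
        if abs(A[p][col]) < tol:        # no pivot in this column
            col += 1
            continue
        A[r], A[p] = A[p], A[r]
        for i in range(r + 1, m):
            f = A[i][col] / A[r][col]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
        col += 1
    return r

print(rank([[1, 2], [2, 4]]))  # 1: the second row is a multiple of the first
```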

Rayleigh quotient q(x) = x^T Ax / x^T x for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).

Rotation matrix
R = [c −s; s c] rotates the plane by θ, and R^(−1) = R^T rotates back by −θ. Eigenvalues are e^(iθ) and e^(−iθ); eigenvectors are (1, ∓i). Here c, s = cos θ, sin θ.
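A quick check in Python that the matrix does what the entry says (the quarter-turn example is my own):

```python
import math

def rotation(theta):
    """2x2 rotation matrix [c -s; s c] for angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

R = rotation(math.pi / 2)        # quarter turn counterclockwise
x, y = 1.0, 0.0                  # start at the point (1, 0)
rx = R[0][0] * x + R[0][1] * y
ry = R[1][0] * x + R[1][1] * y
print(round(rx, 10), round(ry, 10))  # (1, 0) rotates to (0, 1)
```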

Semidefinite matrix A.
(Positive) semidefinite: all x^T Ax ≥ 0, all λ ≥ 0; A = any R^T R.

Similar matrices A and B.
Every B = M^(−1)AM has the same eigenvalues as A.

Spanning set.
Combinations of v_1, ..., v_m fill the space. The columns of A span C(A).

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.

Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms, ||A + B|| ≤ ||A|| + ||B||.
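A numerical check of the vector case, using the classic 3-4-5 right triangle (example vectors are my own):

```python
import math

def norm(v):
    """Euclidean length ||v||."""
    return math.sqrt(sum(x * x for x in v))

u, v = [3.0, 0.0], [0.0, 4.0]
lhs = norm([a + b for a, b in zip(u, v)])   # ||u + v|| = 5
rhs = norm(u) + norm(v)                     # ||u|| + ||v|| = 3 + 4 = 7
print(lhs <= rhs)  # True
```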