1.3.1: dy/dt = t^2 + t
1.3.2: dy/dt = t^2 + 1/3
1.3.3: dy/dt = 1/(2y)
1.3.4: dy/dt = 4y^2
1.3.5: dy/dt = 2y(1 - y)
1.3.6: dy/dt = y + t + 1
1.3.7: dy/dt = 3y(1 - y)
1.3.8: dy/dt = 2y - t
1.3.9: dy/dt = y + (1/2)(y + t)
1.3.10: dy/dt = (t + 1)y
 1.3.11: Suppose we know that the function f (t, y) is continuous and that f...
 1.3.12: Suppose the constant function y(t) = 2 for all t is a solution of t...
 1.3.13: Suppose we know that the graph to the right is the graph of the rig...
 1.3.14: Suppose we know that the graph to the right is the graph of the rig...
1.3.15: Consider the autonomous differential equation dS/dt = S^3 - 2S^2 + S. ...
 1.3.16: Eight differential equations and four slope fields are given below....
 1.3.17: Suppose we know that the graph below is the graph of a solution to ...
 1.3.18: Suppose we know that the graph below is the graph of a solution to ...
 1.3.19: The spiking of a neuron can be modeled by the differential equation...
 1.3.20: By separating variables, find the general solution of the different...
 1.3.21: By separating variables, find the general solution of the different...
1.3.22: By separating variables, find the solution of the initial-value pro...
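The slope-field idea behind these exercises can be sketched numerically: sample the right-hand side f(t, y) on a grid and turn each slope into a short unit direction vector. A minimal sketch in Python with NumPy, using the right-hand side of exercise 1.3.1 (the grid range and density are arbitrary choices):

```python
import numpy as np

# Slope field for dy/dt = f(t, y): at each grid point, the solution's
# tangent has slope f(t, y).  Here f(t, y) = t**2 + t (exercise 1.3.1).
def f(t, y):
    return t**2 + t

t = np.linspace(-2, 2, 9)
y = np.linspace(-2, 2, 9)
T, Y = np.meshgrid(t, y)
S = f(T, Y)                      # slope at each grid point

# Unit direction vectors (dt, dy) proportional to (1, slope), normalized
# so every segment drawn in the field has the same length.
norm = np.sqrt(1 + S**2)
dT, dY = 1 / norm, S / norm
# These arrays can be drawn with matplotlib's quiver(T, Y, dT, dY).
```

The same grid works for any of the equations above; only `f` changes.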
Solutions for Chapter 1.3: QUALITATIVE TECHNIQUE: SLOPE FIELDS
Full solutions for Differential Equations, 4th Edition
ISBN: 9780495561989
Chapter 1.3: QUALITATIVE TECHNIQUE: SLOPE FIELDS includes 22 full step-by-step solutions.

Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
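A small NumPy sketch of this definition, with an illustrative four-node directed graph (the edge list is made up for the example):

```python
import numpy as np

# Adjacency matrix of a directed graph on nodes 0..3:
# A[i, j] = 1 when there is an edge from node i to node j, else 0.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1

# For an undirected graph we would also set A[j, i] = 1, giving A == A.T.
# Powers of A count walks: (A @ A)[i, j] = number of 2-step walks i -> j.
two_step = A @ A
```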

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.

Back substitution.
Upper triangular systems are solved in reverse order x_n to x_1.
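The reverse-order solve can be written in a few lines of Python with NumPy (the 3-by-3 system here is an arbitrary example):

```python
import numpy as np

# Back substitution: solve Ux = b for upper triangular U,
# filling in x in reverse order x_n, ..., x_1.
def back_substitute(U, b):
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-known later terms, then divide by the pivot
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
b = np.array([9.0, 8.0, 8.0])
x = back_substitute(U, b)        # satisfies U @ x == b
```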

Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks on the diagonal.

Diagonalization
Λ = S^-1 A S. Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^-1.
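A quick NumPy check of this factorization on an illustrative 2-by-2 matrix (chosen arbitrarily so its eigenvalues are real):

```python
import numpy as np

# Diagonalization A = S Lam S^-1: S holds the eigenvectors in its
# columns, Lam the eigenvalues on its diagonal.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, S = np.linalg.eig(A)
Lam = np.diag(eigvals)

# Reconstruct A, and compute a power via A^k = S Lam^k S^-1.
A_rebuilt = S @ Lam @ np.linalg.inv(S)
A_cubed = S @ np.diag(eigvals**3) @ np.linalg.inv(S)
```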

Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A - λI) = 0.

Fourier matrix F.
Entries F_jk = e^(2πijk/n) give orthogonal columns: F^H F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform: y_j = Σ c_k e^(2πijk/n).
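Both properties can be verified directly in NumPy; the size n = 4 and the coefficient vector c below are arbitrary choices for the check:

```python
import numpy as np

# Fourier matrix F with entries F_jk = exp(2*pi*1j*j*k/n).
# Its columns are orthogonal: conj(F).T @ F = n * I.
n = 4
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(2j * np.pi * j * k / n)

gram = F.conj().T @ F            # should equal n * I
c = np.array([1.0, 2.0, 0.0, -1.0])
y = F @ c                        # the (inverse) DFT of c, up to the 1/n factor
```

With NumPy's convention, `F @ c` equals `n * np.fft.ifft(c)`, since `ifft` carries the 1/n factor.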

Free columns of A.
Columns without pivots; these are combinations of earlier columns.

Identity matrix I (or In).
Diagonal entries = 1, offdiagonal entries = 0.

Jordan form J = M^-1 A M.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k where N_k has 1's on the superdiagonal. Each block has one eigenvalue λ_k and one eigenvector.
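SymPy computes this factorization exactly; a minimal sketch on an illustrative matrix with a repeated eigenvalue and only one independent eigenvector:

```python
from sympy import Matrix

# Jordan form J = M^-1 A M.  This A has eigenvalue 5 twice but only one
# independent eigenvector, so J is a single 2x2 Jordan block.
A = Matrix([[5, 1],
            [0, 5]])

# sympy returns (M, J) with A = M * J * M^-1
M, J = A.jordan_form()
```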

Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.

Matrix multiplication AB.
The i, j entry of AB is (row i of A) · (column j of B) = Σ a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
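The equivalence of these views is easy to confirm in NumPy on a small example (the 2-by-2 matrices are arbitrary):

```python
import numpy as np

# Four equivalent ways to form AB.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

AB = A @ B                                   # built-in product

# Entry view: (AB)[i, j] = row i of A dot column j of B.
entry = np.array([[A[i] @ B[:, j] for j in range(2)] for i in range(2)])

# Column view: column j of AB = A times column j of B.
cols = np.column_stack([A @ B[:, j] for j in range(2)])

# Columns-times-rows view: AB = sum of (column k of A)(row k of B).
outer_sum = sum(np.outer(A[:, k], B[k]) for k in range(2))
```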

Pascal matrix
P_S = pascal(n) = the symmetric matrix with binomial entries (i+j-2 choose i-1). P_S = P_L P_U; all contain Pascal's triangle with det = 1 (see Pascal in the index).

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
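SymPy's exact row reduction reports the pivot columns directly; a sketch on an illustrative 3-by-3 matrix whose middle column is free (twice the first column):

```python
from sympy import Matrix

# Row reduction exposes pivot columns; the remaining (free) columns are
# combinations of earlier pivot columns.
A = Matrix([[1, 2, 3],
            [2, 4, 7],
            [1, 2, 4]])

# rref() returns the reduced row echelon form and the pivot column indices.
rref_form, pivot_cols = A.rref()
```

Here column 1 is free: it equals 2 times column 0, so the pivots land in columns 0 and 2.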

Row picture of Ax = b.
Each equation gives a plane in R^n; the planes intersect at x.

Schur complement S = D - C A^-1 B.
Appears in block elimination on [[A, B], [C, D]].
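A short NumPy sketch of the formula, with illustrative blocks (the matrices are arbitrary, with A invertible):

```python
import numpy as np

# Block elimination on [[A, B], [C, D]] with invertible A leaves the
# Schur complement S = D - C @ inv(A) @ B in the lower-right block.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[3.0, 2.0]])
D = np.array([[5.0]])

S = D - C @ np.linalg.inv(A) @ B
```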

Similar matrices A and B.
Every B = M^-1 A M has the same eigenvalues as A.

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
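SciPy's `linprog` solves problems in exactly this form (its "highs" backend includes a simplex implementation); a tiny illustrative program whose feasible set is the segment from (1, 0) to (0, 1):

```python
from scipy.optimize import linprog

# Minimize c @ x subject to A_eq @ x = b_eq and x >= 0.
# Feasible set: the segment x1 + x2 = 1, x >= 0, with corners
# (1, 0) at cost 1 and (0, 1) at cost 2 -- the minimum is at a corner.
res = linprog(c=[1.0, 2.0],
              A_eq=[[1.0, 1.0]], b_eq=[1.0],
              bounds=[(0, None), (0, None)],
              method="highs")
# res.x is the minimizing corner, res.fun the minimum cost
```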

Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
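Both identities are one-liners to check in NumPy (the 2-by-2 matrices are arbitrary examples):

```python
import numpy as np

# Trace = sum of diagonal entries = sum of eigenvalues; Tr(AB) = Tr(BA).
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])

tr = np.trace(A)                              # 1 + 4 = 5
eig_sum = np.sum(np.linalg.eigvals(A)).real   # same value, up to rounding
tr_AB = np.trace(A @ B)
tr_BA = np.trace(B @ A)
```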

Vector addition.
v + w = (v_1 + w_1, ..., v_n + w_n) = diagonal of parallelogram.