
# Solutions for Chapter 2.1: Linear Equations; Method of Integrating Factors

## Full solutions for Elementary Differential Equations and Boundary Value Problems | 10th Edition

ISBN: 9780470458310


This textbook survival guide was created for the textbook Elementary Differential Equations and Boundary Value Problems, 10th edition (ISBN: 9780470458310). Chapter 2.1: Linear Equations; Method of Integrating Factors includes 42 full step-by-step solutions, and more than 16437 students have viewed solutions from this chapter. This expansive textbook survival guide covers the following chapters and their solutions.

## Key math terms and definitions covered in this textbook
• Companion matrix.

Put $c_1, \ldots, c_n$ in row $n$ and put $n-1$ ones just above the main diagonal. Then $\det(A - \lambda I) = \pm(c_1 + c_2\lambda + c_3\lambda^2 + \cdots + c_n\lambda^{n-1} - \lambda^n)$.
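As a quick numerical illustration (assuming NumPy; the polynomial $(c_1, c_2, c_3) = (6, -11, 6)$ is my own example, not from the text), the eigenvalues of the companion matrix are exactly the roots of $\lambda^3 = c_1 + c_2\lambda + c_3\lambda^2$:

```python
import numpy as np

# Example coefficients: lam^3 = 6 - 11*lam + 6*lam^2 has roots 1, 2, 3
c = np.array([6.0, -11.0, 6.0])
n = len(c)
A = np.zeros((n, n))
A[np.arange(n - 1), np.arange(1, n)] = 1.0   # n-1 ones just above the diagonal
A[-1, :] = c                                 # c1, ..., cn in row n
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)                                  # approximately [1. 2. 3.]
```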

• Eigenvalue $\lambda$ and eigenvector $x$.

$Ax = \lambda x$ with $x \neq 0$, so $\det(A - \lambda I) = 0$.
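A small check of both facts with NumPy (the matrix is my own example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, X = np.linalg.eig(A)        # columns of X are eigenvectors
for i in range(2):
    # A x = lambda x for each eigenpair, and det(A - lambda*I) = 0
    assert np.allclose(A @ X[:, i], lam[i] * X[:, i])
    assert abs(np.linalg.det(A - lam[i] * np.eye(2))) < 1e-9
print(np.sort(lam.real))         # eigenvalues 1 and 3
```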

• Exponential $e^{At} = I + At + (At)^2/2! + \cdots$

has derivative $Ae^{At}$; $e^{At}u(0)$ solves $u' = Au$.
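A sketch of $u(t) = e^{At}u(0)$ solving $u' = Au$, assuming SciPy's `expm` is available (the rotation generator $A$ and initial condition are my own example):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])      # u1' = u2, u2' = -u1
u0 = np.array([1.0, 0.0])
t = 0.5
u = expm(A * t) @ u0             # e^{At} u(0) solves u' = Au
# for this A the closed-form solution is (cos t, -sin t)
assert np.allclose(u, [np.cos(t), -np.sin(t)])
```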

• Hessenberg matrix H.

Triangular matrix with one extra nonzero adjacent diagonal.

• Inverse matrix $A^{-1}$.

Square matrix with $A^{-1}A = I$ and $AA^{-1} = I$. No inverse if $\det A = 0$ (equivalently $\operatorname{rank}(A) < n$, equivalently $Ax = 0$ for a nonzero vector $x$). The inverses of $AB$ and $A^T$ are $B^{-1}A^{-1}$ and $(A^{-1})^T$. Cofactor formula: $(A^{-1})_{ij} = C_{ji}/\det A$.
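The inverse identities can be spot-checked numerically (random matrices here are my own illustration; they are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
Ainv = np.linalg.inv(A)
I = np.eye(3)
assert np.allclose(Ainv @ A, I) and np.allclose(A @ Ainv, I)
# (AB)^{-1} = B^{-1} A^{-1}  and  (A^T)^{-1} = (A^{-1})^T
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv)
assert np.allclose(np.linalg.inv(A.T), Ainv.T)
```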

• Determinant $|A| = \det A$.

$|A^{-1}| = 1/|A|$ and $|A^T| = |A|$. The big formula for $\det(A)$ has a sum of $n!$ terms, the cofactor formula uses determinants of size $n-1$, and volume of box $= |\det(A)|$.
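A check of the identities and the $n!$-term big formula with NumPy (the matrix and the helper `sign` are my own illustration):

```python
import numpy as np
from itertools import permutations

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
d = np.linalg.det(A)
# |A^{-1}| = 1/|A|  and  |A^T| = |A|
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / d)
assert np.isclose(np.linalg.det(A.T), d)

def sign(p):
    # parity of a permutation via its inversion count
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

# big formula: sum over all n! permutations of signed products
big = sum(sign(p) * np.prod([A[i, p[i]] for i in range(3)])
          for p in permutations(range(3)))
assert np.isclose(big, d)
```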

• Left nullspace $N(A^T)$.

Nullspace of $A^T$ = "left nullspace" of $A$ because $y^T A = 0^T$.
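A concrete left-nullspace vector (matrix and vector are my own example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
# y combines the rows of A to zero: y^T A = 0^T, i.e. A^T y = 0
y = np.array([1.0, 1.0, -1.0])
assert np.allclose(y @ A, 0.0)
assert np.allclose(A.T @ y, 0.0)
```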

• Linear combination $cv + dw$ or $\sum c_j v_j$.

• Linear transformation T.

Each vector $v$ in the input space transforms to $T(v)$ in the output space, and linearity requires $T(cv + dw) = cT(v) + dT(w)$. Examples: matrix multiplication $Av$, differentiation and integration in function space.

• Outer product $uv^T$.

Column times row = rank-one matrix.
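The rank-one property is easy to verify (vectors are my own example):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])
M = np.outer(u, v)                     # column u times row v^T
assert M.shape == (3, 2)
assert np.linalg.matrix_rank(M) == 1   # every nonzero outer product has rank one
```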

• Partial pivoting.

In each column, choose the largest available pivot to control roundoff; all multipliers have $|\ell_{ij}| \le 1$. See condition number.
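The bounded multipliers can be seen in an LU factorization, assuming SciPy's `lu` (which pivots by rows); the badly scaled matrix is my own example:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[1e-8, 1.0],
              [1.0,  1.0]])
P, L, U = lu(A)                  # A = P @ L @ U with row pivoting
assert np.allclose(P @ L @ U, A)
# partial pivoting keeps every multiplier |l_ij| <= 1
assert np.all(np.abs(np.tril(L, -1)) <= 1.0)
```

Without the row swap, the multiplier would have been $1/10^{-8} = 10^8$, amplifying roundoff.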

• Pivot columns of A.

Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

• Rotation matrix.

$R = \begin{bmatrix} c & -s \\ s & c \end{bmatrix}$ rotates the plane by $\theta$ and $R^{-1} = R^T$ rotates back by $-\theta$. Eigenvalues are $e^{i\theta}$ and $e^{-i\theta}$, eigenvectors are $(1, \pm i)$. $c, s = \cos\theta, \sin\theta$.
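Both claims check out numerically (the angle $\theta = 0.3$ is my own choice):

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])
assert np.allclose(R.T @ R, np.eye(2))     # R^{-1} = R^T
lam = np.linalg.eigvals(R)
# eigenvalues are e^{i*theta} and e^{-i*theta}
assert np.allclose(sorted(lam, key=lambda z: z.imag),
                   [np.exp(-1j * theta), np.exp(1j * theta)])
```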

• Saddle point of $f(x_1, \ldots, x_n)$.

A point where the first derivatives of $f$ are zero and the second-derivative matrix ($\partial^2 f/\partial x_i \partial x_j$ = Hessian matrix) is indefinite.
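The classic example is $f(x, y) = x^2 - y^2$ at the origin (my own illustration): the gradient $(2x, -2y)$ vanishes there and the Hessian is indefinite:

```python
import numpy as np

# Hessian of f(x, y) = x^2 - y^2 is constant: diag(2, -2)
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
eigs = np.linalg.eigvalsh(H)
# one negative and one positive eigenvalue: indefinite, so a saddle
assert eigs[0] < 0 < eigs[1]
```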

• Schur complement $S = D - CA^{-1}B$.

Appears in block elimination on $\begin{bmatrix} A & B \\ C & D \end{bmatrix}$.
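One consequence of block elimination is $\det M = \det A \cdot \det S$, which is easy to verify (the blocks below are my own example):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0], [2.0]])
C = B.T
D = np.array([[5.0]])
M = np.block([[A, B], [C, D]])
# eliminating the A-block leaves the Schur complement S = D - C A^{-1} B
S = D - C @ np.linalg.inv(A) @ B
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(S))
```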

• Simplex method for linear programming.

The minimum cost vector $x^*$ is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints $Ax = b$ and $x \ge 0$ are satisfied). Minimum cost at a corner!
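A tiny LP illustrating "minimum cost at a corner", assuming SciPy's `linprog` (which uses the HiGHS solver rather than a hand-rolled simplex tableau; the LP itself is my own example):

```python
from scipy.optimize import linprog

# minimize x1 + 2*x2  subject to  x1 + x2 = 1,  x >= 0
res = linprog(c=[1.0, 2.0], A_eq=[[1.0, 1.0]], b_eq=[1.0],
              bounds=[(0, None), (0, None)], method="highs")
assert res.success
# the feasible set is the segment between corners (1,0) and (0,1);
# the optimum sits at the corner x* = (1, 0) with cost 1
assert abs(res.fun - 1.0) < 1e-8
```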

• Singular matrix A.

A square matrix that has no inverse: $\det(A) = 0$.
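A matrix with a repeated row direction is singular, and NumPy refuses to invert it (the matrix is my own example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # second row is twice the first
assert np.isclose(np.linalg.det(A), 0.0)
try:
    np.linalg.inv(A)
    singular = False
except np.linalg.LinAlgError:         # inversion fails on a singular matrix
    singular = True
assert singular
```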

• Special solutions to $As = 0$.

One free variable is $s_i = 1$, other free variables $= 0$.
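A worked special solution (the matrix is my own example): here the pivot columns are 1 and 3, so column 2 carries the free variable.

```python
import numpy as np

# Row-reduced matrix: pivots in columns 1 and 3; column 2 is free
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
# set the free variable to 1 and solve for the pivot variables
s = np.array([-2.0, 1.0, 0.0])
assert np.allclose(A @ s, 0.0)       # s is a special solution to As = 0
```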

• Symmetric matrix A.

The transpose is $A^T = A$, and $a_{ij} = a_{ji}$. $A^{-1}$ is also symmetric.
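A quick check that the inverse of a symmetric matrix is symmetric (the matrix is my own example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
assert np.allclose(A, A.T)           # a_ij = a_ji
Ainv = np.linalg.inv(A)
assert np.allclose(Ainv, Ainv.T)     # the inverse is symmetric too
```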