# Solutions for Chapter 3.2: Solutions of Linear Homogeneous Equations; the Wronskian

## Full solutions for Elementary Differential Equations | 10th Edition

ISBN: 9780470458327


This textbook survival guide was created for the textbook Elementary Differential Equations, 10th edition (ISBN 9780470458327). Chapter 3.2, Solutions of Linear Homogeneous Equations; the Wronskian, includes 51 full step-by-step solutions, and more than 11660 students have viewed full solutions from this chapter.

## Key Math Terms and Definitions Covered in This Textbook
• Augmented matrix [A b].

Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
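This rank test is easy to check numerically. A minimal numpy sketch, with a 2×2 singular matrix and two right-hand sides chosen for illustration (not from the text):

```python
import numpy as np

# Ax = b is solvable exactly when rank([A b]) == rank(A),
# i.e. when b lies in the column space of A.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1: row 2 = 2 * row 1
b_good = np.array([[3.0], [6.0]])   # in the column space of A
b_bad = np.array([[3.0], [5.0]])    # not in the column space

def solvable(A, b):
    # Compare the rank of A with the rank of the augmented matrix [A b].
    return np.linalg.matrix_rank(np.hstack([A, b])) == np.linalg.matrix_rank(A)

print(solvable(A, b_good))  # True
print(solvable(A, b_bad))   # False
```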

• Characteristic equation det(A − λI) = 0.

The n roots are the eigenvalues of A.
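The equivalence between the roots of the characteristic polynomial and the eigenvalues can be verified directly; the 2×2 symmetric matrix here is assumed for illustration:

```python
import numpy as np

# For a 2x2 matrix, det(A - lambda*I) = lambda^2 - trace(A)*lambda + det(A).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = [1.0, -np.trace(A), np.linalg.det(A)]  # characteristic polynomial
roots = np.sort(np.roots(coeffs))               # its n = 2 roots
eigs = np.sort(np.linalg.eigvals(A))            # eigenvalues computed directly

assert np.allclose(roots, eigs)                 # both give 1 and 3
```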

• Diagonalizable matrix A.

Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S⁻¹AS = Λ = eigenvalue matrix.

• Diagonalization

Λ = S⁻¹AS. Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All Aᵏ = SΛᵏS⁻¹.
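Both identities can be sketched with numpy's eigendecomposition (the example matrix is an assumption):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, S = np.linalg.eig(A)     # eigenvalues lam, eigenvector columns in S
Lambda = np.diag(lam)

# S^{-1} A S recovers the eigenvalue matrix Lambda.
assert np.allclose(np.linalg.inv(S) @ A @ S, Lambda)

# Powers: A^3 = S Lambda^3 S^{-1}, since the inner S factors cancel.
A_cubed = S @ np.diag(lam**3) @ np.linalg.inv(S)
assert np.allclose(A_cubed, np.linalg.matrix_power(A, 3))
```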

• Dimension of vector space

dim(V) = number of vectors in any basis for V.

• Ellipse (or ellipsoid) xᵀAx = 1.

A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ‖x‖ = 1 the vectors y = Ax lie on the ellipse ‖A⁻¹y‖² = yᵀ(AAᵀ)⁻¹y = 1 displayed by eigshow; axis lengths σᵢ.)

• Full row rank r = m.

Independent rows; at least one solution to Ax = b; the column space is all of Rᵐ. Full rank means full column rank or full row rank.

• Gauss-Jordan method.

Invert A by row operations on [A I] to reach [I A⁻¹].
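A bare-bones sketch of the method, with no pivoting or singularity safeguards and an assumed test matrix:

```python
import numpy as np

def gauss_jordan_inverse(A):
    # Row-reduce the block matrix [A I]; when the left half reaches I,
    # the right half holds A^{-1}.
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        M[col] /= M[col, col]                   # scale the pivot row
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]  # clear above and below the pivot
    return M[:, n:]

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
assert np.allclose(gauss_jordan_inverse(A), np.linalg.inv(A))
```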

• Hankel matrix H.

Constant along each antidiagonal; hᵢⱼ depends on i + j.

• Left inverse A+.

If A has full column rank n, then A⁺ = (AᵀA)⁻¹Aᵀ has A⁺A = Iₙ.
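A short numpy check of this formula on an assumed 3×2 matrix with full column rank:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # 3x2 with full column rank n = 2

# A^+ = (A^T A)^{-1} A^T is a left inverse: A^+ A = I_n.
A_plus = np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(A_plus @ A, np.eye(2))
```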

• Linear transformation T.

Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av; differentiation and integration in function space.

• Multiplication Ax

= x₁(column 1) + ... + xₙ(column n) = combination of columns.

• Nullspace N (A)

= all solutions to Ax = 0. Dimension n − r = (# columns) − rank.
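The count dim N(A) = n − r can be confirmed numerically; the rows of Vᵀ from the SVD beyond the rank span the nullspace. The 2×3 rank-1 matrix below is assumed for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # n = 3 columns, rank r = 1
r = np.linalg.matrix_rank(A)

# Right-singular vectors with zero singular value span N(A).
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[np.sum(s > 1e-10):]   # rows of Vt past the rank

assert null_basis.shape[0] == A.shape[1] - r   # dimension n - r = 2
assert np.allclose(A @ null_basis.T, 0)        # each basis vector solves Ax = 0
```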

• Partial pivoting.

In each column, choose the largest available pivot to control roundoff; all multipliers have |ℓᵢⱼ| ≤ 1. See condition number.

• Projection p = a(aᵀb/aᵀa) onto the line through a.

P = aaᵀ/aᵀa has rank 1.
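Both formulas can be checked in a few lines, with vectors a and b chosen for illustration:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([1.0, 1.0, 1.0])

p = a * (a @ b) / (a @ a)        # projection p = a (a^T b / a^T a)
P = np.outer(a, a) / (a @ a)     # projection matrix P = a a^T / a^T a

assert np.allclose(P @ b, p)             # P applied to b gives p
assert np.linalg.matrix_rank(P) == 1     # P has rank 1
assert np.allclose(P @ P, P)             # projecting twice changes nothing
```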

• Reduced row echelon form R = rref(A).

Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.

• Singular Value Decomposition

(SVD) A = UΣVᵀ = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(Aᵀ), with Avᵢ = σᵢuᵢ and singular value σᵢ > 0. The last columns are orthonormal bases of the nullspaces.
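The factorization and the relation Avᵢ = σᵢuᵢ can be checked with numpy (example matrix assumed):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
U, s, Vt = np.linalg.svd(A)      # A = U @ diag(s) @ Vt

# The factors reassemble to A.
assert np.allclose(U @ np.diag(s) @ Vt, A)

# Each singular pair satisfies A v_i = sigma_i u_i
# (rows of Vt are the vectors v_i).
for i in range(len(s)):
    assert np.allclose(A @ Vt[i], s[i] * U[:, i])
```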

• Skew-symmetric matrix K.

The transpose is −K, since Kᵢⱼ = −Kⱼᵢ. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.

• Standard basis for Rn.

Columns of the n by n identity matrix (written i, j, k in R³).

• Toeplitz matrix.

Constant down each diagonal = time-invariant (shift-invariant) filter.
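A Toeplitz matrix is determined by its first column c and first row r; a small constructor sketch with assumed values (mirroring the convention of scipy.linalg.toeplitz):

```python
import numpy as np

def toeplitz(c, r):
    # Entry (i, j) depends only on i - j: below the main diagonal it comes
    # from the first column c, above it from the first row r.
    T = np.empty((len(c), len(r)))
    for i in range(len(c)):
        for j in range(len(r)):
            T[i, j] = c[i - j] if i >= j else r[j - i]
    return T

T = toeplitz([1, 2, 3], [1, 4, 5])
print(T)
# [[1. 4. 5.]
#  [2. 1. 4.]
#  [3. 2. 1.]]
```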
