Affine transformation.
Tv = Av + v0 = linear transformation plus shift.
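A minimal numpy sketch of Tv = Av + v0; the rotation matrix and shift below are illustrative values, not from the text:

```python
import numpy as np

# Affine transformation Tv = Av + v0: a linear part A, then a shift v0.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # rotate 90 degrees counterclockwise
v0 = np.array([3.0, 4.0])     # the shift

def T(v):
    """Affine map: apply the linear transformation A, then add v0."""
    return A @ v + v0

origin_image = T(np.array([0.0, 0.0]))   # the origin maps to v0, not to 0
```

Note that T is not linear unless v0 = 0, since T(0) = v0.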
Cramer's Rule for Ax = b.
Bj has b replacing column j of A; xj = det Bj / det A.
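A sketch of Cramer's Rule on an illustrative 2 by 2 system (elimination is far cheaper and more stable in practice; this only demonstrates the formula):

```python
import numpy as np

# Cramer's Rule: x_j = det(B_j) / det(A), where B_j is A with
# column j replaced by the right-hand side b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.empty(2)
for j in range(2):
    Bj = A.copy()
    Bj[:, j] = b                               # replace column j of A by b
    x[j] = np.linalg.det(Bj) / np.linalg.det(A)
```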
Diagonal matrix D.
dij = 0 if i ≠ j. Block-diagonal: zero outside square blocks Dii.
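Both shapes can be sketched directly in numpy (the blocks below are illustrative; `scipy.linalg.block_diag` would build the same matrix):

```python
import numpy as np

# Diagonal: d_ij = 0 for i != j.
D = np.diag([1.0, 2.0, 3.0])

# Block-diagonal: square blocks on the diagonal, zero outside them.
D1 = np.array([[1.0, 2.0],
               [3.0, 4.0]])           # first square block, 2 by 2
D2 = np.array([[5.0]])                # second square block, 1 by 1

B = np.zeros((3, 3))
B[:2, :2] = D1                        # blocks sit along the diagonal...
B[2:, 2:] = D2                        # ...everything else stays zero
```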
A(B + C) = AB + AC. Add then multiply, or multiply then add.
Invert A by row operations on [A I] to reach [I A⁻¹].
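The Gauss-Jordan idea above can be sketched as follows (a teaching sketch with partial pivoting added for stability; the test matrix is an illustrative choice):

```python
import numpy as np

# Row-reduce the block matrix [A | I] until the left half becomes I;
# the right half is then A^-1.
def invert(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])    # the block [A I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # best pivot row
        M[[col, pivot]] = M[[pivot, col]]              # swap it into place
        M[col] /= M[col, col]                          # scale pivot to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]         # clear the column
    return M[:, n:]                                    # right half = A^-1

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
A_inv = invert(A)
```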
Hermitian matrix A^H = Ā^T = A.
Complex analog āji = aij of a symmetric matrix.
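A small sketch with an illustrative complex matrix: the conjugate transpose equals the matrix itself, and (a standard consequence) the eigenvalues come out real.

```python
import numpy as np

# Hermitian: A^H (conjugate transpose) = A, so a_ji = conj(a_ij)
# and the diagonal entries are real.
A = np.array([[2.0,        1.0 + 1.0j],
              [1.0 - 1.0j, 3.0       ]])

is_hermitian = np.allclose(A, A.conj().T)   # A^H = A
eigs = np.linalg.eigvalsh(A)                # Hermitian => real eigenvalues
```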
Jordan form J = M⁻¹AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J1, ..., Js). The block Jk is λkIk + Nk, where Nk has 1's on diagonal 1 (the superdiagonal). Each block has one eigenvalue λk and one eigenvector.
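A single Jordan block can be sketched in numpy (λ = 5 is an illustrative value); the point is that the repeated eigenvalue comes with only one independent eigenvector:

```python
import numpy as np

# Jordan block J_k = lambda_k * I_k + N_k, with 1's on the superdiagonal.
lam = 5.0
Jk = lam * np.eye(2) + np.eye(2, k=1)   # [[5, 1], [0, 5]]

eigs = np.linalg.eigvals(Jk)            # lambda = 5, repeated twice
# Eigenvectors solve (Jk - 5I)v = 0; that nullspace is 1-dimensional,
# so this block cannot be diagonalized.
nullity = 2 - np.linalg.matrix_rank(Jk - lam * np.eye(2))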
Left inverse A+.
If A has full column rank n, then A+ = (AT A)-I AT has A+ A = In.
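A numerical check of the formula on an illustrative 3 by 2 matrix of full column rank:

```python
import numpy as np

# Left inverse: A+ = (A^T A)^-1 A^T gives A+ A = I_n.
# (A A+ is only a projection, not I_m, when m > n.)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])             # 3 by 2, full column rank

A_plus = np.linalg.inv(A.T @ A) @ A.T  # the left inverse, 2 by 3
left = A_plus @ A                      # equals the 2 by 2 identity
```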
Minimal polynomial of A.
The lowest degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; always m(λ) divides p(λ).
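A tiny illustration with A = 2I (an illustrative choice): the characteristic polynomial is (λ − 2)², but the minimal polynomial is just λ − 2, and it already annihilates A:

```python
import numpy as np

# For A = 2I (2 by 2): p(lambda) = (lambda - 2)^2, m(lambda) = lambda - 2.
A = 2.0 * np.eye(2)
m_of_A = A - 2.0 * np.eye(2)                             # m(A) = zero matrix
p_of_A = (A - 2.0 * np.eye(2)) @ (A - 2.0 * np.eye(2))   # p(A) = 0 too
```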
Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
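The triangular example can be checked directly (a 3 by 3 shift matrix, chosen for illustration):

```python
import numpy as np

# Strictly upper triangular N: zero diagonal, 1's on the superdiagonal.
N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

N2 = N @ N                           # still nonzero
N3 = N2 @ N                          # the zero matrix: N^3 = 0
eigenvalues = np.linalg.eigvals(N)   # all zero (repeated 3 times)
```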
Norm ‖A‖.
The "ℓ2 norm" of A is the maximum ratio ‖Ax‖/‖x‖ = σmax. Then ‖Ax‖ ≤ ‖A‖ ‖x‖ and ‖AB‖ ≤ ‖A‖ ‖B‖ and ‖A + B‖ ≤ ‖A‖ + ‖B‖. Frobenius norm: ‖A‖F² = Σ Σ aij². The ℓ1 and ℓ∞ norms are the largest column and row sums of |aij|.
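All four norms are available through `np.linalg.norm`; the matrix below is an illustrative value:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

l2   = np.linalg.norm(A, 2)        # l2 norm = sigma_max of A
frob = np.linalg.norm(A, 'fro')    # sqrt of the sum of all a_ij^2
l1   = np.linalg.norm(A, 1)        # largest column sum of |a_ij|
linf = np.linalg.norm(A, np.inf)   # largest row sum of |a_ij|

sigma_max = np.linalg.svd(A, compute_uv=False)[0]   # largest singular value
```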
Particular solution x p.
Any solution to Ax = b; often xp has free variables = 0.
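A one-equation illustration (the system below is my own example): with the free variables set to zero, the pivot variable determines the particular solution.

```python
import numpy as np

# One equation x1 + 2*x2 + 3*x3 = 6. Setting the free variables
# x2 = x3 = 0 gives the particular solution xp = (6, 0, 0).
A = np.array([[1.0, 2.0, 3.0]])
b = np.array([6.0])

xp = np.array([6.0, 0.0, 0.0])   # free variables set to zero
residual = A @ xp - b            # zero: xp solves Ax = b
```

Every other solution is xp plus a nullspace vector of A.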
Pseudoinverse A+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A+) = N(AT). A+A and AA+ are the projection matrices onto the row space and column space. Rank(A+) = rank(A).
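The projection properties can be verified numerically on a rank-deficient illustrative matrix:

```python
import numpy as np

# A rank-1 matrix has no true inverse, but it has a pseudoinverse.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])

A_plus = np.linalg.pinv(A)   # Moore-Penrose pseudoinverse, 2 by 3
P_row = A_plus @ A           # projection onto the row space
P_col = A @ A_plus           # projection onto the column space
```

Projections satisfy P² = P and P = P^T, and rank(A+) = rank(A) = 1 here.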
Rank one matrix A = uvT ≠ 0.
Column and row spaces = lines cu and cv.
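The outer product makes this concrete (u and v below are illustrative values): every column is a multiple of u, every row a multiple of v^T.

```python
import numpy as np

# Rank one: A = u v^T via the outer product.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])

A = np.outer(u, v)                # a 3 by 2 matrix of rank 1
rank = np.linalg.matrix_rank(A)
```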
Row picture of Ax = b.
Each equation gives a plane in R^n; the planes intersect at x.
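In R^2 each equation is a line, and solving Ax = b finds the intersection point (the system below is an illustrative example):

```python
import numpy as np

# Row picture: two lines in the plane meeting at the solution x.
A = np.array([[1.0,  1.0],    # x + y = 3
              [1.0, -1.0]])   # x - y = 1
b = np.array([3.0, 1.0])

x = np.linalg.solve(A, b)     # the intersection point (2, 1)
```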
Similar matrices A and B.
Every B = M⁻¹AM has the same eigenvalues as A.
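A quick numerical check with illustrative A and M (any invertible M works; the eigenvectors change but the eigenvalues do not):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # any invertible matrix

B = np.linalg.inv(M) @ A @ M  # similar to A
eig_A = np.sort(np.linalg.eigvals(A))
eig_B = np.sort(np.linalg.eigvals(B))
```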
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
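The "minimum cost at a corner" fact can be illustrated by brute force on a tiny LP of my own choosing: corners are basic feasible solutions (solve for m chosen columns, set the other variables to zero). The real simplex method walks edge to edge instead of enumerating every corner.

```python
import numpy as np
from itertools import combinations

A = np.array([[1.0, 1.0, 1.0]])   # one constraint: x1 + x2 + x3 = 4
b = np.array([4.0])
c = np.array([3.0, 1.0, 2.0])     # cost vector to minimize

corners = []
m, n = A.shape
for cols in combinations(range(n), m):       # choose m basic columns
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                             # singular basis, skip
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)    # free variables stay 0
    if np.all(x >= -1e-12):
        corners.append(x)                    # a corner of the feasible set

best = min(corners, key=lambda x: c @ x)     # minimum cost at a corner
```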
Spanning set v1, ..., vm.
Combinations of v1, ..., vm fill the space. The columns of A span C(A)!
Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
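The filter interpretation can be sketched in numpy (the two-tap filter and signal are illustrative values): a lower-triangular Toeplitz matrix applies the same filter at every time step, which is exactly convolution.

```python
import numpy as np

h = np.array([1.0, 0.5])             # a two-tap filter
x = np.array([1.0, 2.0, 3.0, 4.0])   # the input signal

n = len(x)
# Toeplitz: h[0] down the main diagonal, h[1] down the subdiagonal.
T = h[0] * np.eye(n) + h[1] * np.eye(n, k=-1)

y_matrix = T @ x                     # filter applied via the matrix
y_conv = np.convolve(x, h)[:n]       # the same thing as convolution
```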
Transpose matrix AT.
Entries (AT)ij = Aji. AT is n by m, ATA is square, symmetric, positive semidefinite. The transposes of AB and A⁻¹ are BTAT and (AT)⁻¹.
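These facts check out numerically on illustrative matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])           # m by n = 3 by 2, so A^T is 2 by 3
B = np.array([[1.0, 0.0],
              [2.0, 1.0]])

AtA = A.T @ A                                       # square and symmetric
product_rule = np.allclose((A @ B).T, B.T @ A.T)    # (AB)^T = B^T A^T
eigs = np.linalg.eigvalsh(AtA)                      # all >= 0: semidefinite
```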