Change of basis matrix M.
The old basis vectors v_j are combinations Σ_i m_ij w_i of the new basis vectors. The coordinates of c_1 v_1 + ... + c_n v_n = d_1 w_1 + ... + d_n w_n are related by d = M c. (For n = 2, set v_1 = m_11 w_1 + m_21 w_2 and v_2 = m_12 w_1 + m_22 w_2.)
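As a small numeric sketch of the n = 2 formulas (the entries of M and the coordinates c are made-up values, not from the text):

```python
# Change of basis for n = 2 (hypothetical values for M and c).
# v1 = m11*w1 + m21*w2 and v2 = m12*w1 + m22*w2, so d = M c.
m11, m12, m21, m22 = 1.0, 2.0, 0.0, 1.0   # M = [[m11, m12], [m21, m22]]
c1, c2 = 3.0, 4.0                          # coordinates in the old basis v1, v2

d1 = m11 * c1 + m12 * c2                   # first coordinate in the new basis
d2 = m21 * c1 + m22 * c2                   # second coordinate in the new basis
print(d1, d2)
```

With w_1, w_2 the standard basis, v_1 = (1, 0) and v_2 = (2, 1), and c_1 v_1 + c_2 v_2 = (11, 4) = d_1 w_1 + d_2 w_2, so both coordinate vectors name the same point.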
Characteristic equation det(A - λI) = 0.
The n roots are the eigenvalues of A.
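A quick check with a made-up 2-by-2 matrix: for n = 2 the characteristic polynomial is λ² - trace(A)·λ + det(A), and its roots match the eigenvalues.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                    # hypothetical example matrix

# det(A - lambda*I) = lambda^2 - trace(A)*lambda + det(A) for a 2x2 matrix
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
roots = np.sort(np.roots(coeffs).real)        # roots of the characteristic equation
print(roots)                                  # eigenvalues 2 and 5
print(np.sort(np.linalg.eigvals(A).real))     # same values from the eigenvalue routine
```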
Cholesky factorization A = C^T C = (L√D)(L√D)^T for positive definite A.
Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c_0 I + c_1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors are in the Fourier matrix F.
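To illustrate (3-by-3, with made-up coefficients c): build C from powers of the cyclic shift S, and check that Cx equals the cyclic convolution c * x computed through the Fourier transform, which is where the eigenvectors live.

```python
import numpy as np

c = np.array([1.0, 2.0, 3.0])                 # hypothetical circulant coefficients
S = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])               # cyclic shift matrix
C = c[0] * np.eye(3) + c[1] * S + c[2] * (S @ S)

x = np.array([1.0, 4.0, 2.0])
direct = C @ x                                 # multiply by the circulant
via_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real  # cyclic convolution c * x
print(direct, via_fft)
```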
Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.
Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^{-1} A S = Λ = eigenvalue matrix.
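A numeric sketch with a made-up 2-by-2 matrix that has distinct eigenvalues, so diagonalizability is automatic:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # hypothetical matrix, eigenvalues 5 and 2
eigvals, S = np.linalg.eig(A)              # columns of S are independent eigenvectors
Lam = np.linalg.inv(S) @ A @ S             # S^{-1} A S
print(np.round(Lam, 10))                   # diagonal eigenvalue matrix
```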
Dot product = Inner product x^T y = x_1 y_1 + ... + x_n y_n.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)_ij = (row i of A) · (column j of B).
Fibonacci numbers 0, 1, 1, 2, 3, 5, ...
They satisfy F_n = F_{n-1} + F_{n-2} = (λ_1^n - λ_2^n)/(λ_1 - λ_2). The growth rate λ_1 = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [[1, 1], [1, 0]].
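Both facts can be checked numerically; the values below follow directly from the recurrence and from λ_1, λ_2 = (1 ± √5)/2.

```python
import math

# Consecutive ratios F_{n+1}/F_n approach the growth rate lambda_1.
a, b = 0, 1                                # F_0, F_1
for _ in range(30):
    a, b = b, a + b                        # step the recurrence
ratio = b / a                              # F_31 / F_30
lam1 = (1 + math.sqrt(5)) / 2
lam2 = (1 - math.sqrt(5)) / 2
print(ratio, lam1)

# Closed form: F_10 = (lam1^10 - lam2^10) / (lam1 - lam2)
f10 = (lam1**10 - lam2**10) / (lam1 - lam2)
print(round(f10))                          # 55
```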
Free variable Xi.
Column i has no pivot in elimination. We can give the n - r free variables any values, then Ax = b determines the r pivot variables (if solvable!).
Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicular from Ax = 0, with dimensions n - r and r). Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.
Determinant |A| = det(A).
|A^{-1}| = 1/|A| and |A^T| = |A|. The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n - 1, and volume of box = |det(A)|.
Left inverse A^+.
If A has full column rank n, then A^+ = (A^T A)^{-1} A^T has A^+ A = I_n.
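A check with a made-up 3-by-2 matrix of full column rank:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])                    # hypothetical, full column rank n = 2
A_plus = np.linalg.inv(A.T @ A) @ A.T         # left inverse (A^T A)^{-1} A^T
print(np.round(A_plus @ A, 10))               # A+ A = I_2
```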
Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.
Nullspace N(A).
All solutions to Ax = 0. Dimension n - r = (# columns) - rank.
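The dimension count n - r can be confirmed numerically (made-up rank-1 matrix with n = 3 columns):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # hypothetical: rank r = 1, n = 3 columns
r = np.linalg.matrix_rank(A)
nullity = A.shape[1] - r                  # dimension of N(A) = n - r
print(nullity)                            # 2 independent solutions of Ax = 0
```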
Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n, then Q^T = Q^{-1} and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
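A sketch using a 2-by-2 rotation matrix, whose columns are orthonormal by construction (the angle is arbitrary):

```python
import numpy as np

t = 0.3                                       # arbitrary rotation angle
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])       # orthonormal columns q1, q2
print(np.round(Q.T @ Q, 10))                  # Q^T Q = I

v = np.array([2.0, -1.0])
expansion = sum((v @ Q[:, j]) * Q[:, j] for j in range(2))  # sum of (v^T qj) qj
print(expansion)                              # recovers v
```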
Plane (or hyperplane) in R^n.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.
Row space C(A^T) = all combinations of rows of A.
Column vectors by convention.
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost occurs at a corner!
Transpose matrix A^T.
Entries (A^T)_ij = A_ji. A^T is n by m, A^T A is square, symmetric, and positive semidefinite. The transposes of AB and A^{-1} are B^T A^T and (A^T)^{-1}.
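The two transpose rules are easy to confirm with made-up invertible matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])                # hypothetical invertible matrices
B = np.array([[3.0, 0.0],
              [1.0, 2.0]])

rule_ab = np.allclose((A @ B).T, B.T @ A.T)                     # (AB)^T = B^T A^T
rule_inv = np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T))  # (A^{-1})^T = (A^T)^{-1}
print(rule_ab, rule_inv)
```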
Tridiagonal matrix T: t_ij = 0 if |i - j| > 1.
T^{-1} has rank 1 above and below the diagonal.
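The rank statement means every block of T^{-1} taken entirely from above (or below) the diagonal has rank 1. A check with the -1, 2, -1 second-difference matrix, a standard tridiagonal example chosen here for illustration:

```python
import numpy as np

# Tridiagonal -1, 2, -1 matrix of size 4 (a standard example)
T = np.diag([2.0] * 4) + np.diag([-1.0] * 3, 1) + np.diag([-1.0] * 3, -1)
Tinv = np.linalg.inv(T)

block = Tinv[np.ix_([0, 1], [2, 3])]      # a block strictly above the diagonal
print(np.linalg.matrix_rank(block))       # rank 1
```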