Solutions for Chapter 11.4: Singular Sturm–Liouville Problems
Full solutions for Elementary Differential Equations and Boundary Value Problems | 10th Edition

- 11.4.1: Find a formal solution of the nonhomogeneous boundary value problem...
- 11.4.2: Consider the boundary value problem −(xy′)′ = λxy; y, y′ bounded as x → 0, y(1...
- 11.4.3: Consider the problem −(xy′)′ + (k²/x)y = λxy; y, y′ bounded as x → 0, y(1) = 0...
- 11.4.4: Consider Legendre's equation (see Problems 22 through 24 of Section 5.3) −[(1 − x...
- 11.4.5: The equation (1 − x²)y″ − xy′ + λy = 0 (i) is Chebyshev's equation; see of Se...
Cofactor C_ij.
Remove row i and column j; multiply the determinant by (-1)^(i+j).
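The delete-a-row-and-column rule can be checked with a minimal NumPy sketch (the 2-by-2 matrix is a made-up example, not from the text):

```python
import numpy as np

def cofactor(A, i, j):
    """(-1)**(i+j) times the determinant with row i and column j removed."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# Cofactor expansion along row 0 recovers det A = 1*4 - 2*3 = -2.
expansion = sum(A[0, j] * cofactor(A, 0, j) for j in range(2))
assert abs(expansion - np.linalg.det(A)) < 1e-9
print("cofactor expansion matches det A")
```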
Complete solution x = x_p + x_n to Ax = b.
(Particular solution x_p) + (any x_n in the nullspace).
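A small numerical illustration of x = x_p + x_n (the matrix, right side, and nullspace vector here are invented for the sketch):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([6.0, 5.0])

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
x_n = np.array([1.0, -2.0, 1.0])             # A @ x_n = 0: a nullspace vector

# every shift of x_p by a nullspace vector still solves Ax = b
for t in (0.0, 1.0, -3.5):
    assert np.allclose(A @ (x_p + t * x_n), b)
print("x_p plus anything in the nullspace solves Ax = b")
```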
Diagonalization Λ = S^-1 A S.
Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^-1.
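The power formula A^k = S Λ^k S^-1 can be verified directly (the 2-by-2 matrix is an arbitrary example with distinct eigenvalues 5 and 2):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, S = np.linalg.eig(A)        # columns of S are eigenvectors
Lam = np.diag(lam)

assert np.allclose(S @ Lam @ np.linalg.inv(S), A)           # A = S Λ S^-1
assert np.allclose(S @ np.diag(lam**5) @ np.linalg.inv(S),  # A^5 = S Λ^5 S^-1
                   np.linalg.matrix_power(A, 5))
print("A^5 via diagonalization matches matrix_power")
```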
Distributive law A(B + C) = AB + AC.
Add then multiply, or multiply then add.
Free columns of A.
Columns without pivots; these are combinations of earlier columns.
Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.
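A quick check of the antidiagonal property (the particular choice h_ij = 1/(i + j + 1) is just one function of i + j, picked for the sketch):

```python
import numpy as np

n = 4
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

# every antidiagonal (constant i + j) holds a single value
for s in range(2 * n - 1):
    vals = [H[i, s - i] for i in range(n) if 0 <= s - i < n]
    assert len(set(vals)) == 1
print("each antidiagonal of H is constant")
```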
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j - 1) = integral from 0 to 1 of x^(i-1) x^(j-1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
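A sketch that builds hilb(n) from the entry formula and exhibits the huge condition number (the size n = 8 is an arbitrary choice):

```python
import numpy as np

def hilb(n):
    """Hilbert matrix: H_ij = 1/(i + j - 1) with 1-based i, j."""
    i, j = np.indices((n, n)) + 1
    return 1.0 / (i + j - 1)

H = hilb(8)
# ill-conditioned: the condition number grows explosively with n
print(f"cond(hilb(8)) is about {np.linalg.cond(H):.1e}")
```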
Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.
Inverse matrix A^-1.
Square matrix with A^-1 A = I and A A^-1 = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)_ij = C_ji / det A.
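The cofactor formula (A^-1)_ij = C_ji / det A can be tested on a small example (the matrix below has det A = 1, chosen to keep the numbers exact):

```python
import numpy as np

def cofactor(A, i, j):
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])          # det A = 1, so A is invertible
n = A.shape[0]
C = np.array([[cofactor(A, i, j) for j in range(n)] for i in range(n)])
A_inv = C.T / np.linalg.det(A)      # note the transpose: (A^-1)_ij = C_ji / det A
assert np.allclose(A_inv @ A, np.eye(n)) and np.allclose(A @ A_inv, np.eye(n))
print("cofactor formula reproduces A^-1")
```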
Iterative method.
A sequence of steps intended to approach the desired solution.
Left inverse A+.
If A has full column rank n, then A+ = (A^T A)^-1 A^T has A+ A = I_n.
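A sketch with a tall made-up matrix of full column rank, checking that A+ is a left inverse (but not a right inverse, since A A+ is only m-by-m of rank n):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                    # 3 x 2, full column rank n = 2
A_plus = np.linalg.inv(A.T @ A) @ A.T         # A+ = (A^T A)^-1 A^T
assert np.allclose(A_plus @ A, np.eye(2))     # left inverse: A+ A = I_n
print(A_plus.shape)  # (2, 3)
```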
Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.
Network.
A directed graph that has constants c_1, ..., c_m associated with the edges.
Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^-1 and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
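Both properties — Q^T Q = I and the expansion v = Σ (v^T q_j) q_j — can be checked with NumPy's QR factorization (the input matrices and the vector v are arbitrary choices for the sketch):

```python
import numpy as np

# QR gives orthonormal columns q_1, ..., q_n
M = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, _ = np.linalg.qr(M)
assert np.allclose(Q.T @ Q, np.eye(2))        # q_i^T q_j = 0 or 1

# square case m = n: the columns form a basis and the expansion recovers v
Q2, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((3, 3)))
v = np.array([2.0, -1.0, 4.0])
recon = sum((v @ Q2[:, j]) * Q2[:, j] for j in range(3))
assert np.allclose(recon, v)
print("orthonormal expansion recovers v")
```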
Rotation.
R = [c -s; s c] rotates the plane by θ and R^-1 = R^T rotates back by -θ. Eigenvalues are e^(iθ) and e^(-iθ), eigenvectors are (1, ±i). c, s = cos θ, sin θ.
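A sketch confirming R^-1 = R^T and the complex eigenvalues (the angle θ = 0.3 is an arbitrary pick):

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])

assert np.allclose(R.T @ R, np.eye(2))     # R^T undoes R, so R^-1 = R^T
w = np.linalg.eigvals(R)                   # eigenvalues e^(±iθ)
assert np.allclose(np.sort_complex(w),
                   np.sort_complex([np.exp(1j * theta), np.exp(-1j * theta)]))
print("R is orthogonal with eigenvalues e^(±iθ)")
```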
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
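The "minimum cost at a corner" fact can be seen by brute force — enumerating all basic feasible solutions rather than running the simplex pivot rule itself. The tiny LP below (one equality constraint, three variables) is an invented example:

```python
import itertools
import numpy as np

# minimize c·x subject to Ax = b, x >= 0
A = np.array([[1.0, 1.0, 1.0]])   # feasible set: triangle x1 + x2 + x3 = 4, x >= 0
b = np.array([4.0])
c = np.array([2.0, 3.0, 1.0])

m, n = A.shape
corners = []
for cols in itertools.combinations(range(n), m):   # pick m basic columns
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)
    if np.all(x >= -1e-12):                        # feasible corner
        corners.append((float(c @ x), x.tolist()))

best = min(corners)
print(best)  # (4.0, [0.0, 0.0, 4.0]) -- minimum cost at a corner
```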
Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.
Transpose matrix A^T.
Entries (A^T)_ij = A_ji. A^T is n by m, A^T A is square, symmetric, positive semidefinite. The transposes of AB and A^-1 are B^T A^T and (A^-1)^T.
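The transpose identities check out numerically (the random rectangular matrices are placeholders for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))     # m x n, so A.T is n x m
B = rng.standard_normal((2, 4))

assert np.allclose((A @ B).T, B.T @ A.T)          # (AB)^T = B^T A^T
G = A.T @ A                                       # square and symmetric
assert np.allclose(G, G.T)
assert np.all(np.linalg.eigvalsh(G) >= -1e-12)    # positive semidefinite
print("transpose identities check out")
```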
Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.
Vector v in Rn.
Sequence of n real numbers v = (v_1, ..., v_n) = point in R^n.