 11.4.1: Find a formal solution of the nonhomogeneous boundary value problem...
 11.4.2: Consider the boundary value problem −(xy′)′ = λxy; y, y′ bounded as x → 0, y(1...
 11.4.3: Consider the problem −(xy′)′ + (k²/x)y = λxy; y, y′ bounded as x → 0, y(1) = 0...
 11.4.4: Consider Legendre's equation (see Problems 22 through 24 of Section 5.3) [(1 − x...
 11.4.5: The equation (1 − x²)y″ − xy′ + λy = 0 (i) is Chebyshev's equation; see of Se...
Solutions for Chapter 11.4: Singular Sturm-Liouville Problems
Full solutions for Elementary Differential Equations and Boundary Value Problems, 10th Edition
ISBN: 9780470458310
Elementary Differential Equations and Boundary Value Problems (edition: 10) is associated with ISBN 9780470458310. This textbook survival guide covers the book's chapters and their solutions. Chapter 11.4: Singular Sturm-Liouville Problems includes 5 full step-by-step solutions, and more than 16433 students have viewed the step-by-step solutions from this chapter.

Cofactor Cij.
Remove row i and column j; multiply the determinant by (−1)^(i+j).
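A minimal NumPy sketch of this recipe (the `cofactor` helper and the 2-by-2 example matrix are illustrative, not from the text):

```python
import numpy as np

def cofactor(A, i, j):
    # Cofactor C_ij: delete row i and column j, take the determinant of the
    # remaining minor, and attach the sign (-1)**(i + j). 0-based indices here.
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# For this 2-by-2 matrix, C_00 is the opposite diagonal entry, 4.
print(cofactor(A, 0, 0))
```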

Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).
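A small NumPy illustration of the split (the rank-1 matrix and nullspace vector below are made up for the example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1, so Ax = b has many solutions
b = np.array([3.0, 6.0])     # b lies in the column space of A

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
x_n = np.array([2.0, -1.0])                  # spans the nullspace: A @ x_n = 0

# The complete solution is x_p plus any multiple of x_n.
for c in (0.0, 1.0, -3.5):
    assert np.allclose(A @ (x_p + c * x_n), b)
```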

Diagonalization
Λ = S⁻¹AS. Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = SΛ^k S⁻¹.
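A NumPy check of the factorization on a small made-up matrix with distinct eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])             # eigenvalues 5 and 2

eigvals, S = np.linalg.eig(A)          # S holds eigenvectors as columns
Lam = np.diag(eigvals)                 # Lambda = eigenvalue matrix

# A = S Lambda S^-1, and powers come cheaply from A^k = S Lambda^k S^-1.
assert np.allclose(S @ Lam @ np.linalg.inv(S), A)

k = 5
A_k = S @ np.diag(eigvals ** k) @ np.linalg.inv(S)
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```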

Distributive Law
A(B + C) = AB + AC. Add then multiply, or multiply then add.

Free columns of A.
Columns without pivots; these are combinations of earlier columns.

Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.
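A sketch of building such a matrix by hand (`scipy.linalg.hankel` offers the same construction; the numbers below are arbitrary):

```python
import numpy as np

def hankel(first_col, last_row):
    # h[i, j] depends only on i + j, so every antidiagonal is constant.
    # Assumes first_col[-1] == last_row[0].
    c = list(first_col) + list(last_row[1:])
    return np.array([[c[i + j] for j in range(len(last_row))]
                     for i in range(len(first_col))])

H = hankel([1, 2, 3], [3, 4, 5])
print(H)   # antidiagonals hold 1, 2, 3, 4, 5
```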

Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
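The ill-conditioning is easy to see numerically; a short NumPy sketch (n = 8 chosen arbitrarily):

```python
import numpy as np

n = 8
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)      # 0-based indices: (i+1) + (j+1) - 1 = i + j + 1

eigs = np.linalg.eigvalsh(H)
print("lambda_min:", eigs.min())               # positive but tiny
print("condition number:", np.linalg.cond(H))  # roughly 1e10 for n = 8
```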

Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.

Inverse matrix A⁻¹.
Square matrix with A⁻¹A = I and AA⁻¹ = I. No inverse if det A = 0 (equivalently, rank(A) < n, or Ax = 0 for some nonzero vector x). The inverses of AB and Aᵀ are B⁻¹A⁻¹ and (A⁻¹)ᵀ. Cofactor formula: (A⁻¹)_ij = C_ji / det A.

Iterative method.
A sequence of steps intended to approach the desired solution.

Left inverse A⁺.
If A has full column rank n, then A⁺ = (AᵀA)⁻¹Aᵀ has A⁺A = I_n.
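A NumPy check of the formula on a made-up tall matrix:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])     # 3 by 2 with full column rank n = 2

A_plus = np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(A_plus @ A, np.eye(2))      # A+ A = I_n: left inverse
assert not np.allclose(A @ A_plus, np.eye(3))  # A A+ is only a projection
```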

Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.

Network.
A directed graph that has constants c_1, ..., c_m associated with the edges.
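One common matrix encoding of such a network is the edge-node incidence matrix; a minimal sketch with a hypothetical 3-node, 3-edge network:

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]   # directed edges (start, end)
A = np.zeros((len(edges), 3))
for row, (start, end) in enumerate(edges):
    A[row, start] = -1.0           # edge leaves its start node
    A[row, end] = 1.0              # and enters its end node
# Edge constants c_1, ..., c_m would weight these rows.

# Every row sums to zero, so (1, 1, 1) is in the nullspace:
# only potential differences across edges matter.
assert np.allclose(A @ np.ones(3), 0.0)
```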

Orthonormal vectors q_1, ..., q_n.
Dot products are q_iᵀq_j = 0 if i ≠ j and q_iᵀq_i = 1. The matrix Q with these orthonormal columns has QᵀQ = I. If m = n then Qᵀ = Q⁻¹ and q_1, ..., q_n is an orthonormal basis for Rⁿ: every v = Σ (vᵀq_j) q_j.
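These identities can be verified with a random orthogonal matrix from QR factorization (the seed and size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # orthonormal columns

assert np.allclose(Q.T @ Q, np.eye(4))       # q_i . q_j = 0 or 1
assert np.allclose(Q.T, np.linalg.inv(Q))    # square case: Q^T = Q^-1

v = rng.standard_normal(4)
expansion = sum((v @ Q[:, j]) * Q[:, j] for j in range(4))
assert np.allclose(expansion, v)             # v = sum (v . q_j) q_j
```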

Rotation matrix
R = [c −s; s c] rotates the plane by θ, and R⁻¹ = Rᵀ rotates back by −θ. Eigenvalues are e^(iθ) and e^(−iθ); eigenvectors are (1, ±i). Here c, s = cos θ, sin θ.
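A quick NumPy check of these facts (theta = 0.3 is arbitrary):

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])

assert np.allclose(R.T @ R, np.eye(2))   # R^-1 = R^T: rotating back

eigvals = np.linalg.eigvals(R)           # e^{i theta} and e^{-i theta}
expected = [np.exp(-1j * theta), np.exp(1j * theta)]
assert np.allclose(np.sort_complex(eigvals), np.sort_complex(expected))
```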

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.

Transpose matrix AT.
Entries (Aᵀ)_ij = A_ji. Aᵀ is n by m; AᵀA is square, symmetric, and positive semidefinite. The transposes of AB and A⁻¹ are BᵀAᵀ and (Aᵀ)⁻¹.
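A NumPy spot-check of these facts on random matrices (the sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))      # A is m by n, so A^T is n by m
B = rng.standard_normal((2, 4))

G = A.T @ A                          # square and symmetric
assert np.allclose(G, G.T)
assert (np.linalg.eigvalsh(G) >= -1e-12).all()   # positive semidefinite

assert np.allclose((A @ B).T, B.T @ A.T)         # reverse-order rule
```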

Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.

Vector v in Rn.
Sequence of n real numbers v = (v_1, ..., v_n) = point in Rⁿ.