5.10.1: Consider the Fourier sine series for f(x) = 1 on the interval 0 ≤ x ≤ L...
5.10.2: Obtain a formula for an infinite series using Parseval's equality ap...
5.10.3: (c) Fourier sine series of f(x) = x on the interval 0 ≤ x ≤ L. C...
5.10.4: (a) Using Parseval's equality, express the error in terms of the tai...
5.10.5: Show that if L(f) = d/dx(p df/dx) + qf, then ∫_a^b f L(f) dx = [p f df/dx]...
5.10.6: Assuming that the operations of summation and integration can be in...
5.10.7: Using Exercises 5.10.5 and 5.10.6, prove that ∑_{n=1}^∞ λ_n a_n² = [p f df/dx]_a^b...
5.10.8: Using Exercises 5.10.5 and 5.10.6, prove that ∑_{n=1}^∞ λ_n a_n² = [p f df/dx]_a^b...
Solutions for Chapter 5.1: Sturm–Liouville Eigenvalue Problems
Full solutions for Applied Partial Differential Equations with Fourier Series and Boundary Value Problems, 5th Edition
ISBN: 9780321797056

Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
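A quick numerical sanity check of the identity; the pure-Python `matmul` helper is an illustrative sketch, not part of the glossary.

```python
# Verify (AB)C == A(BC) on small integer matrices.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]

assert matmul(matmul(A, B), C) == matmul(A, matmul(B, C))
```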

Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
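A sketch of the block rule for a 4×4 product cut into 2×2 blocks, e.g. C11 = A11·B11 + A12·B21; the helper names (`matmul`, `madd`, `assemble`) are illustrative, not standard.

```python
# Multiply two 4x4 matrices blockwise and compare with the full product.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def assemble(TL, TR, BL, BR):
    # glue four 2x2 blocks into one 4x4 matrix
    return [r1 + r2 for r1, r2 in zip(TL, TR)] + \
           [r1 + r2 for r1, r2 in zip(BL, BR)]

A11, A12 = [[1, 2], [3, 4]], [[0, 1], [1, 0]]
A21, A22 = [[2, 0], [0, 2]], [[1, 1], [1, 1]]
B11, B12 = [[1, 0], [0, 1]], [[2, 2], [2, 2]]
B21, B22 = [[0, 0], [0, 0]], [[1, 0], [0, 1]]

A = assemble(A11, A12, A21, A22)
B = assemble(B11, B12, B21, B22)

# block rule: C11 = A11 B11 + A12 B21, and so on
C11 = madd(matmul(A11, B11), matmul(A12, B21))
C12 = madd(matmul(A11, B12), matmul(A12, B22))
C21 = madd(matmul(A21, B11), matmul(A22, B21))
C22 = madd(matmul(A21, B12), matmul(A22, B22))

assert assemble(C11, C12, C21, C22) == matmul(A, B)
```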

Companion matrix.
Put c1, ..., cn in row n and put n − 1 ones just above the main diagonal. Then det(A − λI) = ±(c1 + c2λ + c3λ² + ··· + cnλⁿ⁻¹ − λⁿ).
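A hedged check of the construction: for c = (6, −11, 6) the characteristic polynomial is ±(6 − 11λ + 6λ² − λ³), with roots 1, 2, 3. The cofactor-expansion `det` is a small illustrative helper, fine for these sizes.

```python
def det(M):
    # cofactor expansion along the first row (fine for small n)
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

c = [6, -11, 6]                              # coefficients c1, c2, c3
n = len(c)
A = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]
A[n - 1] = c[:]                              # row n holds c1, ..., cn

# det(A - lam I) vanishes exactly at the roots 1, 2, 3
for lam in (1, 2, 3):
    B = [[A[i][j] - (lam if i == j else 0) for j in range(n)]
         for i in range(n)]
    assert det(B) == 0
```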

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing ½xᵀAx − xᵀb over growing Krylov subspaces.
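A minimal conjugate-gradient sketch in pure Python for a small symmetric positive definite system; the function names (`cg`, `matvec`, `dot`) and the tolerance are illustrative choices, not from the glossary.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def cg(A, b, tol=1e-10, max_iter=100):
    x = [0.0] * len(b)
    r = b[:]                      # residual b - Ax (x = 0 initially)
    p = r[:]                      # first search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # symmetric positive definite
b = [1.0, 2.0]
x = cg(A, b)
assert all(abs(v - w) < 1e-8 for v, w in zip(matvec(A, x), b))
```

For an n×n system, exact arithmetic would reach the solution in at most n steps; in floating point it is used as an iterative method.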

Cramer's Rule for Ax = b.
Bj has b replacing column j of A; xj = det Bj / det A.
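A 2×2 worked example of the rule; `det2` is a small illustrative helper.

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2.0, 1.0], [5.0, 3.0]]      # det A = 1
b = [4.0, 11.0]

x = []
for j in range(2):
    Bj = [row[:] for row in A]
    for i in range(2):
        Bj[i][j] = b[i]           # b replaces column j of A
    x.append(det2(Bj) / det2(A))

# 2x + y = 4, 5x + 3y = 11  ->  x = 1, y = 2
assert x == [1.0, 2.0]
```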

Factorization
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
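A sketch of elimination recording its multipliers; the `lu` helper assumes no row exchanges are needed (pivots stay nonzero), which holds for the example matrix.

```python
def lu(A):
    # Doolittle elimination without row exchanges; L[i][i] = 1
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]        # multiplier l_ik
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
L, U = lu(A)
assert matmul(L, U) == A     # L brings U back to A
```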

Four Fundamental Subspaces C(A), N(A), C(Aᵀ), N(Aᵀ).
Use Aᴴ in place of Aᵀ for complex A.

Fourier matrix F.
Entries Fjk = e^(2πijk/n) give orthogonal columns: F̄ᵀF = nI. Then y = Fc is the (inverse) Discrete Fourier Transform: yj = Σ ck e^(2πijk/n).
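A small check of the column orthogonality using the standard library's `cmath`; n = 4 is an arbitrary illustrative size.

```python
import cmath

n = 4
F = [[cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)]
     for j in range(n)]

# conj(F)^T F = n I: columns are orthogonal with length sqrt(n)
for a in range(n):
    for b in range(n):
        s = sum(F[j][a].conjugate() * F[j][b] for j in range(n))
        expected = n if a == b else 0
        assert abs(s - expected) < 1e-9
```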

Hankel matrix H.
Constant along each antidiagonal; hij depends on i + j.
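A one-liner construction from a sequence h0, h1, ...; the sample values are arbitrary.

```python
# Each entry H[i][j] = h[i + j], so every antidiagonal is constant.
h = [1, 2, 3, 4, 5, 6, 7]
n = 4
H = [[h[i + j] for j in range(n)] for i in range(n)]

assert H[0][3] == H[1][2] == H[2][1] == H[3][0] == 4
```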

Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., Aʲ⁻¹b. Numerical methods approximate A⁻¹b by xj with residual b − Axj in this subspace. A good basis for Kj requires only multiplication by A at each step.
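A sketch of building the spanning vectors, showing that each step costs only one matrix–vector product; `krylov_basis` is an illustrative name (in practice the vectors would be orthogonalized, e.g. by Arnoldi).

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def krylov_basis(A, b, j):
    # vectors b, Ab, ..., A^(j-1) b
    vecs = [b[:]]
    for _ in range(j - 1):
        vecs.append(matvec(A, vecs[-1]))   # one multiplication by A
    return vecs

A = [[2, 1], [0, 3]]
b = [0, 1]
assert krylov_basis(A, b, 3) == [[0, 1], [1, 3], [5, 9]]
```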

Markov matrix M.
All mij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If mij > 0, the columns of Mᵏ approach the steady-state eigenvector s with Ms = s > 0.
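A power-iteration sketch of convergence to the steady state; the matrix and starting distribution are illustrative (here the steady state works out to (0.6, 0.4)).

```python
def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

M = [[0.8, 0.3], [0.2, 0.7]]   # all entries > 0, columns sum to 1
x = [1.0, 0.0]                  # any starting probability vector
for _ in range(100):
    x = matvec(M, x)            # M^k x approaches the steady state

# steady state s solves M s = s: here s = (0.6, 0.4)
assert abs(x[0] - 0.6) < 1e-9 and abs(x[1] - 0.4) < 1e-9
```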

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; m(λ) always divides p(λ).
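An illustrative repeated-eigenvalue case: for the Jordan-type matrix below, A − 3I is nonzero but (A − 3I)² = 0, so m(λ) = (λ − 3)² coincides with p(λ); for A = 3I the minimal polynomial would drop to λ − 3.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 1], [0, 3]]                       # eigenvalue 3, repeated
B = [[A[i][j] - (3 if i == j else 0) for j in range(2)]
     for i in range(2)]                    # B = A - 3I

assert B != [[0, 0], [0, 0]]               # the degree-1 candidate fails
assert matmul(B, B) == [[0, 0], [0, 0]]    # m(lambda) = (lambda - 3)^2
```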

Nilpotent matrix N.
Some power of N is the zero matrix, Nᵏ = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
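The standard 3×3 example: strictly upper triangular, so N³ = 0 while N² is not yet zero.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # triangular with zero diagonal
N2 = matmul(N, N)
N3 = matmul(N2, N)

assert N2 != [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
assert N3 == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```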

Orthonormal vectors q1, ..., qn.
Dot products are qiᵀqj = 0 if i ≠ j and qiᵀqi = 1. The matrix Q with these orthonormal columns has QᵀQ = I. If m = n then Qᵀ = Q⁻¹ and q1, ..., qn is an orthonormal basis for Rⁿ: every v = Σ (vᵀqj)qj.
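The columns of a 2×2 rotation matrix make a convenient check: they are orthonormal, and any v is recovered from its expansion v = Σ (vᵀqj)qj. The angle and test vector are arbitrary.

```python
import math

t = 0.3
q1 = [math.cos(t), math.sin(t)]
q2 = [-math.sin(t), math.cos(t)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# orthonormality: unit length, zero dot product
assert abs(dot(q1, q2)) < 1e-12 and abs(dot(q1, q1) - 1) < 1e-12

v = [2.0, -1.0]
recon = [dot(v, q1) * q1[i] + dot(v, q2) * q2[i] for i in range(2)]
assert all(abs(a - b) < 1e-12 for a, b in zip(recon, v))
```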

Plane (or hyperplane) in Rn.
Vectors x with aᵀx = 0. The plane is perpendicular to a ≠ 0.

Semidefinite matrix A.
(Positive) semidefinite: all xᵀAx ≥ 0 and all λ ≥ 0; A = any RᵀR.
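A sketch with a rank-1 choice of R, so A = RᵀR is semidefinite but not definite: xᵀAx = |Rx|² ≥ 0 always, and zero is attained on the nullspace.

```python
R = [[1, 2], [0, 0]]                     # rank 1, so A is singular
A = [[sum(R[k][i] * R[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]                  # A = R^T R = [[1,2],[2,4]]

def quad(A, x):
    return sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))

for x in ([1, 0], [0, 1], [2, -1], [-3, 5]):
    assert quad(A, x) >= 0
assert quad(A, [2, -1]) == 0             # semidefinite: zero is attained
```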

Similar matrices A and B.
Every B = M⁻¹AM has the same eigenvalues as A.
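Similar matrices share the characteristic polynomial, so for a 2×2 case it suffices to check that trace and determinant agree; the matrices below are arbitrary illustrative choices.

```python
A = [[4, 1], [2, 3]]
M = [[1, 1], [0, 1]]
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / detM, -M[0][1] / detM],
        [-M[1][0] / detM, M[0][0] / detM]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = matmul(Minv, matmul(A, M))           # B = M^{-1} A M

trace = lambda X: X[0][0] + X[1][1]
det = lambda X: X[0][0] * X[1][1] - X[0][1] * X[1][0]
assert trace(B) == trace(A) and det(B) == det(A)
```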

Skewsymmetric matrix K.
The transpose is −K, since Kij = −Kji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.
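A 2×2 sketch: Kᵀ = −K forces xᵀKx = 0 for every x, and det(K − λI) = λ² + a² puts the eigenvalues at ±ai on the imaginary axis. The value a = 3 is arbitrary.

```python
import cmath

a = 3
K = [[0, a], [-a, 0]]
assert all(K[j][i] == -K[i][j] for i in range(2) for j in range(2))

# x^T K x = 0 for every x (a consequence of skew-symmetry)
for x in ([1, 2], [-3, 5]):
    assert sum(x[i] * K[i][j] * x[j] for i in range(2)
               for j in range(2)) == 0

# eigenvalues solve lambda^2 = -a^2: pure imaginary, here ±3i
lam = cmath.sqrt(-a * a)
assert abs(lam - 3j) < 1e-12
```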

Spectrum of A = the set of eigenvalues {λ1, ..., λn}.
Spectral radius = max of |λi|.

Vector v in Rn.
Sequence of n real numbers v = (v1, ..., vn) = point in Rⁿ.