 11.6.1: Extend the results of Example 1 by finding the smallest value of n ...
 11.6.2: Let f(x) = x for 0 < x < 1, and let φm(x) = √2 sin mπx. (a) Find the ...
 11.6.3: Follow the instructions for using f(x) = x(1 − x) for 0 < x < 1.
 11.6.4: In this problem we show that pointwise convergence of a sequence Sn...
 11.6.5: Suppose that the functions φ1, ... , φn satisfy the orthonormality re...
 11.6.6: In this problem we show by examples that the (Riemann) integrabilit...
 11.6.7: Suppose that it is desired to construct a set of polynomials f0(x),...
 11.6.8: Suppose that it is desired to construct a set of polynomials P0(x),...
 11.6.9: Suppose that it is desired to construct a set of polynomials P0(x),...
 11.6.10: In 10 through 12, let φ1, φ2, ... , φn, ... be the normalized eigenfun...
 11.6.11: In 10 through 12, let φ1, φ2, ... , φn, ... be the normalized eigenfun...
 11.6.12: In 10 through 12, let φ1, φ2, ... , φn, ... be the normalized eigenfun...
 11.6.13: Show that Parseval's equation in 9(e) is obtained formally by squari...
Solutions for Chapter 11.6: Series of Orthogonal Functions: Mean Convergence
Full solutions for Elementary Differential Equations and Boundary Value Problems, 10th Edition
ISBN: 9780470458310
Chapter 11.6: Series of Orthogonal Functions: Mean Convergence includes 13 full step-by-step solutions. This textbook survival guide was created for the textbook Elementary Differential Equations and Boundary Value Problems, 10th edition, associated with ISBN 9780470458310. Since all 13 problems in this chapter have been answered, more than 16825 students have viewed full step-by-step solutions from it.

Characteristic equation det(A − λI) = 0.
The n roots are the eigenvalues of A.
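For a 2 by 2 matrix the characteristic equation is the quadratic λ² − trace(A)λ + det(A) = 0, solvable directly. A minimal sketch; the matrix A below is a made-up example:

```python
import math

# Made-up 2x2 example: det(A - lam*I) = lam^2 - trace*lam + det = 0
A = [[4.0, 1.0],
     [2.0, 3.0]]

trace = A[0][0] + A[1][1]                  # 7.0
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]    # 10.0
disc = trace*trace - 4.0*det               # discriminant of the quadratic
lam1 = (trace + math.sqrt(disc)) / 2.0     # larger eigenvalue: 5.0
lam2 = (trace - math.sqrt(disc)) / 2.0     # smaller eigenvalue: 2.0
```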

Complete solution x = xp + xn to Ax = b.
(Particular xp) + (xn in nullspace).
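A quick particular-plus-nullspace sketch on a made-up singular system (the names xp and xn are illustrative):

```python
A = [[1.0, 1.0],
     [2.0, 2.0]]            # singular: nullspace spanned by (1, -1)
b = [3.0, 6.0]

xp = [3.0, 0.0]             # one particular solution: A @ xp = b
xn = [1.0, -1.0]            # nullspace vector: A @ xn = 0
t = 2.5                     # any multiple of xn can be added
x = [xp[i] + t*xn[i] for i in range(2)]   # complete solution x = xp + t*xn

Ax = [sum(A[i][j]*x[j] for j in range(2)) for i in range(2)]  # still equals b
```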

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing ½xᵀAx − xᵀb over growing Krylov subspaces.
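A minimal conjugate-gradient sketch in pure Python, on a made-up 2 by 2 symmetric positive definite system; in exact arithmetic CG finishes an n by n system in at most n steps:

```python
def conjugate_gradient(A, b, iters):
    """Minimize (1/2) x^T A x - x^T b over growing Krylov subspaces."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A @ x (x starts at zero)
    p = r[:]                       # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][k] * p[k] for k in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-14:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]       # made-up symmetric positive definite matrix
b = [1.0, 2.0]
x = conjugate_gradient(A, b, 2)    # exact after n = 2 steps: [1/11, 7/11]
```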

Covariance matrix Σ.
When random variables xi have mean = average value = 0, their covariances Σij are the averages of xixj. With means x̄i, the matrix Σ = mean of (x − x̄)(x − x̄)ᵀ is positive (semi)definite; Σ is diagonal if the xi are independent.
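A sketch of the sample version of this formula, with three made-up observations of two variables:

```python
samples = [[2.0, 1.0], [4.0, 3.0], [6.0, 5.0]]   # made-up data: 3 samples of (x1, x2)
n, d = len(samples), len(samples[0])

mean = [sum(s[k] for s in samples) / n for k in range(d)]         # [4.0, 3.0]
cov = [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / n
        for j in range(d)] for i in range(d)]
# The two variables move together here, so cov is positive semidefinite
# with rank 1 (every entry equals 8/3).
```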

Diagonal matrix D.
dij = 0 if i ≠ j. Block-diagonal: zero outside square blocks Dii.

Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A − λI) = 0.

Factorization A = LU.
If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
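A sketch of elimination without row exchanges (Doolittle form), on a made-up 2 by 2 matrix; the multipliers land below the diagonal of L:

```python
def lu_no_pivot(A):
    """Doolittle A = LU: multipliers l_ij go into L (with l_ii = 1)."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]         # multiplier l_ik
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]    # elimination step
    return L, U

A = [[2.0, 1.0],
     [6.0, 8.0]]                 # made-up example
L, U = lu_no_pivot(A)            # L = [[1,0],[3,1]], U = [[2,1],[0,5]]
```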

Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n − 1)/2 edges between nodes. A tree has only n − 1 edges and no closed loops.
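The two edge counts can be checked directly; a small sketch with a made-up node count n = 5:

```python
from itertools import combinations

n = 5                                             # made-up node count
complete_edges = list(combinations(range(n), 2))  # all pairs: n(n-1)/2 = 10 edges
tree_edges = [(i, i + 1) for i in range(n - 1)]   # a path tree: n-1 = 4 edges, no loops
```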

Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and −).

Kronecker product (tensor product) A ⊗ B.
Blocks aijB, eigenvalues λp(A)λq(B).
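The block structure is easy to build by hand; a minimal sketch with made-up 2 by 2 matrices:

```python
def kron(A, B):
    """Kronecker product: the block in position (i, j) is A[i][j] * B."""
    rows = []
    for i in range(len(A)):
        for p in range(len(B)):
            rows.append([A[i][j] * B[p][q]
                         for j in range(len(A[0]))
                         for q in range(len(B[0]))])
    return rows

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
K = kron(A, B)   # 4x4: blocks 1*B, 2*B in the first block row, 3*B, 4*B below
```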

Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^(j−1)b. Numerical methods approximate A⁻¹b by xj with residual b − Axj in this subspace. A good basis for Kj requires only multiplication by A at each step.
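A sketch of building the raw Krylov vectors, with a made-up diagonal matrix so each product is easy to check (in practice the basis would be orthogonalized):

```python
def krylov_basis(A, b, j):
    """Vectors b, Ab, ..., A^(j-1) b; each new one costs one product with A."""
    n = len(b)
    cols = [b[:]]
    for _ in range(j - 1):
        v = cols[-1]
        cols.append([sum(A[i][k] * v[k] for k in range(n)) for i in range(n)])
    return cols

A = [[2.0, 0.0],
     [0.0, 3.0]]                   # made-up example
b = [1.0, 1.0]
K = krylov_basis(A, b, 3)          # [[1,1], [2,3], [4,9]]
```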

|A⁻¹| = 1/|A| and |Aᵀ| = |A|.
The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n − 1, volume of box = |det(A)|.
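The big formula can be coded directly as a sum over all n! permutations; a sketch on a made-up 3 by 3 matrix (practical codes use elimination instead, since n! grows fast):

```python
from itertools import permutations

def det_big_formula(A):
    """Big formula: sum over all n! permutations of sign * product of entries."""
    n = len(A)
    total = 0.0
    for perm in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        prod = 1.0
        for i in range(n):
            prod *= A[i][perm[i]]        # one entry from each row and column
        total += (-1) ** inversions * prod
    return total

A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 10.0]]     # made-up example, determinant -3
d = det_big_formula(A)
```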

Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers have |ℓij| ≤ 1. See condition number.
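A sketch of elimination with partial pivoting (PA = LU) on a made-up 2 by 2 matrix; the row swap keeps every multiplier at magnitude 1 or less:

```python
def lu_partial_pivot(A):
    """PA = LU: pick the largest available pivot in each column,
    so every multiplier satisfies |l_ij| <= 1."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    perm = list(range(n))
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(U[r][k]))   # largest pivot
        U[k], U[p] = U[p], U[k]
        perm[k], perm[p] = perm[p], perm[k]
        for c in range(k):               # keep earlier multipliers with their rows
            L[k][c], L[p][c] = L[p][c], L[k][c]
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return perm, L, U

A = [[1.0, 4.0],
     [3.0, 5.0]]                 # made-up example; |3| > |1|, so rows swap
perm, L, U = lu_partial_pivot(A)
```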

Pseudoinverse A⁺ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A⁺) = N(Aᵀ). A⁺A and AA⁺ are the projection matrices onto the row space and column space. Rank(A⁺) = rank(A).
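For a diagonal rectangular matrix the pseudoinverse can be written down by hand: invert each nonzero entry and flip the shape. A made-up sketch:

```python
A = [[2.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]        # made-up m=2 by n=3 diagonal matrix, rank 1
A_plus = [[0.5, 0.0],
          [0.0, 0.0],
          [0.0, 0.0]]        # n=3 by m=2: invert the nonzero entry, flip the shape

# A_plus @ A is the projection onto the row space: diag(1, 0, 0)
proj_row = [[sum(A_plus[i][k] * A[k][j] for k in range(2)) for j in range(3)]
            for i in range(3)]
```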

Rayleigh quotient q(x) = xᵀAx / xᵀx for symmetric A: λmin ≤ q(x) ≤ λmax.
Those extremes are reached at the eigenvectors x for λmin(A) and λmax(A).
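The bounds are easy to see on a made-up diagonal (hence symmetric) matrix, where the eigenvalues and eigenvectors are known:

```python
def rayleigh(A, x):
    """q(x) = x^T A x / x^T x."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * Ax[i] for i in range(n)) / sum(xi * xi for xi in x)

A = [[2.0, 0.0],
     [0.0, 5.0]]                   # made-up symmetric matrix, eigenvalues 2 and 5
q_min = rayleigh(A, [1.0, 0.0])    # 2.0: the minimum, at the eigenvector for lam_min
q_max = rayleigh(A, [0.0, 1.0])    # 5.0: the maximum, at the eigenvector for lam_max
q_mid = rayleigh(A, [1.0, 1.0])    # 3.5: always between lam_min and lam_max
```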

Saddle point of f(x1, ..., xn).
A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂xi∂xj = Hessian matrix) is indefinite.
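The standard made-up example is f(x, y) = x² − y²: both first derivatives vanish at the origin, and the Hessian diag(2, −2) has eigenvalues of both signs, so the origin is a saddle:

```python
# f(x, y) = x**2 - y**2: gradient (2x, -2y), Hessian diag(2, -2)
def grad(x, y):
    return (2 * x, -2 * y)

hessian = [[2.0, 0.0],
           [0.0, -2.0]]            # indefinite: one positive, one negative eigenvalue
origin_is_critical = grad(0.0, 0.0) == (0.0, 0.0)
```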

Similar matrices A and B.
Every B = M⁻¹AM has the same eigenvalues as A.
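For 2 by 2 matrices this can be checked through the trace (sum of eigenvalues) and determinant (product of eigenvalues), which similar matrices share; A and M below are made up:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(M):
    """Explicit inverse of a 2x2 matrix."""
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

A = [[4.0, 1.0], [0.0, 2.0]]          # made-up example, eigenvalues 4 and 2
M = [[1.0, 1.0], [0.0, 1.0]]
B = matmul(inv2(M), matmul(A, M))     # similar to A

trace_B = B[0][0] + B[1][1]                      # 6.0 = 4 + 2
det_B = B[0][0] * B[1][1] - B[0][1] * B[1][0]    # 8.0 = 4 * 2
```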

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
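A Toeplitz matrix is determined by its first column and first row, since T[i][j] depends only on i − j; a sketch with made-up entries:

```python
def toeplitz(first_col, first_row):
    """T[i][j] depends only on i - j, so T is constant down each diagonal."""
    n = len(first_col)
    return [[first_col[i - j] if i >= j else first_row[j - i] for j in range(n)]
            for i in range(n)]

T = toeplitz([1, 2, 3], [1, 0, 4])   # made-up first column and first row
# [[1, 0, 4],
#  [2, 1, 0],
#  [3, 2, 1]]
```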

Triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖.
For matrix norms ‖A + B‖ ≤ ‖A‖ + ‖B‖.
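A quick numeric check of the vector version with two made-up perpendicular vectors, where the inequality is strict:

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

u = [3.0, 0.0]                               # made-up vectors
v = [0.0, 4.0]
lhs = norm([u[i] + v[i] for i in range(2)])  # ||u + v|| = 5.0
rhs = norm(u) + norm(v)                      # ||u|| + ||v|| = 7.0
```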