10.5.1: Consider F(ω) = e^(−αω), α > 0 (ω ≥ 0). (a) Derive the inverse Fourier sine tra...
10.5.2: Consider f(x) = e^(−αx), α > 0 (x ≥ 0). (a) Derive the Fourier sine transform...
10.5.3: Derive either the Fourier cosine transform of e^(−αx²) or the Fourier si...
10.5.4: (a) Derive (10.5.26) using Green's formula. (b) Do the same for (10....
10.5.5: (a) Show that the Fourier sine transform of f(x) is an odd function...
10.5.6: There is an interesting convolution-type theorem for Fourier sine t...
10.5.7: Derive the following: If a Fourier cosine transform in x, H(ω), is t...
10.5.8: Solve (10.5.1)-(10.5.3) using the convolution theorem of Exercise 10...
10.5.9: Solve (10.5.1)-(10.5.3) using the convolution theorem of Exercise 10...
10.5.10: Determine the inverse cosine transform of e^(−αω). (Hint: Use differentia...
10.5.11: Consider ∂u/∂t = k ∂²u/∂x², x > 0, t > 0, u(0, t) = 1, u(x, 0) = f(x). (a) ...
10.5.12: Solve ∂u/∂t = k ∂²u/∂x² (x > 0), ∂u/∂x(0, t) = 0, u(x, 0) = f(x).
10.5.13: Solve (10.5.28)-(10.5.30) by solving (10.5.32).
10.5.14: Consider ∂u/∂t = k ∂²u/∂x² − v₀ ∂u/∂x (x > 0), u(0, t) = 0, u(x, 0) = f(x). (a)...
10.5.15: Solve ∂²u/∂x² + ∂²u/∂y² = 0, 0 < ...
10.5.16: Solve ∂²u/∂x² + ∂²u/∂y² = 0, 0 < ...
10.5.17: The effect of periodic surface heating (either daily or seasonal) o...
10.5.18: Reconsider Exercise 10.5.17. Determine u(x, t) exactly. (Hint: See ...
10.5.19: (a) Determine a particular solution of Exercise 10.5.17, satisfying...
10.5.20: Solve the heat equation, 0 < ...
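Several of these exercises solve the half-line heat equation ∂u/∂t = k ∂²u/∂x² with u(0, t) = 0 via the Fourier sine transform: transform the initial condition, damp each mode by e^(−kω²t), and invert. A minimal numerical sketch of that recipe, where k, the truncated grids, and f(x) = x e^(−x²) are illustrative choices, not from the text:

```python
import numpy as np

k = 1.0
x = np.linspace(0.0, 20.0, 2001); dx = x[1] - x[0]   # truncated half-line
w = np.linspace(0.0, 20.0, 2001); dw = w[1] - w[0]   # truncated frequency range
f = x * np.exp(-x**2)                                 # decays fast, f(0) = 0

S = np.sin(np.outer(w, x))      # sin(w_i x_j), reused for both directions
F = (S @ f) * dx                # F(w) = integral_0^inf f(x) sin(wx) dx

def u(t):
    """u(x, t) = (2/pi) integral_0^inf F(w) e^(-k w^2 t) sin(wx) dw"""
    return (2.0 / np.pi) * (S.T @ (F * np.exp(-k * w**2 * t))) * dw

u0 = u(0.0)                     # should reproduce f to quadrature accuracy
```

At t = 0 the inversion recovers f, and for t > 0 the solution decays while staying exactly zero at x = 0, matching the boundary condition.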
Solutions for Chapter 10.5: Infinite Domain Problems: Fourier Transform Solutions of Partial Differential Equations
Full solutions for Applied Partial Differential Equations with Fourier Series and Boundary Value Problems, 5th Edition
ISBN: 9780321797056

Cayley-Hamilton Theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.
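A quick numerical check of the theorem, using an illustrative 2 × 2 matrix: for 2 × 2, p(λ) = λ² − trace(A)λ + det(A), so substituting A should give the zero matrix.

```python
import numpy as np

# Illustrative 2x2 example (not from the text)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# p(A) = A^2 - trace(A)*A + det(A)*I should vanish by Cayley-Hamilton
pA = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
```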

Companion matrix.
Put c₁, ..., cₙ in row n and put n − 1 ones just above the main diagonal. Then det(A − λI) = ±(c₁ + c₂λ + c₃λ² + ... + cₙλ^(n−1) − λⁿ).
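A sketch of this construction (the helper `companion` and the coefficients are illustrative): the eigenvalues of the companion matrix are exactly the roots of λⁿ = c₁ + c₂λ + ... + cₙλ^(n−1).

```python
import numpy as np

def companion(c):
    """Companion matrix per the definition above: c1..cn in the last row,
    n-1 ones just above the main diagonal."""
    n = len(c)
    A = np.zeros((n, n))
    A[np.arange(n - 1), np.arange(1, n)] = 1.0   # superdiagonal of ones
    A[-1, :] = c
    return A

# lambda^3 = 6 - 11*lambda + 6*lambda^2 factors as (lambda-1)(lambda-2)(lambda-3)
A = companion([6.0, -11.0, 6.0])
eigs = np.sort(np.linalg.eigvals(A).real)
```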

Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_ii.

Fibonacci numbers.
0, 1, 1, 2, 3, 5, ... satisfy Fₙ = Fₙ₋₁ + Fₙ₋₂ = (λ₁ⁿ − λ₂ⁿ)/(λ₁ − λ₂). The growth rate λ₁ = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [[1, 1], [1, 0]].
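The matrix view can be checked directly: powers of the Fibonacci matrix contain consecutive Fibonacci numbers, and its largest eigenvalue is the golden ratio (the small helper `fib` is illustrative).

```python
import numpy as np

F = np.array([[1, 1],
              [1, 0]])              # the Fibonacci matrix

def fib(n):
    # F^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]], so read F_n off the corner
    return int(np.linalg.matrix_power(F, n)[0, 1])

seq = [fib(n) for n in range(1, 9)]
lam_max = np.linalg.eigvalsh(F.astype(float)).max()   # growth rate (1 + sqrt 5)/2
```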

Free variable x_i.
Column i has no pivot in elimination. We can give the n − r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Fundamental Theorem.
The nullspace N(A) and the row space C(Aᵀ) are orthogonal complements in Rⁿ (perpendicularity comes from Ax = 0), with dimensions n − r and r. Applied to Aᵀ, the column space C(A) is the orthogonal complement of N(Aᵀ) in Rᵐ.

Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A⁻¹].
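A minimal sketch of the method (the helper below is illustrative and assumes nonzero pivots appear without row exchanges): do row operations on the block matrix [A I] until the left half is I; the right half is then A⁻¹.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A I] to [I A^-1]. Sketch only: no pivoting."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for j in range(n):
        M[j] /= M[j, j]                   # scale pivot row so the pivot is 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]    # eliminate the (i, j) entry
    return M[:, n:]

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Ainv = gauss_jordan_inverse(A)
```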

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column qⱼ of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
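A sketch of classical Gram-Schmidt (the function and test matrix are illustrative): subtract from each column its projections onto the earlier q's, then normalize; the coefficients fill the upper triangle of R.

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt A = QR with diag(R) > 0 (columns assumed independent)."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float).copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component along earlier q_i
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)       # positive by convention
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = gram_schmidt(A)
```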

Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.

Hypercube matrix P.
Row n + 1 counts corners, edges, faces, ... of a cube in Rⁿ.

Kronecker product (tensor product) A ⊗ B.
Blocks a_ij B; eigenvalues λ_p(A)λ_q(B).
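The eigenvalue rule can be verified numerically with NumPy's `kron` (the two small matrices are illustrative): every eigenvalue of A ⊗ B is a product of one eigenvalue of A and one of B.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
B = np.array([[1.0, 1.0],
              [0.0, 5.0]])
K = np.kron(A, B)   # 4x4 matrix of blocks a_ij * B

# All products lambda_p(A) * lambda_q(B) should match the eigenvalues of K
products = np.sort([a * b for a in np.linalg.eigvals(A)
                          for b in np.linalg.eigvals(B)])
eigs = np.sort(np.linalg.eigvals(K))
```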

Least squares solution x̂.
The vector x̂ that minimizes the error ‖e‖² solves AᵀAx̂ = Aᵀb. Then e = b − Ax̂ is orthogonal to all columns of A.
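A small worked example of the normal equations (the data below are an illustrative straight-line fit, not from the text): solve AᵀAx̂ = Aᵀb and confirm the residual is orthogonal to the columns of A.

```python
import numpy as np

# Fit a line c + d*t through the points (0,1), (1,2), (2,4)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 4.0])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations A^T A x = A^T b
e = b - A @ x_hat                            # residual, perpendicular to col(A)
```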

Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate) / (j-th pivot).
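One elimination step makes the formula concrete (the 2 × 2 matrix is illustrative): the multiplier is the entry to eliminate divided by the pivot above it.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 4.0]])
l21 = A[1, 0] / A[0, 0]       # multiplier: (entry to eliminate) / (first pivot)
A[1] -= l21 * A[0]            # subtract l21 times pivot row 1 from row 2
```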

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Projection matrix P onto subspace S.
The projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P² = P = Pᵀ, eigenvalues are 1 or 0, and eigenvectors are in S or S⊥. If the columns of A are a basis for S, then P = A(AᵀA)⁻¹Aᵀ.
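The listed properties are easy to confirm numerically (A and b below are illustrative): build P = A(AᵀA)⁻¹Aᵀ and check P² = P = Pᵀ, the eigenvalues, and the perpendicular error.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])               # independent columns spanning S
P = A @ np.linalg.inv(A.T @ A) @ A.T     # projection onto col(A)

b = np.array([1.0, 0.0, 0.0])
p = P @ b                                # closest point to b in S
e = b - p                                # error, perpendicular to S
```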

Rank r(A).
= number of pivots = dimension of column space = dimension of row space.

Rayleigh quotient q(x) = xᵀAx / xᵀx for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
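A numerical illustration of the bounds (the random symmetric matrix and sample vectors are illustrative): every quotient lands in [λ_min, λ_max], and an eigenvector for λ_min attains the lower bound.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2                        # symmetrize
lam = np.linalg.eigvalsh(A)              # ascending: lam[0] = min, lam[-1] = max

def rayleigh(x):
    return (x @ A @ x) / (x @ x)

samples = [rayleigh(rng.standard_normal(4)) for _ in range(100)]
q_min = rayleigh(np.linalg.eigh(A)[1][:, 0])   # at the lambda_min eigenvector
```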

Skew-symmetric matrix K.
The transpose is −K, since K_ij = −K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.
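These facts can be checked on the simplest 2 × 2 skew-symmetric matrix (the Taylor-series helper `expm_taylor` is an illustrative stand-in for a library matrix exponential; it is adequate for small matrices):

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential by truncated Taylor series (sketch for small M)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E += term
    return E

K = np.array([[0.0, 1.0],
              [-1.0, 0.0]])       # K^T = -K; eigenvalues are +-i
expKt = expm_taylor(K * 0.7)      # a rotation matrix, hence orthogonal
```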

Solvable system Ax = b.
The right side b is in the column space of A.

Transpose matrix Aᵀ.
Entries (Aᵀ)_ij = A_ji. If A is m by n, then Aᵀ is n by m; AᵀA is square, symmetric, and positive semidefinite. The transposes of AB and A⁻¹ are BᵀAᵀ and (Aᵀ)⁻¹.
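The stated properties are quick to verify (the matrices below are illustrative): AᵀA is symmetric with nonnegative eigenvalues, and (AB)ᵀ = BᵀAᵀ.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # 2 by 3, so A.T is 3 by 2
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])

G = A.T @ A                           # square, symmetric, positive semidefinite
eigs = np.linalg.eigvalsh(G)
AB_T = (A @ B).T                      # should equal B^T A^T
```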