- 10.3.1: Show that the Fourier transform is a linear operator; that is, show...
- 10.3.2: Show that the inverse Fourier transform is a linear operator; that ...
- 10.3.3: Let F(ω) be the Fourier transform of f(x). Show that if f(x) is real...
- 10.3.4: Show that F[∫ f(x; β) dβ] = ∫ F(ω; β) dβ.
- 10.3.5: If F(ω) is the Fourier transform of f(x), show that the inverse Four...
- 10.3.6: If f(x) = 0 for |x| > a and f(x) = 1 for |x| < a, determine the Fourier transform of f...
- 10.3.7: If F(ω) = e^{−|ω|α} (α > 0), determine the inverse Fourier transform of F(ω)...
- 10.3.8: If F(ω) = e^{−|ω|α} (α > 0), determine the inverse Fourier transform of F(ω)...
- 10.3.9: (a) Multiply (10.3.6) by e^{iωx}, and integrate from −L to L to show tha...
- 10.3.10: Consider the circularly symmetric heat equation on an infinite two-...
- 10.3.11: (a) If f(x) is a function with unit area, show that the scaled and ...
- 10.3.12: Show that lim_{b→∞} ∫_b^{b+ix/2} e^{−s²} ds = 0, where s = b + iy (0 < y < x/2).
- 10.3.13: Evaluate I = ∫₀^∞ e^{−kω²t} cos ωx dω in the following way: determine ∂I/∂x, an...
- 10.3.14: The gamma function Γ(x) is defined as follows: Γ(x) = ∫₀^∞ t^{x−1} e^{−t} dt. Show that (a) Γ(1) = 1 ...
- 10.3.15: (a) Using the definition of the gamma function in Exercise 10.3.14,...
- 10.3.16: Evaluate ∫₀^∞ y^p e^{−ky^n} dy in terms of the gamma function (see Exercise ...
- 10.3.17: From complex variables, it is known that ∮ e^{iω³/3} dω = 0 for any close...
- 10.3.18: (a) For what c does c e^{−(x−x₀)²} have unit area for −∞ < x < ∞? (b) Show that th...
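Several of these exercises have classical closed forms that can be spot-checked numerically. For instance, the box function of Exercise 10.3.6, under the transform convention F(ω) = (1/2π)∫ f(x)e^{−iωx} dx (conventions vary between texts; some move the 2π to the inverse transform), has transform sin(aω)/(πω). A minimal sketch assuming NumPy is available:

```python
import numpy as np

def box_transform_numeric(omega, a=1.0, n=200001):
    # F(omega) = (1/(2*pi)) * integral_{-a}^{a} e^{-i*omega*x} dx,
    # since f(x) = 1 on |x| < a and 0 outside.
    x = np.linspace(-a, a, n)
    fx = np.exp(-1j * omega * x)
    h = x[1] - x[0]
    integral = h * (fx.sum() - 0.5 * (fx[0] + fx[-1]))  # trapezoid rule
    return integral / (2 * np.pi)

def box_transform_exact(omega, a=1.0):
    # Closed-form answer: sin(a*omega) / (pi*omega)
    return np.sin(a * omega) / (np.pi * omega)

assert abs(box_transform_numeric(2.5) - box_transform_exact(2.5)) < 1e-7
```

The numerical integral agrees with the closed form to within the trapezoid-rule error, which is tiny at this grid resolution.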
Solutions for Chapter 10.3: Infinite Domain Problems: Fourier Transform Solutions of Partial Differential Equations
Full solutions for Applied Partial Differential Equations with Fourier Series and Boundary Value Problems | 5th Edition
Back substitution.
Upper triangular systems are solved in reverse order xn to x1.
Column space C(A).
Space of all combinations of the columns of A.
Complete solution x = xp + xn to Ax = b.
(Particular xp) + (xn in nullspace).
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing ½xᵀAx − xᵀb over growing Krylov subspaces.
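The entry above can be illustrated with a minimal conjugate gradient sketch (NumPy assumed; a textbook implementation, not production code — the test matrix here is an arbitrary random SPD example):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Minimize (1/2) x^T A x - x^T b for symmetric positive definite A.

    The minimizer solves Ax = b; each iterate lies in a growing
    Krylov subspace span{b, Ab, A^2 b, ...}.
    """
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x          # residual = negative gradient of the quadratic
    p = r.copy()           # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # next A-conjugate direction
        rs_old = rs_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)   # symmetric positive definite
b = rng.standard_normal(6)
x = conjugate_gradient(A, b)
assert np.allclose(A @ x, b, atol=1e-8)
```

In exact arithmetic CG reaches the minimizer in at most n steps; in practice it converges much sooner for well-conditioned A.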
Covariance matrix Σ.
When random variables xi have mean = average value = 0, their covariances σij are the averages of xixj. With means x̄i, the matrix Σ = mean of (x − x̄)(x − x̄)ᵀ is positive (semi)definite; Σ is diagonal if the xi are independent.
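A quick numerical illustration with NumPy (sampled data, so the off-diagonal entries of the sample covariance are only approximately zero for independent variables):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two independent zero-mean variables, 100000 samples each (one per column)
X = rng.standard_normal((2, 100000))
Sigma = np.cov(X)            # 2x2 sample covariance matrix

# Positive semidefinite: all eigenvalues >= 0 (up to roundoff)
assert np.all(np.linalg.eigvalsh(Sigma) >= -1e-12)
# Near-diagonal, since the two variables are independent
assert abs(Sigma[0, 1]) < 0.02
# Each variance is close to the true value 1
assert abs(Sigma[0, 0] - 1) < 0.05
```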
Cyclic shift S.
Permutation with s21 = 1, s32 = 1, ..., finally s1n = 1. Its eigenvalues are the nth roots e^{2πik/n} of 1; eigenvectors are columns of the Fourier matrix F.
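A NumPy check of the eigenvalue claim (n = 8 is an arbitrary illustrative size):

```python
import numpy as np

n = 8
# Cyclic shift: rolling the identity's rows puts s21 = s32 = ... = 1
# and s1n = 1.
S = np.roll(np.eye(n), 1, axis=0)
eigs = np.linalg.eigvals(S)

# Every nth root of unity e^{2*pi*i*k/n} appears among the eigenvalues
for k in range(n):
    root = np.exp(2j * np.pi * k / n)
    assert np.min(np.abs(eigs - root)) < 1e-8

# Shifting n times returns every entry home: S^n = I
assert np.allclose(np.linalg.matrix_power(S, n), np.eye(n))
```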
Diagonal matrix D.
dij = 0 if i ≠ j. Block-diagonal: zero outside square blocks Dii.
Fast Fourier Transform (FFT).
A factorization of the Fourier matrix Fn into ℓ = log₂ n matrices Si times a permutation. Each Si needs only n/2 multiplications, so Fn x and Fn⁻¹ c can be computed with nℓ/2 multiplications. Revolutionary.
Fourier matrix F.
Entries Fjk = e^{2πijk/n} give orthogonal columns, F̄ᵀF = nI. Then y = Fc is the (inverse) Discrete Fourier Transform yj = Σ ck e^{2πijk/n}.
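Both claims can be verified directly with NumPy; note that np.fft.ifft carries an extra 1/n factor relative to y = Fc:

```python
import numpy as np

n = 8
idx = np.arange(n)
# Fourier matrix entries F_{jk} = e^{2*pi*i*j*k/n}
F = np.exp(2j * np.pi * np.outer(idx, idx) / n)

# Orthogonal columns: conj(F)^T F = n I
assert np.allclose(F.conj().T @ F, n * np.eye(n), atol=1e-8)

# y = F c is the inverse DFT; NumPy's ifft divides by n
c = np.arange(n, dtype=float)
assert np.allclose(F @ c, n * np.fft.ifft(c), atol=1e-9)
```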
Inverse matrix A⁻¹.
Square matrix with A⁻¹A = I and AA⁻¹ = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and Aᵀ are B⁻¹A⁻¹ and (A⁻¹)ᵀ. Cofactor formula: (A⁻¹)ij = Cji / det A.
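These identities are easy to sanity-check numerically (NumPy assumed; random Gaussian matrices are almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
I = np.eye(4)

Ainv = np.linalg.inv(A)
# A^{-1} A = I and A A^{-1} = I
assert np.allclose(Ainv @ A, I, atol=1e-6)
assert np.allclose(A @ Ainv, I, atol=1e-6)
# (AB)^{-1} = B^{-1} A^{-1}
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv, atol=1e-6)
# (A^T)^{-1} = (A^{-1})^T
assert np.allclose(np.linalg.inv(A.T), Ainv.T, atol=1e-6)
```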
Jordan form J = M⁻¹AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J₁, ..., Jₛ). The block Jk is λkIk + Nk, where Nk has 1's on diagonal 1 (the superdiagonal). Each block has one eigenvalue λk and one eigenvector.
Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^{j−1}b. Numerical methods approximate A⁻¹b by xj with residual b − Axj in this subspace. A good basis for Kj requires only multiplication by A at each step.
Lucas numbers Ln.
Ln = 2, 1, 3, 4, ... satisfy Ln = Ln−1 + Ln−2 = λ₁ⁿ + λ₂ⁿ, with λ₁, λ₂ = (1 ± √5)/2 from the Fibonacci matrix [1 1; 1 0]. Compare L₀ = 2 with F₀ = 0.
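The recurrence and the eigenvalue closed form for the Lucas numbers can be cross-checked in plain Python (a numeric spot check over the first few terms):

```python
from math import sqrt

def lucas(n):
    """L_0 = 2, L_1 = 1, L_n = L_{n-1} + L_{n-2}."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Closed form L_n = lam1^n + lam2^n, where lam1, lam2 = (1 +- sqrt(5))/2
# are the eigenvalues of the Fibonacci matrix [[1, 1], [1, 0]]
lam1 = (1 + sqrt(5)) / 2
lam2 = (1 - sqrt(5)) / 2
for n in range(15):
    assert round(lam1**n + lam2**n) == lucas(n)

assert [lucas(n) for n in range(6)] == [2, 1, 3, 4, 7, 11]
```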
Markov matrix M.
All mij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If mij > 0, the columns of M^k approach the steady-state eigenvector Ms = s > 0.
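A 2×2 example with NumPy (the matrix here is illustrative, not from the text):

```python
import numpy as np

# Columns sum to 1 and all entries are positive
M = np.array([[0.8, 0.3],
              [0.2, 0.7]])
assert np.allclose(M.sum(axis=0), 1.0)

# Largest eigenvalue is 1; its eigenvector is the steady state s with Ms = s
eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
assert np.isclose(eigvals[k].real, 1.0)
s = eigvecs[:, k].real
s = s / s.sum()           # normalize to a probability vector
assert np.all(s > 0)

# Columns of M^k approach the steady state s
Mk = np.linalg.matrix_power(M, 50)
assert np.allclose(Mk[:, 0], s, atol=1e-10)
assert np.allclose(Mk[:, 1], s, atol=1e-10)
```

For this matrix the steady state is s = (0.6, 0.4); the second eigenvalue 0.5 controls how fast the columns converge.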
Multiplicities AM and G M.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
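A standard 2×2 example where the two multiplicities differ, with NumPy assumed available:

```python
import numpy as np

# A Jordan block: lambda = 1 is a double root of det(A - lambda I) = 0,
# so AM = 2, but the eigenspace for lambda = 1 is only one-dimensional.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0

# AM: multiplicity of lam among the eigenvalues
eigvals = np.linalg.eigvals(A)
AM = int(np.sum(np.isclose(eigvals, lam)))

# GM: dimension of the nullspace of A - lam I
GM = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))

assert AM == 2 and GM == 1   # GM <= AM always
```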
Network.
A directed graph that has constants c1, ..., cm associated with the edges.
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
Rank r(A).
The number of pivots = dimension of the column space = dimension of the row space.
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
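The rref, pivot-column, and rank facts above can be demonstrated with SymPy, if it is available (the matrix is an illustrative choice, not from the text):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 4],
               [1, 2, 3, 5],
               [2, 4, 5, 9]])

R, pivots = A.rref()    # reduced row echelon form and pivot column indices
assert pivots == (0, 2)  # columns 0 and 2 contain the pivots

# Pivots equal 1, with zeros above and below each pivot
assert R == sp.Matrix([[1, 2, 0, 2],
                       [0, 0, 1, 1],
                       [0, 0, 0, 0]])

# rank = number of pivots = dimension of the column space
assert A.rank() == len(pivots) == 2
```

The r = 2 nonzero rows of R form a basis for the row space, and the pivot columns of A (not of R) form a basis for the column space.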
Schwarz inequality.
|v·w| ≤ ‖v‖ ‖w‖. Then |vᵀAw|² ≤ (vᵀAv)(wᵀAw) for positive definite A.
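A numerical spot check with NumPy (the vectors and the positive definite A are randomly generated for illustration; the second bound is Cauchy–Schwarz in the A-inner product):

```python
import numpy as np

rng = np.random.default_rng(4)
v = rng.standard_normal(5)
w = rng.standard_normal(5)

# |v . w| <= ||v|| ||w||
assert abs(v @ w) <= np.linalg.norm(v) * np.linalg.norm(w) + 1e-12

# For positive definite A: |v^T A w|^2 <= (v^T A v)(w^T A w)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)   # symmetric positive definite
assert (v @ A @ w) ** 2 <= (v @ A @ v) * (w @ A @ w) + 1e-9
```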