Solutions for Chapter 13.6: Laplace Transform Solution of Partial Differential Equations
Full solutions for Applied Partial Differential Equations with Fourier Series and Boundary Value Problems | 5th Edition
- 13.6.1: (a) Determine the Laplace transform of u(x, t) satisfying (13.6.1)(...
- 13.6.2: Determine the Green's function by inverting (13.6.11) with (13.6.12)...
- 13.6.3: Determine the Laplace transform of the Green's function for the wave...
- 13.6.4: Determine the Laplace transform of the Green's function for the wave...
- 13.6.5: Reconsider Exercise 13.6.4 if (a) h(t) = 0 and q(x, t) = 0 (b) h(t) =...
- 13.6.6: Reconsider Exercise 13.6.4 if, instead, the boundary condition were...
Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c_0 I + c_1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors are in the Fourier matrix F.
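As a minimal numpy sketch (the size and entries here are hypothetical), one can build a circulant from the cyclic shift S, check the convolution rule Cx = c * x, and check a Fourier eigenvector:

```python
import numpy as np

# Hypothetical 4x4 example: C = c_0 I + c_1 S + c_2 S^2 + c_3 S^3.
c = np.array([5.0, 1.0, 2.0, 3.0])
n = len(c)
S = np.roll(np.eye(n), 1, axis=0)   # cyclic shift: S e_k = e_{k+1 mod n}
C = sum(c[k] * np.linalg.matrix_power(S, k) for k in range(n))

# Cx equals the cyclic convolution c * x (computed here via the FFT).
x = np.array([1.0, 0.0, 2.0, -1.0])
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
print(np.allclose(C @ x, conv))     # True

# Columns of the Fourier matrix (conjugated to match numpy's FFT sign
# convention) are eigenvectors of C, with eigenvalues fft(c).
F = np.fft.fft(np.eye(n))
v = np.conj(F[:, 1])
lam = np.fft.fft(c)
print(np.allclose(C @ v, lam[1] * v))   # True
```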
Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system Ax = b is solvable only when b is in the column space C(A).
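A short numpy illustration (with hypothetical 2 x 2 data): solving Ax = b recovers exactly the coefficients that combine the columns of A into b:

```python
import numpy as np

# Column picture: x tells how to combine the columns of A to produce b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 7.0])
x = np.linalg.solve(A, b)                                # x = [1, 2]
print(np.allclose(x[0] * A[:, 0] + x[1] * A[:, 1], b))   # True
```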
Complex conjugate z̄.
z̄ = a - ib for any complex number z = a + ib. Then z z̄ = |z|^2.
Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det(B_j) / det(A).
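A small Python sketch of the rule (fine as an illustration, not for serious computation, since determinants are expensive; the helper name `cramer` is hypothetical):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's Rule: x_j = det(B_j) / det(A)."""
    n = len(b)
    detA = np.linalg.det(A)
    x = np.empty(n)
    for j in range(n):
        Bj = A.copy()
        Bj[:, j] = b            # B_j has b replacing column j of A
        x[j] = np.linalg.det(Bj) / detA
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 7.0])
print(cramer(A, b))             # [1. 2.], matching np.linalg.solve(A, b)
```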
Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.
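A minimal elimination sketch (hypothetical 3 x 3 example; it assumes nonzero pivots appear without row exchanges, exactly the case described above):

```python
import numpy as np

def lu_no_pivot(A):
    """Factor A = LU by elimination, assuming no row exchanges are needed."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # multiplier l_ik (l_ii = 1)
            U[i, :] -= L[i, k] * U[k, :]  # subtract l_ik * (row k)
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))   # True: L with its multipliers brings U back to A
```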
Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use A^H (the conjugate transpose) in place of A^T for complex A.
Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^{-1}].
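A hedged Gauss-Jordan sketch (it assumes nonzero pivots, so no row exchanges; the helper name `invert_by_row_ops` is hypothetical):

```python
import numpy as np

def invert_by_row_ops(A):
    """Row-reduce the block matrix [A I] to [I A^{-1}]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # the block matrix [A I]
    for k in range(n):
        M[k] /= M[k, k]                  # scale pivot row so pivot = 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]   # clear column k above and below
    return M[:, n:]                      # right block is now A^{-1}

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.allclose(invert_by_row_ops(A) @ A, np.eye(2)))  # True
```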
Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and -).
Kronecker product (tensor product) A ⊗ B.
Blocks a_ij B; eigenvalues λ_p(A) λ_q(B).
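A numpy check of the eigenvalue rule on hypothetical triangular factors:

```python
import numpy as np

# Eigenvalues of A ⊗ B are all products λ_p(A) · λ_q(B).
A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0, 1.0], [0.0, 4.0]])
K = np.kron(A, B)                       # blocks a_ij * B
lamA = np.linalg.eigvals(A)             # {2, 3}
lamB = np.linalg.eigvals(B)             # {1, 4}
products = np.sort(np.outer(lamA, lamB).ravel())
print(np.allclose(products, np.sort(np.linalg.eigvals(K))))  # True
```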
Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.
Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.
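In practice the test is rank: the vectors are dependent exactly when the matrix with those columns has rank below n. A tiny sketch with a hypothetical dependent set (v_3 = v_1 + v_2):

```python
import numpy as np

V = np.column_stack([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0],
                     [5.0, 7.0, 9.0]])       # v_3 = v_1 + v_2: dependent
print(np.linalg.matrix_rank(V) < V.shape[1])  # True: rank 2 < 3 columns
```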
Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady-state eigenvector Ms = s > 0.
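A quick numpy illustration with a hypothetical 2 x 2 Markov matrix:

```python
import numpy as np

# Columns sum to 1 and all entries are positive, so M^k converges.
M = np.array([[0.8, 0.3],
              [0.2, 0.7]])
Mk = np.linalg.matrix_power(M, 50)
s = Mk[:, 0]                      # each column approaches the steady state
print(s)                          # ≈ [0.6, 0.4]
print(np.allclose(M @ s, s))      # True: Ms = s, eigenvalue λ = 1
```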
Normal equation A^T A x̂ = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A) · (b - A x̂) = 0.
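A minimal least squares sketch (the data points are hypothetical; fitting a straight line C + Dt):

```python
import numpy as np

# Fit b ≈ C + D t by solving the normal equations A^T A x_hat = A^T b.
t = np.array([0.0, 1.0, 2.0])
b = np.array([6.0, 0.0, 0.0])
A = np.column_stack([np.ones_like(t), t])     # full column rank
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)                                  # [5., -3.]: best line 5 - 3t
# The error b - A x_hat is perpendicular to every column of A:
print(np.allclose(A.T @ (b - A @ x_hat), 0))  # True
```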
Nullspace N(A).
All solutions to Ax = 0. Dimension n - r = (# columns) - rank.
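One way to exhibit a nullspace basis numerically is the SVD: the rows of V^T belonging to zero singular values span N(A). A hypothetical rank-1 example:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank r = 1, n = 3 columns
r = np.linalg.matrix_rank(A)
_, _, Vt = np.linalg.svd(A)
N = Vt[r:].T                         # n - r = 2 basis vectors for N(A)
print(np.allclose(A @ N, 0))         # True: every basis vector solves Ax = 0
```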
Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = 1 or -1) based on the number of row exchanges to reach I.
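A tiny numpy sketch (the row order chosen here is hypothetical):

```python
import numpy as np

order = [2, 0, 1]                    # a chosen order of the rows of I
P = np.eye(3)[order]                 # permutation matrix with those rows
A = np.arange(9.0).reshape(3, 3)
print(np.allclose(P @ A, A[order]))  # True: PA reorders the rows of A
print(np.linalg.det(P))              # 1.0: an even permutation
```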
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S^⊥. If the columns of A are a basis for S then P = A(A^T A)^{-1} A^T.
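A numpy sketch of the projection formula (hypothetical basis and vector b):

```python
import numpy as np

# Columns of A are a basis for S, so P = A (A^T A)^{-1} A^T projects onto S.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T
b = np.array([6.0, 0.0, 0.0])
p = P @ b                                           # closest point to b in S
e = b - p                                           # error vector
print(np.allclose(P @ P, P), np.allclose(P, P.T))   # True True: P^2 = P = P^T
print(np.allclose(A.T @ e, 0))                      # True: e ⊥ columns of A
```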
Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^{-1} is also symmetric.
Unitary matrix U^H = Ū^T = U^{-1}.
Orthonormal columns (complex analog of Q).
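A quick check on a standard unitary matrix, the Fourier matrix scaled by 1/sqrt(n):

```python
import numpy as np

n = 4
U = np.fft.fft(np.eye(n)) / np.sqrt(n)          # normalized Fourier matrix
print(np.allclose(U.conj().T @ U, np.eye(n)))   # True: U^H U = I, so U^H = U^{-1}
```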
Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.