14.2.1: For the following one-dimensional partial differential equations, f...
14.2.2: For the following two-dimensional partial differential equations, f...
14.2.3: Show that any linear partial differential equation with a one-mode ...
14.2.4: Show that any linear partial differential equation (with higher spa...
14.2.5: Water waves satisfy ∂²φ/∂x² + ∂²φ/∂y² = 0 for y < 0, where φ is a velocity po...
14.2.6: Determine the dispersion relation for water waves with surface tens...
14.2.7: Determine the dispersion relation for deep water waves by solving E...
14.2.8: Derive the dispersion relation for an internal wave assuming a two...
14.2.9: Compare phase and group velocities for (a) ∂u/∂t = ∂³u/∂x³ (b) i ∂u/∂t = ...
14.2.10: Determine the group velocity for water waves satisfying ω² = gk tanh(kh)
14.2.11: Tsunamis (water waves generated by earthquakes) are long waves with...
Solutions for Chapter 14.2: Dispersive Waves: Slow Variations, Stability, Nonlinearity, and Perturbation Methods
Full solutions for Applied Partial Differential Equations with Fourier Series and Boundary Value Problems, 5th Edition
ISBN: 9780321797056
Chapter 14.2 (Dispersive Waves: Slow Variations, Stability, Nonlinearity, and Perturbation Methods) includes 11 full step-by-step solutions.

Back substitution.
Upper triangular systems are solved in reverse order, x_n back to x_1.
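A minimal sketch of that reverse-order solve, in Python/NumPy (illustrative only; the matrix and right-hand side below are made up for the example):

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, working from x_n back to x_1."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract the already-known later unknowns, then divide by the pivot.
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
b = np.array([9.0, 7.0, 8.0])
x = back_substitute(U, b)   # x_3 first, then x_2, then x_1
```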

Column space C(A) =
space of all combinations of the columns of A.

Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers l_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
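A sketch of elimination without row exchanges, recording each multiplier l_ij in L so that A = LU (Python/NumPy, illustrative; assumes nonzero pivots):

```python
import numpy as np

def lu_no_exchanges(A):
    """Reduce A to upper triangular U by row operations, storing the
    multipliers l_ij in L so that A = L @ U. Assumes nonzero pivots."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]      # multiplier l_ij
            U[i, :] -= L[i, j] * U[j, :]     # row i minus l_ij times row j
    return L, U

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])
L, U = lu_no_exchanges(A)   # L = [[1,0],[3,1]], U = [[2,1],[0,5]]
```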

Free variable x_i.
Column i has no pivot in elimination. We can give the n - r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Independent vectors v_1, ..., v_k.
No combination c_1 v_1 + ... + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.

Inverse matrix A^-1.
Square matrix with A^-1 A = I and A A^-1 = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)_ij = C_ji / det A.

Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

Multiplication Ax
= x_1 (column 1) + ... + x_n (column n) = combination of columns.
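A quick NumPy check of that column picture (the matrix and vector here are made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([10.0, 1.0])

# Ax is the same combination x_1*(column 1) + x_2*(column 2)
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
```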

Normal equation A^T A x = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A) · (b - Ax) = 0.
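A small least-squares sketch in Python/NumPy (the line-fitting data is invented for the example): solving the normal equation and checking that the residual is orthogonal to the columns of A.

```python
import numpy as np

# Fit a line b ~ c + d*t by least squares: the columns of A are [1, t].
t = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 4.0])
A = np.column_stack([np.ones_like(t), t])

# Normal equation A^T A x = A^T b (A has independent columns,
# so A^T A is invertible).
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# The residual b - A x_hat is orthogonal to every column of A.
residual = b - A @ x_hat
```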

Nullspace matrix N.
The columns of N are the n - r special solutions to As = 0.

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Pseudoinverse A^+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and A A^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).
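Those projection properties can be checked numerically; a sketch with NumPy's `np.linalg.pinv` on a rank-1 matrix chosen for the example:

```python
import numpy as np

# A rank-1 matrix: no ordinary inverse exists (det A = 0).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

A_plus = np.linalg.pinv(A)

# A+ A and A A+ project onto the row space and column space of A.
P_row = A_plus @ A
P_col = A @ A_plus
```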

Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal distribution for randn.

Rayleigh quotient q(x) = x^T A x / x^T x for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
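A quick numerical illustration (the 2-by-2 symmetric matrix below is made up; its eigenvalues are 1 and 3):

```python
import numpy as np

def rayleigh(A, x):
    """Rayleigh quotient q(x) = x^T A x / x^T x."""
    return (x @ A @ x) / (x @ x)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric; eigenvalues 1 and 3
lam = np.linalg.eigvalsh(A)       # sorted ascending

# q(x) stays between lambda_min and lambda_max for any nonzero x...
q_generic = rayleigh(A, np.array([5.0, -2.0]))

# ...and reaches lambda_max at the eigenvector [1, 1].
q_at_eigvec = rayleigh(A, np.array([1.0, 1.0]))
```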

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Singular Value Decomposition (SVD).
A = U Σ V^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with A v_i = σ_i u_i for each singular value σ_i > 0. The last columns are orthonormal bases of the nullspaces.
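The relation A v_i = σ_i u_i can be verified directly with NumPy's `np.linalg.svd` (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# A = U @ diag(s) @ Vt, singular values s sorted in decreasing order.
U, s, Vt = np.linalg.svd(A)

# Row i of Vt is v_i, column i of U is u_i, so A v_i = s_i u_i.
Av1 = A @ Vt[0]
```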

Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^-1 is also symmetric.

Tridiagonal matrix T: t_ij = 0 if |i - j| > 1.
T^-1 has rank 1 above and below the diagonal.