- 13.8.1: Solve for u(x, t) using Laplace transforms: ∂²u/∂t² = c² ∂²u/∂x², u(x, ...
- 13.8.2: Solve for u(x, t) using Laplace transforms: ∂²u/∂t² = c² ∂²u/∂x², u(x, ...
- 13.8.3: Solve for u(x, t) using Laplace transforms: ∂u/∂t = k ∂²u/∂x² subject t...
- 13.8.4: Consider ∂²u/∂t² = c² ∂²u/∂x² + sin(ω₀t), u(x, 0) = 0, u(0, t) = 0, ∂u/∂t(x, ...
Solutions for Chapter 13.8: Laplace Transform Solution of Partial Differential Equations
Full solutions for Applied Partial Differential Equations with Fourier Series and Boundary Value Problems | 5th Edition
Complex conjugate z̄ = a − ib for any complex number z = a + ib. Then z z̄ = |z|².
Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra −ℓij in the i, j entry (i ≠ j). Then Eij A subtracts ℓij times row j of A from row i.
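A minimal numpy sketch of this entry (the 3×3 matrix A and the multiplier are my own example, not from the text): E21 is the identity with −2 in the (2, 1) entry, and E21 A subtracts 2 times row 1 of A from row 2.

```python
import numpy as np

# Example matrix of my own choosing (not from the text).
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

def elimination_matrix(n, i, j, l):
    """Identity matrix with an extra -l placed in entry (i, j), i != j."""
    E = np.eye(n)
    E[i, j] = -l
    return E

E21 = elimination_matrix(3, 1, 0, 2.0)   # subtract 2 * row 0 from row 1
B = E21 @ A
print(B[1])   # [0. 1. 1.]
```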
Fibonacci numbers 0, 1, 1, 2, 3, 5, ... satisfy Fₙ = Fₙ₋₁ + Fₙ₋₂ = (λ₁ⁿ − λ₂ⁿ)/(λ₁ − λ₂). Growth rate λ₁ = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [[1, 1], [1, 0]].
Four fundamental subspaces C(A), N(A), C(Aᵀ), N(Aᵀ).
Use Aᴴ (conjugate transpose) for complex A.
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries −1 and 1 in columns i and j.
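A small sketch of this construction (the three-edge example graph is my own, not from the text): each row carries −1 at the edge's tail node and +1 at its head, so every row sums to zero.

```python
import numpy as np

# Example directed graph of my own choosing: edges (i, j) go node i -> node j.
edges = [(0, 1), (0, 2), (1, 2)]
n_nodes = 3

# m by n edge-node incidence matrix: -1 in column i, +1 in column j per edge.
A = np.zeros((len(edges), n_nodes))
for row, (i, j) in enumerate(edges):
    A[row, i] = -1.0
    A[row, j] = 1.0

print(A)
# Rows sum to zero, so the all-ones vector lies in the nullspace.
print(A @ np.ones(n_nodes))   # [0. 0. 0.]
```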
Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., Aʲ⁻¹b. Numerical methods approximate A⁻¹b by xⱼ with residual b − Axⱼ in this subspace. A good basis for Kⱼ requires only multiplication by A at each step.
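A hedged sketch of building such a basis (the random A, b and the plain Gram-Schmidt loop are my own choices, not the text's): each step uses only one multiplication by A.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))   # example matrix and vector of my own choosing
b = rng.standard_normal(5)

def krylov_basis(A, b, j):
    """Orthonormal basis of span{b, Ab, ..., A^(j-1) b} via Gram-Schmidt."""
    Q = [b / np.linalg.norm(b)]
    for _ in range(j - 1):
        v = A @ Q[-1]                 # one multiplication by A per step
        for q in Q:                   # subtract components along earlier basis
            v -= (q @ v) * q
        Q.append(v / np.linalg.norm(v))
    return np.column_stack(Q)

Q = krylov_basis(A, b, 3)
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: columns are orthonormal
```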
Left nullspace N(Aᵀ).
Nullspace of Aᵀ = "left nullspace" of A because yᵀA = 0ᵀ.
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
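A sketch with an assumed example (my own, not from the text): for A = [[5, 1], [0, 5]], λ = 5 is a double root, so AM = 2, but the eigenspace is a single line, so GM = 1.

```python
import numpy as np

A = np.array([[5.0, 1.0], [0.0, 5.0]])   # example defective matrix (my own)
lam = 5.0

# AM: how many times lam appears among the roots of det(A - lam*I) = 0.
AM = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))

# GM: dimension of the eigenspace N(A - lam*I) = n - rank(A - lam*I).
GM = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))

print(AM, GM)   # 2 1
```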
Nilpotent matrix N.
Some power of N is the zero matrix, Nᵏ = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
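A quick check of the triangular example (the specific entries are my own choice): a strictly upper triangular 3×3 matrix satisfies N³ = 0 and has only the eigenvalue 0.

```python
import numpy as np

# Strictly triangular (zero diagonal) example of my own choosing.
N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

print(np.linalg.matrix_power(N, 3))            # the 3x3 zero matrix
print(np.allclose(np.linalg.eigvals(N), 0))    # True: only eigenvalue is 0
```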
Particular solution xₚ.
Any solution to Ax = b; often xₚ has free variables = 0.
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
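A sketch using sympy's row reduction (my choice of tool; the 3×4 matrix is my own example): `rref()` reports which columns contain pivots, and those columns of A are a basis for the column space.

```python
import sympy as sp

# Example matrix of my own choosing (rank 2).
A = sp.Matrix([[1, 2, 2, 4],
               [1, 2, 3, 5],
               [2, 4, 5, 9]])

R, pivots = A.rref()   # reduced row echelon form and pivot column indices
print(pivots)          # (0, 2): columns 0 and 2 contain pivots
# Columns 0 and 2 of the original A form a basis for C(A).
```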
Pseudoinverse A⁺ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A⁺) = N(Aᵀ). A⁺A and AA⁺ are the projection matrices onto the row space and column space. Rank(A⁺) = rank(A).
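These properties can be checked with numpy's built-in pseudoinverse (the rank-1 example matrix is my own):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])          # rank-1 example of my own choosing
Ap = np.linalg.pinv(A)              # Moore-Penrose pseudoinverse A+

P_row = Ap @ A                      # projection onto the row space
P_col = A @ Ap                      # projection onto the column space
print(np.allclose(P_row @ P_row, P_row))   # True: projections are idempotent
print(np.allclose(P_col @ P_col, P_col))   # True
print(np.linalg.matrix_rank(Ap) == np.linalg.matrix_rank(A))   # True
```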
Rank one matrix A = uvᵀ ≠ 0.
Column and row spaces = lines cu and cv.
Rank r(A)
= number of pivots = dimension of column space = dimension of row space.
Right inverse A+.
If A has full row rank m, then A⁺ = Aᵀ(AAᵀ)⁻¹ has AA⁺ = Iₘ.
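A direct check of this formula (the 2×3 full-row-rank matrix is my own example):

```python
import numpy as np

# Full row rank example of my own choosing: m = 2 independent rows.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Right inverse A+ = A^T (A A^T)^(-1); A A^T is 2x2 and invertible here.
Aplus = A.T @ np.linalg.inv(A @ A.T)
print(np.allclose(A @ Aplus, np.eye(2)))   # True: A A+ = I_m
```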
Saddle point of f(x₁, ..., xₙ).
A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂xᵢ∂xⱼ = Hessian matrix) is indefinite.
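A sketch with an assumed example f(x, y) = x² − y² (my own, not from the text): at the origin the gradient vanishes and the Hessian has one positive and one negative eigenvalue, so it is indefinite and the origin is a saddle point.

```python
import numpy as np

# Hessian of f(x, y) = x**2 - y**2 at the origin (constant here).
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

eigs = np.linalg.eigvalsh(H)               # eigenvalues in ascending order
print(eigs)                                # [-2.  2.]
print(eigs.min() < 0 < eigs.max())         # True: indefinite => saddle point
```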
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
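A hedged sketch using scipy's `linprog` (my choice of solver, not the text's): minimize cᵀx subject to Ax = b and x ≥ 0 on a tiny problem whose feasible set is the segment x₁ + x₂ = 1, x ≥ 0, with corners (1, 0) and (0, 1).

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])            # cost vector (example of my own)
A_eq = np.array([[1.0, 1.0]])       # constraint x1 + x2 = 1
b_eq = np.array([1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None), (0, None)])
print(res.x)      # the corner (1, 0): cost 1 there beats cost 2 at (0, 1)
print(res.fun)    # minimum cost 1.0
```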
Span of v₁, ..., vₘ.
Combinations of v₁, ..., vₘ fill the space. The columns of A span C(A)!
Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.
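The dimension bookkeeping for a sum of subspaces can be checked numerically (the bases below are my own example): dim(V + W) is the rank of the stacked basis vectors, and dim(V ∩ W) = dim V + dim W − dim(V + W).

```python
import numpy as np

# Example bases of my own choosing, stored as rows: two planes in R^3.
V = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
W = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

dim_sum = np.linalg.matrix_rank(np.vstack([V, W]))   # dim(V + W)
dim_int = V.shape[0] + W.shape[0] - dim_sum          # dim(V intersect W)
print(dim_sum, dim_int)   # 3 1: V + W = R^3, and V intersect W is a line
```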
Wavelets wⱼₖ(t).
Stretch and shift the time axis to create wⱼₖ(t) = w₀₀(2ʲt − k).
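A sketch assuming the Haar mother wavelet for w₀₀ (the text does not name one): wⱼₖ(t) = w₀₀(2ʲt − k) compresses the time axis by 2ʲ and shifts by k.

```python
import numpy as np

def w00(t):
    """Haar mother wavelet (my assumed w00): +1 on [0, 1/2), -1 on [1/2, 1)."""
    t = np.asarray(t, dtype=float)
    return np.where((0 <= t) & (t < 0.5), 1.0,
                    np.where((0.5 <= t) & (t < 1.0), -1.0, 0.0))

def wjk(t, j, k):
    """Stretched and shifted wavelet w_jk(t) = w00(2**j * t - k)."""
    return w00(2.0**j * t - k)

# w_11(t) = w00(2t - 1) lives on [1/2, 1):
print(wjk(0.6, 1, 1))   # 1.0  (2*0.6 - 1 = 0.2 is in [0, 1/2))
print(wjk(0.9, 1, 1))   # -1.0 (2*0.9 - 1 = 0.8 is in [1/2, 1))
```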