- 6.2.1: (a) Show that the truncation error for the centered difference appr...
- 6.2.2: Derive (6.2.15) by twice using the centered difference approximatio...
- 6.2.3: Derive the truncation error for the centered difference approximati...
- 6.2.4: Suppose that we did not know (6.2.15) but thought it possible to ap...
- 6.2.5: Derive the most accurate five-point approximation for f (x0) involv...
- 6.2.6: Derive an approximation for ∂²u/∂x∂y whose truncation error is O((Δx)²)....
- 6.2.7: Derive an approximation for ∂²u/∂x∂y whose truncation error is O((Δx)²)....
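The problems above all turn on the centered difference approximation and its truncation error. As an illustrative sketch (my own check in NumPy, not part of the book's solutions), the following confirms numerically that the centered difference for f'(x) is second-order accurate: halving h should divide the error by roughly 4.

```python
import numpy as np

def centered_diff(f, x, h):
    """Centered difference approximation to f'(x): (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f, x0 = np.sin, 1.0          # test function and evaluation point (my choice)
exact = np.cos(x0)           # exact derivative f'(x0)

# For a second-order method the error behaves like C*h^2,
# so halving h should divide the error by about 4.
err_h  = abs(centered_diff(f, x0, 1e-2) - exact)
err_h2 = abs(centered_diff(f, x0, 5e-3) - exact)
print(err_h / err_h2)        # ratio near 4
```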
Solutions for Chapter 6.2: Finite Difference Numerical Methods for Partial Differential Equations
Full solutions for Applied Partial Differential Equations with Fourier Series and Boundary Value Problems | 5th Edition
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
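As a small sketch of this rule (example matrices and the 2x2 partition are my own), block multiplication treats each block like a scalar entry, and the result agrees with the ordinary product when the block shapes conform:

```python
import numpy as np

A = np.arange(16, dtype=float).reshape(4, 4)
B = np.arange(16, 32, dtype=float).reshape(4, 4)

# Partition each matrix into four 2x2 blocks.
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B12 = B[:2, :2], B[:2, 2:]
B21, B22 = B[2:, :2], B[2:, 2:]

# Multiply block-by-block, exactly like a 2x2 scalar multiplication.
top    = np.hstack([A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22])
bottom = np.hstack([A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22])
blockwise = np.vstack([top, bottom])

print(np.allclose(blockwise, A @ B))  # blockwise product equals AB
```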
Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).
Determinant |A| = det(A).
Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A| |B| and |A^{-1}| = 1/|A|.
Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of R^m. Full rank means full column rank or full row rank.
Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
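A minimal classical Gram-Schmidt sketch (my own illustration; the example matrix is arbitrary): the columns of A become the orthonormal columns of Q, R records the combinations, and the convention diag(R) > 0 falls out of using the norm as the diagonal entry.

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt: A = QR with orthonormal Q, upper-triangular R."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):                 # subtract projections onto earlier q_i
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)        # positive by convention: diag(R) > 0
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1., 1.], [1., 0.], [0., 1.]])
Q, R = gram_schmidt(A)
print(np.allclose(Q @ R, A))               # A is recovered
print(np.allclose(Q.T @ Q, np.eye(2)))     # columns of Q are orthonormal
```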
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = ∫₀¹ x^{i−1} x^{j−1} dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
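A quick sketch of this entry (my own construction from the entry formula; n = 8 is an arbitrary choice): building hilb(n) directly from H_ij = 1/(i + j − 1) shows it is positive definite yet severely ill-conditioned even for small n.

```python
import numpy as np

def hilb(n):
    """Hilbert matrix: H[i, j] = 1/(i + j - 1) with 1-based indices."""
    i, j = np.indices((n, n)) + 1          # 1-based row/column index grids
    return 1.0 / (i + j - 1)

H = hilb(8)
print(np.all(np.linalg.eigvalsh(H) > 0))   # positive definite
print(np.linalg.cond(H) > 1e9)             # huge condition number: ill-conditioned
```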
Jordan form J = M^{-1} A M.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k where N_k has 1's on diagonal 1. Each block has one eigenvalue λ_k and one eigenvector.
Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.
Matrix multiplication AB.
The i, j entry of AB is (row i of A)·(column j of B) = Σ a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
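The least familiar of these views is usually "columns times rows". As a sketch (example matrices are my own), AB is a sum of rank-1 outer products, one per column of A paired with the matching row of B:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.], [5., 6.]])   # 3x2
B = np.array([[7., 8., 9.], [0., 1., 2.]])     # 2x3

# "Columns times rows": AB = sum over k of (column k of A)(row k of B).
rank_one_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
print(np.allclose(rank_one_sum, A @ B))        # matches the usual product
```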
Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
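A tiny sketch of the triangular example (my own instance): a strictly upper triangular 3x3 matrix is not zero at the second power but vanishes at the third, so N^3 = 0.

```python
import numpy as np

# Strictly upper triangular (zero diagonal), hence nilpotent.
N = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

print(np.any(np.linalg.matrix_power(N, 2)))   # True: N^2 is not yet zero
print(np.any(np.linalg.matrix_power(N, 3)))   # False: N^3 = 0
```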
Nullspace N(A)
= All solutions to Ax = 0. Dimension n − r = (# columns) − rank.
Projection p = a(a^T b / a^T a) onto the line through a.
P = a a^T / a^T a has rank 1.
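A short sketch of both formulas (vectors a and b are my own examples): the projection matrix P reproduces p = a(a^T b / a^T a), has rank 1, and satisfies P² = P as every projection must.

```python
import numpy as np

a = np.array([1., 2., 2.])
b = np.array([3., 0., 3.])

p = a * (a @ b) / (a @ a)          # projection of b onto the line through a
P = np.outer(a, a) / (a @ a)       # rank-1 projection matrix a a^T / a^T a

print(np.allclose(P @ b, p))       # P applied to b gives the projection p
print(np.linalg.matrix_rank(P))    # rank 1
print(np.allclose(P @ P, P))       # projecting twice changes nothing
```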
Rank r (A)
= number of pivots = dimension of column space = dimension of row space.
Right inverse A^+.
If A has full row rank m, then A^+ = A^T (A A^T)^{-1} has A A^+ = I_m.
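A one-liner sketch of this formula (the 2x3 full-row-rank example is mine): with r = m, the product A A^T is invertible and A^+ = A^T (A A^T)^{-1} is a genuine right inverse.

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 2.]])                    # 2x3 with full row rank m = 2

A_plus = A.T @ np.linalg.inv(A @ A.T)            # right inverse A^T (A A^T)^{-1}
print(np.allclose(A @ A_plus, np.eye(2)))        # A A^+ = I_m
```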
Row space C(A^T) = all combinations of rows of A.
Column vectors by convention.
Saddle point of f(x_1, ..., x_n).
A point where the first derivatives of f are zero and the second-derivative matrix (∂²f/∂x_i ∂x_j = Hessian matrix) is indefinite.
Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.
Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.
Tridiagonal matrix T: t_ij = 0 if |i − j| > 1.
T^{-1} has rank 1 above and below diagonal.
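A numerical sketch of the rank-1 claim (my own example, using the second-difference matrix as a convenient invertible tridiagonal T): any submatrix of T^{-1} taken entirely from above, or entirely from below, the diagonal has rank 1.

```python
import numpy as np

n = 6
# Second-difference matrix: tridiagonal with 2 on the diagonal, -1 off it.
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Tinv = np.linalg.inv(T)

upper_block = Tinv[:3, 3:]          # block strictly above the diagonal
lower_block = Tinv[3:, :3]          # block strictly below the diagonal
print(np.linalg.matrix_rank(upper_block))   # rank 1
print(np.linalg.matrix_rank(lower_block))   # rank 1
```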
Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t − k).
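As a concrete sketch of the stretch-and-shift rule (my choice of the Haar function as the mother wavelet w_00; the glossary entry does not fix one), w_jk compresses the time axis by 2^j and shifts by k:

```python
import numpy as np

def w00(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def w(j, k, t):
    """Stretched and shifted wavelet: w_jk(t) = w00(2^j t - k)."""
    return w00(2.0**j * t - k)

# w(1, 1, t) = w00(2t - 1) is supported on [1/2, 1).
print(w(1, 1, 0.6), w(1, 1, 0.9), w(1, 1, 0.25))
```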