6.2.1: (a) Show that the truncation error for the centered difference appr...
6.2.2: Derive (6.2.15) by twice using the centered difference approximatio...
6.2.3: Derive the truncation error for the centered difference approximati...
6.2.4: Suppose that we did not know (6.2.15) but thought it possible to ap...
6.2.5: Derive the most accurate five-point approximation for f(x0) involv...
6.2.6: Derive an approximation for ∂²u/∂x∂y whose truncation error is O(Δx)²...
6.2.7: Derive an approximation for ∂²u/∂x∂y whose truncation error is O(Δx)²...
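The problems above all revolve around centered difference approximations such as f'(x0) ≈ [f(x0 + Δx) − f(x0 − Δx)]/(2Δx) and f''(x0) ≈ [f(x0 + Δx) − 2f(x0) + f(x0 − Δx)]/(Δx)², both with O(Δx²) truncation error. The following is a minimal numerical sketch of that error behavior (not one of the book's solutions; the test function and step sizes are arbitrary choices):

```python
import numpy as np

f = np.sin                          # arbitrary smooth test function
x0 = 1.0
exact_d1, exact_d2 = np.cos(x0), -np.sin(x0)

for dx in [0.1, 0.05, 0.025]:
    d1 = (f(x0 + dx) - f(x0 - dx)) / (2 * dx)            # centered first derivative
    d2 = (f(x0 + dx) - 2 * f(x0) + f(x0 - dx)) / dx**2   # centered second derivative
    print(f"dx={dx:6.3f}  err(f')={abs(d1 - exact_d1):.2e}  err(f'')={abs(d2 - exact_d2):.2e}")

# Halving dx cuts each error by roughly a factor of 4, consistent with O(dx^2).
```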
Solutions for Chapter 6.2: Finite Difference Numerical Methods for Partial Differential Equations
Full solutions for Applied Partial Differential Equations with Fourier Series and Boundary Value Problems, 5th Edition
ISBN: 9780321797056

Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
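A quick numerical illustration of block multiplication with conformable 2x2 blocks (an illustrative sketch; the matrices are arbitrary):

```python
import numpy as np

A = np.arange(16.0).reshape(4, 4)
B = np.eye(4) + np.ones((4, 4))

# Partition both matrices into 2x2 blocks.
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B12 = B[:2, :2], B[:2, 2:]
B21, B22 = B[2:, :2], B[2:, 2:]

# Block multiplication: each block of AB follows the usual row-times-column rule.
top = np.hstack([A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22])
bot = np.hstack([A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22])
assert np.allclose(np.vstack([top, bot]), A @ B)
```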

Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).

Determinant |A| = det(A).
Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A||B| and |Aᵀ| = |A|.
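A numerical check of those determinant rules (a sketch with arbitrary random matrices, not part of the glossary):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

assert np.isclose(np.linalg.det(np.eye(3)), 1.0)               # det I = 1
P = A.copy()
P[[0, 1]] = P[[1, 0]]                                          # swap two rows
assert np.isclose(np.linalg.det(P), -np.linalg.det(A))         # sign reversal
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))         # |AB| = |A||B|
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))        # |A^T| = |A|
```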

Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of R^m. Full rank means full column rank or full row rank.

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
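A small sketch of the A = QR relationship using NumPy's built-in QR factorization; the sign flip is applied by hand because numpy.linalg.qr does not guarantee diag(R) > 0:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))          # columns independent with probability 1

Q, R = np.linalg.qr(A)                   # reduced QR: Q is 5x3, R is 3x3
signs = np.sign(np.diag(R))
Q, R = Q * signs, signs[:, None] * R     # flip signs so diag(R) > 0

assert np.allclose(Q.T @ Q, np.eye(3))   # orthonormal columns
assert np.allclose(np.triu(R), R)        # R is upper triangular
assert np.allclose(Q @ R, A)             # A = QR
```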

Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
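A sketch of that ill-conditioning, building hilb(n) directly from the entry formula (SciPy also provides a Hilbert matrix constructor, but plain NumPy keeps this self-contained):

```python
import numpy as np

def hilb(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)              # H[i, j] = 1/(i + j - 1) with 1-based i, j

for n in [4, 8, 12]:
    H = hilb(n)
    eig = np.linalg.eigvalsh(H)           # symmetric, so eigvalsh is appropriate
    print(f"n={n:2d}  lambda_min={eig[0]:.3e}  cond={np.linalg.cond(H):.3e}")

# lambda_min shrinks rapidly and the condition number explodes as n grows.
```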

Jordan form J = M⁻¹AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k, where N_k has 1's on diagonal 1. Each block has one eigenvalue λ_k and one eigenvector.

Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.
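A numeric test for dependence: n vectors are dependent exactly when the matrix with those columns has rank below n (sketch with made-up vectors):

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([4.0, 5.0, 6.0])
v3 = v1 + v2                              # deliberately dependent: 1*v1 + 1*v2 - 1*v3 = 0

V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V))           # 2 < 3, so the vectors are linearly dependent
```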

Matrix multiplication AB.
The i, j entry of AB is (row i of A) · (column j of B) = Σ a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
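The four equivalent descriptions can be checked directly against each other (an illustrative sketch with arbitrary matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((4, 2))

entrywise     = np.array([[A[i, :] @ B[:, j] for j in range(2)] for i in range(3)])
by_columns    = np.column_stack([A @ B[:, j] for j in range(2)])
by_rows       = np.vstack([A[i, :] @ B for i in range(3)])
col_times_row = sum(np.outer(A[:, k], B[k, :]) for k in range(4))

for M in (entrywise, by_columns, by_rows, col_times_row):
    assert np.allclose(M, A @ B)          # all four agree with the built-in product
```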

Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
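A small example, the strictly upper triangular shift matrix (sketch):

```python
import numpy as np

N = np.diag([1.0, 1.0], k=1)              # 3x3, 1's on the superdiagonal, zero diagonal
print(np.linalg.matrix_power(N, 3))       # N^3 = 0, so N is nilpotent
print(np.linalg.eigvals(N))               # all eigenvalues are 0
```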

Nullspace N(A)
= All solutions to Ax = 0. Dimension n − r = (# columns) − rank.
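The dimension count n − r can be verified numerically (sketch using SciPy's null_space on an arbitrary rank-deficient matrix):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])            # rank 1, n = 3 columns

Z = null_space(A)                          # columns of Z: orthonormal basis of N(A)
r = np.linalg.matrix_rank(A)
print(Z.shape[1], 3 - r)                   # both are 2: dim N(A) = n - r
assert np.allclose(A @ Z, 0)
```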

Projection p = a(aᵀb/aᵀa) onto the line through a.
P = aaᵀ/aᵀa has rank 1.
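A sketch of projection onto the line through a (the vectors are arbitrary choices):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 1.0, 0.0])

p = a * (a @ b) / (a @ a)                  # p = a (a^T b / a^T a)
P = np.outer(a, a) / (a @ a)               # projection matrix P = a a^T / a^T a

assert np.allclose(P @ b, p)
assert np.linalg.matrix_rank(P) == 1       # rank-one projection
assert np.isclose((b - p) @ a, 0.0)        # error b - p is perpendicular to a
```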

Rank r(A)
= number of pivots = dimension of column space = dimension of row space.

Right inverse A⁺.
If A has full row rank m, then A⁺ = Aᵀ(AAᵀ)⁻¹ has AA⁺ = I_m.
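A sketch of the right inverse for a full-row-rank matrix; in this case it coincides with the pseudoinverse returned by numpy.linalg.pinv:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])                    # full row rank, m = 2

A_plus = A.T @ np.linalg.inv(A @ A.T)              # A+ = A^T (A A^T)^{-1}
assert np.allclose(A @ A_plus, np.eye(2))          # A A+ = I_m
assert np.allclose(A_plus, np.linalg.pinv(A))      # agrees with the pseudoinverse here
```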

Row space C(Aᵀ) = all combinations of rows of A.
Column vectors by convention.

Saddle point of f(x_1, ..., x_n).
A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂x_i ∂x_j = Hessian matrix) is indefinite.
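A sketch with f(x, y) = x² − y², whose Hessian is indefinite at the origin (a standard example, not from the glossary):

```python
import numpy as np

# f(x, y) = x^2 - y^2 has gradient (2x, -2y), which vanishes at (0, 0).
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])                 # constant Hessian matrix of f

print(np.linalg.eigvalsh(H))                # one negative and one positive eigenvalue:
                                            # H is indefinite, so (0, 0) is a saddle point
```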

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.

Symmetric factorizations A = LDLᵀ and A = QΛQᵀ.
Signs in Λ = signs in D.

Tridiagonal matrix T: t_ij = 0 if |i − j| > 1.
T⁻¹ has rank 1 above and below the diagonal.
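A sketch of that rank-1 structure in T⁻¹, using the second-difference tridiagonal matrix as an arbitrary example:

```python
import numpy as np

n = 5
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal: t_ij = 0 if |i-j| > 1
Tinv = np.linalg.inv(T)

# Any block of T^{-1} taken strictly above the diagonal has rank 1 (same below, by symmetry).
upper_block = Tinv[:2, 2:]
print(np.linalg.matrix_rank(upper_block))               # 1
```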

Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t − k).
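A sketch using the Haar mother wavelet as w_00 (the choice of Haar is an assumption; the entry only defines the rescaling), showing how 2^j t − k compresses and shifts it:

```python
import numpy as np

def w00(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    t = np.asarray(t)
    return np.where((0 <= t) & (t < 0.5), 1.0,
                    np.where((0.5 <= t) & (t < 1), -1.0, 0.0))

def w(j, k, t):
    return w00(2**j * t - k)               # w_jk(t) = w00(2^j t - k)

t = np.linspace(0, 1, 9)
print(w(0, 0, t))                          # the mother wavelet itself
print(w(2, 1, t))                          # compressed to width 1/4, shifted to [1/4, 1/2)
```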