Affine transformation T.
Tv = Av + v0 = linear transformation plus shift.
Big formula for n by n determinants.
Det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A, rows in order 1, ..., n and column order given by a permutation P. Each of the n! P's has a + or - sign.
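A brief sketch of this n!-term formula in plain Python (the function name is mine; at O(n!) cost this is illustrative only, not a practical way to compute determinants):

```python
from itertools import permutations

def det_big_formula(A):
    """Det(A) as a sum of n! signed products, one entry from each row and column."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # Sign of the permutation: count inversions (pairs out of order).
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        product = 1
        for row in range(n):
            product *= A[row][perm[row]]   # rows in order, columns given by perm
        total += sign * product
    return total

print(det_big_formula([[1, 2], [3, 4]]))  # -> -2
```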
Cross product u × v in R3:
Vector perpendicular to u and v, length ‖u‖ ‖v‖ |sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
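The same recipe in plain Python (helper names `cross` and `dot` are mine), checking perpendicularity:

```python
def cross(u, v):
    """u x v in R^3, from the 'determinant' of [i j k; u1 u2 u3; v1 v2 v3]."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

u, v = [1, 0, 0], [0, 1, 0]
w = cross(u, v)
print(w)                      # [0, 0, 1]
print(dot(w, u), dot(w, v))   # 0 0 -- perpendicular to both u and v
```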
Cyclic shift S.
Permutation with s21 = 1, s32 = 1, ..., finally s1n = 1. Its eigenvalues are the nth roots e^(2πik/n) of 1; eigenvectors are the columns of the Fourier matrix F.
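A quick NumPy check of the eigenvalue claim (building S with `np.roll` is my choice of construction):

```python
import numpy as np

n = 4
# Cyclic shift: s21 = s32 = ... = 1 and s_{1n} = 1 (each row of I shifted down).
S = np.roll(np.eye(n), 1, axis=0)
eigvals = np.linalg.eigvals(S)

# Every eigenvalue is an nth root of 1: |lambda| = 1 and lambda^n = 1.
print(np.allclose(np.abs(eigvals), 1))   # True
print(np.allclose(eigvals**n, 1))        # True
```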
Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
Exponential e^(At) = I + At + (At)^2/2! + ...
has derivative Ae^(At); e^(At) u(0) solves u' = Au.
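A sketch of the truncated series in NumPy (the function name and nilpotent example are mine; with A^2 = 0 the series stops, so e^(At) = I + At exactly):

```python
import numpy as np

def expm_series(A, t, terms=20):
    """Truncated series e^{At} = I + At + (At)^2/2! + ... (sketch, not production code)."""
    At = A * t
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ At / k          # (At)^k / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])            # A^2 = 0, so e^{At} = I + At
E = expm_series(A, 3.0)
print(E)                              # [[1, 3], [0, 1]]
# u(t) = e^{At} u(0) then solves u' = Au.
```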
Fourier matrix F.
Entries F_jk = e^(2πijk/n) give orthogonal columns: F^H F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform: y_j = Σ c_k e^(2πijk/n).
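A small pure-Python check that the columns are orthogonal, i.e. F^H F = nI (variable names are mine):

```python
import cmath

n = 4
# Fourier matrix entries F[j][k] = e^{2*pi*i*j*k/n}.
F = [[cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)] for j in range(n)]

# Conjugate-transpose times F: inner products of columns j and k.
G = [[sum(F[m][j].conjugate() * F[m][k] for m in range(n)) for k in range(n)]
     for j in range(n)]
print([[round(abs(x)) for x in row] for row in G])  # n on the diagonal, 0 elsewhere
```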
Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of Rm. Full rank means full column rank or full row rank.
Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
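A minimal classical Gram-Schmidt sketch in NumPy (function name and example matrix are mine; it assumes independent columns, and modified Gram-Schmidt is preferred numerically):

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt A = QR: each q_j combines the first j columns of A."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]    # component along earlier q_i
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)        # convention: diag(R) > 0
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q, R = gram_schmidt(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(2)))  # True True
```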
Hypercube matrix P_L.
Row n + 1 counts corners, edges, faces, ... of a cube in Rn.
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j.
Multiplication Ax.
Ax = x1(column 1) + ... + xn(column n) = combination of columns.
Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
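Checking the example class with NumPy (the particular strictly upper triangular N is my choice):

```python
import numpy as np

# Strictly upper triangular => nilpotent: some power is the zero matrix.
N = np.array([[0, 1, 2],
              [0, 0, 3],
              [0, 0, 0]])
print(np.linalg.matrix_power(N, 3))   # zero matrix: N^3 = 0
print(np.linalg.eigvals(N))           # all eigenvalues are 0 (repeated n times)
```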
Normal matrix N.
If N N^T = N^T N, then N has orthonormal (complex) eigenvectors.
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S, then P = A(A^T A)^(-1) A^T.
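The formula P = A(A^T A)^(-1) A^T verified in NumPy for a small subspace S (the basis matrix A and vector b are my examples):

```python
import numpy as np

# Columns of A are a basis for S; P projects onto S.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

b = np.array([1.0, 2.0, 0.0])
p = P @ b                  # closest point to b in S
e = b - p                  # error

print(np.allclose(P @ P, P), np.allclose(P, P.T))  # True True: P^2 = P = P^T
print(np.allclose(A.T @ e, 0))                     # True: e is perpendicular to S
```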
Projection p = a(a^T b / a^T a) onto the line through a.
P = aa^T / a^T a has rank 1.
Pseudoinverse A^+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+A and AA^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).
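NumPy's `pinv` illustrates these properties (the rank-1 example matrix is mine):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])             # rank 1
A_plus = np.linalg.pinv(A)

print(np.linalg.matrix_rank(A_plus))   # 1: rank(A+) = rank(A)
print(np.allclose(A @ A_plus @ A, A))  # True: A+ "inverts" A on its column space
# A+ A is the symmetric projection onto the row space (idempotent and symmetric).
print(np.allclose((A_plus @ A) @ (A_plus @ A), A_plus @ A),
      np.allclose(A_plus @ A, (A_plus @ A).T))
```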
Standard basis for Rn.
Columns of the n by n identity matrix (written i, j, k in R3).
Tridiagonal matrix T: t_ij = 0 if |i - j| > 1.
T^(-1) has rank 1 above and below the diagonal.
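A NumPy check of this rank-1 structure, using the second-difference matrix as the tridiagonal example (my choice): any 2 by 2 minor of T^(-1) taken strictly above the diagonal vanishes.

```python
import numpy as np

# Second-difference matrix: tridiagonal with 2 on the diagonal, -1 off it.
n = 5
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Tinv = np.linalg.inv(T)

# Above the diagonal, entries of T^{-1} have the separable form u_i * v_j,
# so every 2x2 minor taken from that region has zero determinant (rank 1).
M = Tinv[np.ix_([0, 1], [2, 3])]       # rows 0,1 and columns 2,3: above the diagonal
print(abs(np.linalg.det(M)) < 1e-12)   # True
```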
Unitary matrix U^H = Ū^T = U^(-1).
Orthonormal columns (complex analog of Q).