Numerical Analysis 10th Edition - Solutions by Chapter

- Chapter 1.1: Review of Calculus
- Chapter 1.2: Round-off Errors and Computer Arithmetic
- Chapter 1.3: Algorithms and Convergence
- Chapter 2.1: The Bisection Method
- Chapter 2.2: Fixed-Point Iteration
- Chapter 2.3: Newton's Method and Its Extensions
- Chapter 2.4: Error Analysis for Iterative Methods
- Chapter 2.5: Accelerating Convergence
- Chapter 2.6: Zeros of Polynomials and Muller's Method
- Chapter 3.1: Interpolation and the Lagrange Polynomial
- Chapter 3.2: Data Approximation and Neville's Method
- Chapter 3.3: Divided Differences
- Chapter 3.4: Hermite Interpolation
- Chapter 3.5: Cubic Spline Interpolation
- Chapter 3.6: Parametric Curves
- Chapter 4.1: Numerical Differentiation
- Chapter 4.2: Richardson's Extrapolation
- Chapter 4.3: Elements of Numerical Integration
- Chapter 4.4: Composite Numerical Integration
- Chapter 4.5: Romberg Integration
- Chapter 4.6: Adaptive Quadrature Methods
- Chapter 4.7: Gaussian Quadrature
- Chapter 4.8: Multiple Integrals
- Chapter 4.9: Improper Integrals
- Chapter 4.10: Numerical Software and Chapter Review
- Chapter 5.1: The Elementary Theory of Initial-Value Problems
- Chapter 5.2: Euler's Method
- Chapter 5.3: Higher-Order Taylor Methods
- Chapter 5.4: Runge-Kutta Methods
- Chapter 5.5: Error Control and the Runge-Kutta-Fehlberg Method
- Chapter 5.6: Multistep Methods
- Chapter 5.7: Variable Step-Size Multistep Methods
- Chapter 5.8: Extrapolation Methods
- Chapter 5.9: Higher-Order Equations and Systems of Differential Equations
- Chapter 5.10: Stability
- Chapter 5.11: Stiff Differential Equations
- Chapter 5.12: Numerical Software
- Chapter 6.1: Linear Systems of Equations
- Chapter 6.2: Pivoting Strategies
- Chapter 6.3: Linear Algebra and Matrix Inversion
- Chapter 6.4: The Determinant of a Matrix
- Chapter 6.5: Matrix Factorization
- Chapter 6.6: Special Types of Matrices
- Chapter 6.7: Numerical Software
- Chapter 7.1: Norms of Vectors and Matrices
- Chapter 7.2: Eigenvalues and Eigenvectors
- Chapter 7.3: The Jacobi and Gauss-Seidel Iterative Techniques
- Chapter 7.4: Relaxation Techniques for Solving Linear Systems
- Chapter 7.5: Error Bounds and Iterative Refinement
- Chapter 7.6: The Conjugate Gradient Method
- Chapter 8.1: Discrete Least Squares Approximation
- Chapter 8.2: Orthogonal Polynomials and Least Squares Approximation
- Chapter 8.3: Chebyshev Polynomials and Economization of Power Series
- Chapter 8.4: Rational Function Approximation
- Chapter 8.5: Trigonometric Polynomial Approximation
- Chapter 8.6: Fast Fourier Transforms
- Chapter 9.1: Linear Algebra and Eigenvalues
- Chapter 9.2: Orthogonal Matrices and Similarity Transformations
- Chapter 9.3: The Power Method
- Chapter 9.4: Householder's Method
- Chapter 9.5: The QR Algorithm
- Chapter 9.6: Singular Value Decomposition
- Chapter 10.1: Fixed Points for Functions of Several Variables
- Chapter 10.2: Newton's Method
- Chapter 10.3: Quasi-Newton Methods
- Chapter 10.4: Steepest Descent Techniques
- Chapter 10.5: Homotopy and Continuation Methods
- Chapter 11.1: The Linear Shooting Method
- Chapter 11.2: The Shooting Method for Nonlinear Problems
- Chapter 11.3: Finite-Difference Methods for Linear Problems
- Chapter 11.4: Finite-Difference Methods for Nonlinear Problems
- Chapter 11.5: The Rayleigh-Ritz Method
- Chapter 12.1: Elliptic Partial Differential Equations
- Chapter 12.2: Parabolic Partial Differential Equations
- Chapter 12.3: Hyperbolic Partial Differential Equations
- Chapter 12.4: An Introduction to the Finite-Element Method
Change of basis matrix M.
The old basis vectors v_j are combinations Σ_i m_ij w_i of the new basis vectors. The coordinates of c_1 v_1 + ... + c_n v_n = d_1 w_1 + ... + d_n w_n are related by d = Mc. (For n = 2, set v_1 = m_11 w_1 + m_21 w_2 and v_2 = m_12 w_1 + m_22 w_2.)
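A minimal NumPy sketch of the relation d = Mc for n = 2; the new basis w_1, w_2 and the entries of M below are made-up example values:

```python
import numpy as np

# Column j of M holds the coordinates of the old basis vector v_j in the
# new basis w_1, w_2, i.e. v_j = m_1j w_1 + m_2j w_2 (assumed example data).
w1, w2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
M = np.array([[2.0, 1.0],
              [3.0, 1.0]])
v1 = M[0, 0] * w1 + M[1, 0] * w2
v2 = M[0, 1] * w1 + M[1, 1] * w2

c = np.array([5.0, -2.0])          # coordinates in the old basis
d = M @ c                          # coordinates in the new basis

# Both coordinate vectors describe the same underlying vector:
assert np.allclose(c[0] * v1 + c[1] * v2, d[0] * w1 + d[1] * w2)
```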
Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c_0 I + c_1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors are the columns of the Fourier matrix F.
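A short NumPy sketch with assumed example coefficients c: it builds C from powers of the cyclic shift S, checks Cx = c * x, and checks that the eigenvalues are the DFT of c (Fourier eigenvectors):

```python
import numpy as np

c = np.array([2.0, 1.0, 0.0, 3.0])          # assumed example coefficients
n = len(c)
S = np.roll(np.eye(n), 1, axis=0)           # cyclic shift: S e_j = e_{j+1 mod n}
C = sum(c[k] * np.linalg.matrix_power(S, k) for k in range(n))

x = np.array([1.0, -1.0, 2.0, 0.5])
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))   # cyclic c * x
assert np.allclose(C @ x, conv)

# Eigenvalues of a circulant are the DFT of its first column c.
assert np.allclose(np.sort_complex(np.linalg.eigvals(C)),
                   np.sort_complex(np.fft.fft(c)))
```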
Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).
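A small NumPy check of this solvability criterion (the matrix A and the two right-hand sides below are assumed examples): b is in C(A) exactly when appending it as an extra column leaves the rank unchanged.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])
b_in  = 3.0 * A[:, 0] + 1.0 * A[:, 1]      # a combination of the columns
b_out = np.array([1.0, 0.0, 0.0])          # not in C(A) for this A

for b in (b_in, b_out):
    solvable = (np.linalg.matrix_rank(np.column_stack([A, b]))
                == np.linalg.matrix_rank(A))
    print(b, "in C(A):", solvable)
```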
Covariance matrix Σ.
When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x - x̄)(x - x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.
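A minimal NumPy sketch, on assumed random sample data, that forms Σ as the mean of the outer products (x - x̄)(x - x̄)^T and checks positive semidefiniteness:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))             # 1000 samples of 3 variables (assumed)
xbar = X.mean(axis=0)
D = X - xbar
Sigma = (D.T @ D) / len(X)                 # mean of the outer products

assert np.allclose(Sigma, np.cov(X.T, bias=True))   # matches NumPy's estimator
assert np.all(np.linalg.eigvalsh(Sigma) >= -1e-12)  # positive semidefinite
```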
Ellipse (or ellipsoid) x^T Ax = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^{-1}y||^2 = y^T(AA^T)^{-1}y = 1 displayed by eigshow; axis lengths σ_i.)
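A short NumPy illustration with an assumed 2-by-2 positive definite A: along each unit eigenvector v, the point v/√λ lies on the ellipse, so the semi-axis length is 1/√λ.

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])                 # assumed example, eigenvalues 9 and 1
lam, V = np.linalg.eigh(A)

for l, v in zip(lam, V.T):                 # v is a unit eigenvector
    x = v / np.sqrt(l)                     # point on the ellipse along that axis
    assert np.isclose(x @ A @ x, 1.0)      # so the semi-axis length is 1/sqrt(l)
    print("axis direction", v, "length", 1 / np.sqrt(l))
```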
Fibonacci numbers.
0, 1, 1, 2, 3, 5, ... satisfy F_n = F_{n-1} + F_{n-2} = (λ_1^n - λ_2^n)/(λ_1 - λ_2). Growth rate λ_1 = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [[1, 1], [1, 0]].
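A quick NumPy check of Binet's formula and the eigenvalue statement (n = 10 is an arbitrary choice):

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]])                     # Fibonacci matrix
lam1, lam2 = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2

# Powers of A contain Fibonacci numbers: A^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]].
n = 10
F_n = np.linalg.matrix_power(A, n)[0, 1]
assert F_n == 55                                              # F_10
assert np.isclose((lam1**n - lam2**n) / (lam1 - lam2), F_n)   # Binet's formula
assert np.isclose(max(np.linalg.eigvals(A)), lam1)            # growth rate
```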
Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of R^m. Full rank means full column rank or full row rank.
Hypercube matrix P_L.
Row n + 1 counts corners, edges, faces, ... of a cube in R^n.
Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A - λI) if no eigenvalues are repeated; always m(λ) divides p(λ).
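A small NumPy sketch on an assumed 2-by-2 example with distinct eigenvalues, where the minimal and characteristic polynomials coincide:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                 # assumed example, eigenvalues 2 and 3

coeffs = np.poly(A)                        # characteristic polynomial coefficients
# Evaluate p(A) by Horner's rule with *matrix* products (np.polyval would
# use elementwise powers, which is wrong for matrices).
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(2)
assert np.allclose(P, 0)                   # Cayley-Hamilton: p(A) = zero matrix

# With no repeated eigenvalues, m(lambda) = p(lambda) = (lambda-2)(lambda-3):
assert np.allclose((A - 2 * np.eye(2)) @ (A - 3 * np.eye(2)), 0)
```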
Network.
A directed graph that has constants c_1, ..., c_m associated with the edges.
Orthogonal subspaces V and W.
Every v in V is orthogonal to every w in W.
Outer product uv^T.
Column times row = rank-one matrix.
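A one-line check in NumPy (u and v are arbitrary example vectors):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])
A = np.outer(u, v)                               # column u times row v^T, shape (3, 2)

assert np.linalg.matrix_rank(A) == 1
assert np.allclose(A, u[:, None] * v[None, :])   # every row is a multiple of v
```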
Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.
Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
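One standard way to compute the polar decomposition is through the SVD; a minimal NumPy sketch with an assumed 2-by-2 example:

```python
import numpy as np

# From A = U S V^T, take Q = U V^T (orthogonal) and H = V diag(S) V^T (semidefinite).
A = np.array([[2.0, -1.0],
              [1.0,  3.0]])
U, S, Vt = np.linalg.svd(A)
Q = U @ Vt
H = Vt.T @ np.diag(S) @ Vt

assert np.allclose(Q @ H, A)
assert np.allclose(Q.T @ Q, np.eye(2))            # Q is orthogonal
assert np.all(np.linalg.eigvalsh(H) >= 0)         # H is positive semidefinite
```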
Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T Ax > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
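A minimal NumPy sketch checking these equivalent signs of positive definiteness on an assumed example; the pivot check uses the Cholesky factor, whose squared diagonal gives diag(D) in A = LDL^T:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])                     # assumed example

assert np.all(np.linalg.eigvalsh(A) > 0)       # positive eigenvalues

L = np.linalg.cholesky(A)                      # succeeds only if A is pos. def.
pivots = np.diag(L) ** 2                       # diag(D) in A = L D L^T
assert np.all(pivots > 0)                      # pivots 4 and 2 here

x = np.array([1.0, -2.0])
assert x @ A @ x > 0                           # x^T A x > 0 for x != 0
```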
Reflection matrix (Householder) Q = I - 2uu^T.
Unit vector u is reflected to Qu = -u. All x in the plane mirror u^T x = 0 have Qx = x. Notice Q^T = Q^{-1} = Q.
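A short NumPy verification of these three properties (u and x below are assumed example vectors):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
u = u / np.linalg.norm(u)                      # unit vector
Q = np.eye(3) - 2 * np.outer(u, u)             # Householder reflection

assert np.allclose(Q @ u, -u)                  # u reflects to -u
x = np.array([2.0, 1.0, -2.0])                 # u^T x = 0: x lies in the mirror
assert np.isclose(u @ x, 0) and np.allclose(Q @ x, x)
assert np.allclose(Q.T, Q) and np.allclose(Q @ Q, np.eye(3))   # Q^T = Q^{-1} = Q
```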
Row space C(A^T) = all combinations of rows of A.
Column vectors by convention.
Singular Value Decomposition
(SVD) A = UΣV^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with Av_i = σ_i u_i and singular value σ_i > 0. The last columns of V and U are orthonormal bases of the nullspaces of A and A^T.
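A minimal NumPy sketch on an assumed rank-one example, checking Av_i = σ_i u_i and that the trailing columns of V and U span the two nullspaces:

```python
import numpy as np

A = np.outer([1.0, 2.0], [3.0, 0.0, 4.0])      # assumed 2x3 example, rank 1
U, S, Vt = np.linalg.svd(A)
V = Vt.T
r = np.sum(S > 1e-12)                          # numerical rank (here r = 1)

for i in range(r):
    assert np.allclose(A @ V[:, i], S[i] * U[:, i])   # A v_i = sigma_i u_i

assert np.allclose(A @ V[:, r:], 0)            # last columns of V: nullspace of A
assert np.allclose(A.T @ U[:, r:], 0)          # last columns of U: nullspace of A^T
```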
Transpose matrix AT.
Entries (A^T)_ij = A_ji. A^T is n by m, A^T A is square, symmetric, and positive semidefinite. The transposes of AB and A^{-1} are B^T A^T and (A^T)^{-1}.
Vector addition.
v + w = (v_1 + w_1, ..., v_n + w_n) = diagonal of parallelogram.