Numerical Analysis 9th Edition - Solutions by Chapter

- Chapter 1.1: Review of Calculus
- Chapter 1.2: Round-off Errors and Computer Arithmetic
- Chapter 1.3: Algorithms and Convergence
- Chapter 2.1: The Bisection Method
- Chapter 2.2: Fixed-Point Iteration
- Chapter 2.3: Newton's Method and Its Extensions
- Chapter 2.4: Error Analysis for Iterative Methods
- Chapter 2.5: Accelerating Convergence
- Chapter 2.6: Zeros of Polynomials and Muller's Method
- Chapter 3.1: Interpolation and the Lagrange Polynomial
- Chapter 3.2: Data Approximation and Neville's Method
- Chapter 3.3: Divided Differences
- Chapter 3.4: Hermite Interpolation
- Chapter 3.5: Cubic Spline Interpolation
- Chapter 3.6: Parametric Curves
- Chapter 4.1: Numerical Differentiation
- Chapter 4.2: Richardson's Extrapolation
- Chapter 4.3: Elements of Numerical Integration
- Chapter 4.4: Composite Numerical Integration
- Chapter 4.5: Romberg Integration
- Chapter 4.6: Adaptive Quadrature Methods
- Chapter 4.7: Gaussian Quadrature
- Chapter 4.8: Multiple Integrals
- Chapter 4.9: Improper Integrals
- Chapter 5.1: The Elementary Theory of Initial-Value Problems
- Chapter 5.2: Euler's Method
- Chapter 5.3: Higher-Order Taylor Methods
- Chapter 5.4: Runge-Kutta Methods
- Chapter 5.5: Error Control and the Runge-Kutta-Fehlberg Method
- Chapter 5.6: Multistep Methods
- Chapter 5.7: Variable Step-Size Multistep Methods
- Chapter 5.8: Extrapolation Methods
- Chapter 5.9: Higher-Order Equations and Systems of Differential Equations
- Chapter 5.10: Stability
- Chapter 5.11: Stiff Differential Equations
- Chapter 6.1: Linear Systems of Equations
- Chapter 6.2: Pivoting Strategies
- Chapter 6.3: Linear Algebra and Matrix Inversion
- Chapter 6.4: The Determinant of a Matrix
- Chapter 6.5: Matrix Factorization
- Chapter 6.6: Special Types of Matrices
- Chapter 7.1: Norms of Vectors and Matrices
- Chapter 7.2: Eigenvalues and Eigenvectors
- Chapter 7.3: The Jacobi and Gauss-Seidel Iterative Techniques
- Chapter 7.4: Relaxation Techniques for Solving Linear Systems
- Chapter 7.5: Error Bounds and Iterative Refinement
- Chapter 7.6: The Conjugate Gradient Method
- Chapter 8.1: Discrete Least Squares Approximation
- Chapter 8.2: Orthogonal Polynomials and Least Squares Approximation
- Chapter 8.3: Chebyshev Polynomials and Economization of Power Series
- Chapter 8.4: Rational Function Approximation
- Chapter 8.5: Trigonometric Polynomial Approximation
- Chapter 8.6: Fast Fourier Transforms
- Chapter 9.1: Linear Algebra and Eigenvalues
- Chapter 9.2: Orthogonal Matrices and Similarity Transformations
- Chapter 9.3: The Power Method
- Chapter 9.4: Householder's Method
- Chapter 9.5: The QR Algorithm
- Chapter 9.6: Singular Value Decomposition
- Chapter 10.1: Fixed Points for Functions of Several Variables
- Chapter 10.2: Newton's Method
- Chapter 10.3: Quasi-Newton Methods
- Chapter 10.4: Steepest Descent Techniques
- Chapter 10.5: Homotopy and Continuation Methods
- Chapter 11.1: The Linear Shooting Method
- Chapter 11.2: The Shooting Method for Nonlinear Problems
- Chapter 11.3: Finite-Difference Methods for Linear Problems
- Chapter 11.4: Finite-Difference Methods for Nonlinear Problems
- Chapter 11.5: The Rayleigh-Ritz Method
- Chapter 12.1: Elliptic Partial Differential Equations
- Chapter 12.2: Parabolic Partial Differential Equations
- Chapter 12.3: Hyperbolic Partial Differential Equations
- Chapter 12.4: An Introduction to the Finite-Element Method
Cayley-Hamilton Theorem.
p(λ) = det(A - λI) has p(A) = zero matrix.
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T A x - x^T b over growing Krylov subspaces.
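
A minimal Python/NumPy sketch of that minimization (the helper name conjugate_gradient and the small test system are illustrative, not from the text):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive definite A by minimizing
    (1/2) x^T A x - x^T b over growing Krylov subspaces."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x              # residual = negative gradient of the quadratic
    p = r.copy()               # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p    # keep directions A-conjugate
        rs_old = rs_new
    return x

# illustrative positive definite system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))          # agrees with np.linalg.solve(A, b)
```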
Cross product u × v in R^3.
Vector perpendicular to u and v, length ‖u‖ ‖v‖ |sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
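
A quick NumPy check of these properties (the vectors are made-up examples):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

w = np.cross(u, v)       # expands the [i j k; u1 u2 u3; v1 v2 v3] "determinant"
print(w @ u, w @ v)      # both 0.0: w is perpendicular to u and v

# length of u x v equals the parallelogram area ||u|| ||v|| |sin θ|
cos_t = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
sin_t = np.sqrt(1 - cos_t**2)
print(np.linalg.norm(w), np.linalg.norm(u) * np.linalg.norm(v) * sin_t)
```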
Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A - λI) = 0.
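
Both statements are easy to verify numerically (the symmetric matrix below is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                  # arbitrary example matrix

lams, X = np.linalg.eig(A)                  # eigenvalues and eigenvector columns
for lam, x in zip(lams, X.T):
    print(np.allclose(A @ x, lam * x))                          # Ax = λx, x ≠ 0
    print(np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))  # det(A - λI) = 0
```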
Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n - 1)/2 edges between nodes. A tree has only n - 1 edges and no closed loops.
Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.
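
One way to build such a matrix, shown here as a hand-rolled sketch (scipy.linalg.hankel offers a ready-made constructor):

```python
import numpy as np

def hankel(first_col, last_row):
    # entry h[i, j] depends only on i + j, so one sequence c holds all values
    c = np.concatenate([first_col, last_row[1:]])
    n, m = len(first_col), len(last_row)
    return np.array([[c[i + j] for j in range(m)] for i in range(n)])

H = hankel([1, 2, 3], [3, 4, 5])
print(H)        # each antidiagonal is constant
```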
Hermitian matrix A^H = Ā^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.
Left inverse A+.
If A has full column rank n, then A+ = (A^T A)^(-1) A^T has A+ A = I_n.
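
A short check of the formula in NumPy (the 3×2 matrix is a made-up full-column-rank example):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                  # full column rank, n = 2

A_plus = np.linalg.inv(A.T @ A) @ A.T       # A+ = (A^T A)^(-1) A^T
print(np.allclose(A_plus @ A, np.eye(2)))   # A+ A = I_n
```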
Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.
Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
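
A small NumPy illustration of the triangular-with-zero-diagonal case (the entries are arbitrary):

```python
import numpy as np

N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])                       # zero diagonal, triangular

print(np.allclose(np.linalg.matrix_power(N, 3), 0))   # N^3 = zero matrix
print(np.linalg.eigvals(N))                           # every eigenvalue is 0
```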
Normal matrix N.
If N N^T = N^T N, then N has orthonormal (complex) eigenvectors.
Nullspace N(A).
All solutions to Ax = 0. Dimension n - r = (# columns) - rank.
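
The count n - r can be checked with SciPy's null_space (the rank-1 matrix below is illustrative):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank r = 1, n = 3 columns

Z = null_space(A)                      # orthonormal basis for N(A)
print(Z.shape[1])                      # dimension n - r = 3 - 1 = 2
print(np.allclose(A @ Z, 0))           # every basis column solves Ax = 0
```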
Pivot d.
The diagonal entry (first nonzero) at the time when a row is used in elimination.
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If columns of A = basis for S then P = A(A^T A)^(-1) A^T.
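
All of those properties can be verified directly in NumPy (the basis and the vector b are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                       # columns = basis for S

P = A @ np.linalg.inv(A.T @ A) @ A.T             # P = A (A^T A)^(-1) A^T
b = np.array([6.0, 0.0, 0.0])

p = P @ b                                        # closest point to b in S
e = b - p                                        # error component
print(np.allclose(P @ P, P), np.allclose(P, P.T))   # P^2 = P = P^T
print(np.allclose(A.T @ e, 0))                       # e is perpendicular to S
print(np.linalg.eigvalsh(P).round(8))                # eigenvalues 0 and 1
```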
Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0 1] for rand and standard normal distribution for randn.
Reflection matrix (Householder) Q = I - 2uu^T.
Unit vector u is reflected to Qu = -u. All x in the plane mirror u^T x = 0 have Qx = x. Notice Q^T = Q^(-1) = Q.
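
A numeric check of all three facts (u and x are made up; x is chosen so that u^T x = 0):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
u /= np.linalg.norm(u)                       # make u a unit vector

Q = np.eye(3) - 2.0 * np.outer(u, u)         # Q = I - 2uu^T

print(np.allclose(Q @ u, -u))                # u reflects to -u
x = np.array([2.0, -1.0, 0.0])               # u^T x = 0: x lies in the mirror plane
print(np.allclose(Q @ x, x))                 # mirror-plane vectors are unchanged
print(np.allclose(Q.T @ Q, np.eye(3)), np.allclose(Q.T, Q))  # Q^T = Q^(-1) = Q
```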
Row picture of Ax = b.
Each equation gives a plane in R^n; the planes intersect at x.
Similar matrices A and B.
Every B = M^(-1) A M has the same eigenvalues as A.
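
A quick check with an arbitrary invertible M:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])                   # any invertible M works

B = np.linalg.inv(M) @ A @ M                 # B = M^(-1) A M
print(np.sort(np.linalg.eigvals(A)))         # same eigenvalues...
print(np.sort(np.linalg.eigvals(B)))         # ...for A and B
```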
Stiffness matrix K.
If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A where C has spring constants from Hooke's Law and Ax = stretching.
Triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖.
For matrix norms ‖A + B‖ ≤ ‖A‖ + ‖B‖.
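
Both versions are easy to spot-check (the vectors and matrices below are arbitrary; the matrix case uses the 2-norm):

```python
import numpy as np

u, v = np.array([3.0, 0.0]), np.array([0.0, 4.0])
print(np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v))  # True

A, B = np.eye(2), np.ones((2, 2))
lhs = np.linalg.norm(A + B, 2)                       # spectral norm of A + B
rhs = np.linalg.norm(A, 2) + np.linalg.norm(B, 2)
print(lhs <= rhs)                                    # True
```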