- Chapter 1.1: Review of Calculus
- Chapter 1.2: Round-off Errors and Computer Arithmetic
- Chapter 1.3: Algorithms and Convergence
- Chapter 2.1: The Bisection Method
- Chapter 2.2: Fixed-Point Iteration
- Chapter 2.3: Newton's Method and Its Extensions
- Chapter 2.4: Error Analysis for Iterative Methods
- Chapter 2.5: Accelerating Convergence
- Chapter 2.6: Zeros of Polynomials and Muller's Method
- Chapter 3.1: Interpolation and the Lagrange Polynomial
- Chapter 3.2: Data Approximation and Neville's Method
- Chapter 3.3: Divided Differences
- Chapter 3.4: Hermite Interpolation
- Chapter 3.5: Cubic Spline Interpolation
- Chapter 3.6: Parametric Curves
- Chapter 4.1: Numerical Differentiation
- Chapter 4.2: Richardson's Extrapolation
- Chapter 4.3: Elements of Numerical Integration
- Chapter 4.4: Composite Numerical Integration
- Chapter 4.5: Romberg Integration
- Chapter 4.6: Adaptive Quadrature Methods
- Chapter 4.7: Gaussian Quadrature
- Chapter 4.8: Multiple Integrals
- Chapter 4.9: Improper Integrals
- Chapter 4.10: Numerical Software and Chapter Review
- Chapter 5.1: The Elementary Theory of Initial-Value Problems
- Chapter 5.2: Euler's Method
- Chapter 5.3: Higher-Order Taylor Methods
- Chapter 5.4: Runge-Kutta Methods
- Chapter 5.5: Error Control and the Runge-Kutta-Fehlberg Method
- Chapter 5.6: Multistep Methods
- Chapter 5.7: Variable Step-Size Multistep Methods
- Chapter 5.8: Extrapolation Methods
- Chapter 5.9: Higher-Order Equations and Systems of Differential Equations
- Chapter 5.10: Stability
- Chapter 5.11: Stiff Differential Equations
- Chapter 5.12: Numerical Software
- Chapter 6.1: Linear Systems of Equations
- Chapter 6.2: Pivoting Strategies
- Chapter 6.3: Linear Algebra and Matrix Inversion
- Chapter 6.4: The Determinant of a Matrix
- Chapter 6.5: Matrix Factorization
- Chapter 6.6: Special Types of Matrices
- Chapter 6.7: Numerical Software
- Chapter 7.1: Norms of Vectors and Matrices
- Chapter 7.2: Eigenvalues and Eigenvectors
- Chapter 7.3: The Jacobi and Gauss-Seidel Iterative Techniques
- Chapter 7.4: Relaxation Techniques for Solving Linear Systems
- Chapter 7.5: Error Bounds and Iterative Refinement
- Chapter 7.6: The Conjugate Gradient Method
- Chapter 8.1: Discrete Least Squares Approximation
- Chapter 8.2: Orthogonal Polynomials and Least Squares Approximation
- Chapter 8.3: Chebyshev Polynomials and Economization of Power Series
- Chapter 8.4: Rational Function Approximation
- Chapter 8.5: Trigonometric Polynomial Approximation
- Chapter 8.6: Fast Fourier Transforms
- Chapter 9.1: Linear Algebra and Eigenvalues
- Chapter 9.2: Orthogonal Matrices and Similarity Transformations
- Chapter 9.3: The Power Method
- Chapter 9.4: Householder's Method
- Chapter 9.5: The QR Algorithm
- Chapter 9.6: Singular Value Decomposition
- Chapter 10.1: Fixed Points for Functions of Several Variables
- Chapter 10.2: Newton's Method
- Chapter 10.3: Quasi-Newton Methods
- Chapter 10.4: Steepest Descent Techniques
- Chapter 10.5: Homotopy and Continuation Methods
- Chapter 11.1: The Linear Shooting Method
- Chapter 11.2: The Shooting Method for Nonlinear Problems
- Chapter 11.3: Finite-Difference Methods for Linear Problems
- Chapter 11.4: Finite-Difference Methods for Nonlinear Problems
- Chapter 11.5: The Rayleigh-Ritz Method
- Chapter 12.1: Elliptic Partial Differential Equations
- Chapter 12.2: Parabolic Partial Differential Equations
- Chapter 12.3: Hyperbolic Partial Differential Equations
- Chapter 12.4: An Introduction to the Finite-Element Method
Numerical Analysis 10th Edition - Solutions by Chapter
Affine transformation Tv = Av + v0 = linear transformation plus shift.
Companion matrix. Put c1, ..., cn in row n and put n - 1 ones just above the main diagonal. Then det(A - λI) = ±(c1 + c2λ + c3λ^2 + ... + cnλ^(n-1) - λ^n).
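A minimal numpy sketch of this construction (the coefficient values are hypothetical): every eigenvalue of the companion matrix is a root of the polynomial built from its last row.

```python
import numpy as np

# Companion matrix in the convention above: coefficients c1, ..., cn in
# row n, and n - 1 ones just above the main diagonal.
def companion(c):
    n = len(c)
    A = np.diag(np.ones(n - 1), k=1)   # ones above the diagonal
    A[-1, :] = c                        # c1, ..., cn in the last row
    return A

c = [2.0, -1.0, 2.0]                    # hypothetical c1, c2, c3
A = companion(c)

# Each eigenvalue lam satisfies  c1 + c2*lam + c3*lam^2 - lam^3 = 0.
for lam in np.linalg.eigvals(A):
    assert abs(c[0] + c[1] * lam + c[2] * lam**2 - lam**3) < 1e-9
```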
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T Ax - x^T b over growing Krylov subspaces.
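The steps can be sketched in a few lines of numpy (the 2-by-2 test system is hypothetical); each iteration does an exact line search along a direction that is A-conjugate to the earlier ones.

```python
import numpy as np

# Minimal conjugate gradient sketch for positive definite A x = b,
# minimizing (1/2) x^T A x - x^T b over growing Krylov subspaces.
def conjugate_gradient(A, b, tol=1e-10):
    x = np.zeros_like(b)
    r = b - A @ x                       # residual = negative gradient
    p = r.copy()                        # first search direction
    while np.linalg.norm(r) > tol:
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)      # exact line search step
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p            # A-conjugate to previous directions
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # hypothetical positive definite matrix
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic the loop terminates in at most n steps, since the Krylov subspaces stop growing.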
Diagonal matrix D.
dij = 0 if i ≠ j. Block-diagonal: zero outside square blocks Dii.
Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A - λI) = 0.
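A quick numpy check of the definition (the symmetric example matrix is hypothetical): each computed pair satisfies Ax = λx.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # hypothetical symmetric matrix
lams, X = np.linalg.eig(A)               # eigenvalues and eigenvectors

for lam, x in zip(lams, X.T):            # columns of X are eigenvectors
    assert np.linalg.norm(x) > 0         # x != 0
    assert np.allclose(A @ x, lam * x)   # Ax = lam x
```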
Free variable xi.
Column i has no pivot in elimination. We can give the n - r free variables any values, then Ax = b determines the r pivot variables (if solvable!).
Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).
Lucas numbers. Ln = 2, 1, 3, 4, ... satisfy Ln = L(n-1) + L(n-2) = λ1^n + λ2^n, with λ1, λ2 = (1 ± √5)/2 from the Fibonacci matrix [1 1; 1 0]. Compare L0 = 2 with F0 = 0.
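A short pure-Python check of the two formulas: the recurrence and the closed form λ1^n + λ2^n agree term by term.

```python
from math import sqrt

# Lucas numbers from the recurrence, checked against the closed form
# Ln = lam1**n + lam2**n with lam1, lam2 = (1 ± sqrt(5))/2.
lam1, lam2 = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2

L = [2, 1]                        # L0 = 2, L1 = 1 (compare F0 = 0, F1 = 1)
for n in range(2, 15):
    L.append(L[-1] + L[-2])       # Ln = L(n-1) + L(n-2)

for n, Ln in enumerate(L):
    assert round(lam1**n + lam2**n) == Ln
```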
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A - λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
Outer product uv^T.
= column times row = rank-one matrix.
Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T Ax > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
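A numpy sketch of the three equivalent tests (the example matrix is hypothetical): positive eigenvalues, a successful Cholesky factorization (which exposes positive pivots), and x^T Ax > 0 on random nonzero x.

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # hypothetical symmetric matrix

# Test 1: all eigenvalues positive.
assert np.all(np.linalg.eigvalsh(A) > 0)

# Test 2: Cholesky succeeds only when A is positive definite
# (it raises LinAlgError otherwise).
np.linalg.cholesky(A)

# Test 3: x^T A x > 0 for sampled nonzero x.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(2)
    assert x @ A @ x > 0
```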
Projection p = a(a^T b / a^T a) onto the line through a.
P = aa^T / a^T a has rank 1.
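Both forms of the projection can be checked directly in numpy (the vectors a and b are hypothetical); note that P is idempotent, since projecting twice changes nothing.

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])        # hypothetical line direction
b = np.array([3.0, 0.0, 3.0])        # hypothetical vector to project

p = a * (a @ b) / (a @ a)            # projection of b onto the line through a
P = np.outer(a, a) / (a @ a)         # projection matrix aa^T / a^T a

assert np.allclose(P @ b, p)         # matrix form agrees with vector form
assert np.allclose(P @ P, P)         # idempotent: project twice = project once
assert np.linalg.matrix_rank(P) == 1 # rank one
```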
Pseudoinverse A+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A+) = N(A^T). A+A and AA+ are the projection matrices onto the row space and column space. Rank(A+) = rank(A).
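A numpy sketch using the built-in Moore-Penrose inverse (the rank-one example matrix is hypothetical): the Penrose identities and the rank equality can be verified directly.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])              # hypothetical rank-one 3-by-2 matrix

A_plus = np.linalg.pinv(A)              # Moore-Penrose pseudoinverse, 2-by-3

# Penrose identities: A A+ A = A and A+ A A+ = A+.
assert np.allclose(A @ A_plus @ A, A)
assert np.allclose(A_plus @ A @ A_plus, A_plus)

# A+A and AA+ are symmetric projections (onto row space and column space).
assert np.allclose((A_plus @ A).T, A_plus @ A)
assert np.allclose((A @ A_plus).T, A @ A_plus)

# Rank is preserved.
assert np.linalg.matrix_rank(A_plus) == np.linalg.matrix_rank(A)
```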
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
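A small sympy sketch (the example matrix is hypothetical): `rref` returns R together with the pivot columns, and the nonzero rows of R span the row space.

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 0],
            [3, 6, 1]])                 # hypothetical 3-by-3 matrix, rank 2

R, pivot_cols = A.rref()                # reduced row echelon form + pivot columns

# Pivots are 1, with zeros above and below them.
assert R == Matrix([[1, 2, 0],
                    [0, 0, 1],
                    [0, 0, 0]])

# The r nonzero rows of R give a basis for the row space: r = rank(A).
assert len(pivot_cols) == A.rank()
```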
Spanning set v1, ..., vm for V.
Combinations of v1, ..., vm fill the space. The columns of A span C(A)!
Special solutions to As = O.
One free variable is si = 1, other free variables = 0.
Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.
Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c0 + ... + c(n-1)x^(n-1) with p(xi) = bi. Vij = (xi)^(j-1) and det V = product of (xk - xi) for k > i.
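A numpy sketch of polynomial interpolation through the Vandermonde system (the sample points and values are hypothetical); the determinant formula can be checked on the same matrix.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])             # hypothetical interpolation points
b = np.array([1.0, 3.0, 11.0])            # hypothetical values to match

V = np.vander(x, increasing=True)         # V[i, j] = x_i ** j
c = np.linalg.solve(V, b)                 # coefficients of p(x) = c0 + c1 x + c2 x^2

p = np.polynomial.Polynomial(c)
assert np.allclose(p(x), b)               # p(x_i) = b_i at every point

# det V = product of (x_k - x_i) for k > i.
expected_det = (x[1] - x[0]) * (x[2] - x[0]) * (x[2] - x[1])
assert np.isclose(np.linalg.det(V), expected_det)
```

The product formula also shows when the system is solvable: det V is nonzero exactly when the points xi are distinct.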
Vector addition v + w.
v + w = (v1 + w1, ..., vn + wn) = diagonal of parallelogram.
Vector v in R^n.
Sequence of n real numbers v = (v1, ..., vn) = point in R^n.