Numerical Analysis 9th Edition - Solutions by Chapter

- Chapter 1.1: Review of Calculus
- Chapter 1.2: Round-off Errors and Computer Arithmetic
- Chapter 1.3: Algorithms and Convergence
- Chapter 2.1: The Bisection Method
- Chapter 2.2: Fixed-Point Iteration
- Chapter 2.3: Newton's Method and Its Extensions
- Chapter 2.4: Error Analysis for Iterative Methods
- Chapter 2.5: Accelerating Convergence
- Chapter 2.6: Zeros of Polynomials and Muller's Method
- Chapter 3.1: Interpolation and the Lagrange Polynomial
- Chapter 3.2: Data Approximation and Neville's Method
- Chapter 3.3: Divided Differences
- Chapter 3.4: Hermite Interpolation
- Chapter 3.5: Cubic Spline Interpolation
- Chapter 3.6: Parametric Curves
- Chapter 4.1: Numerical Differentiation
- Chapter 4.2: Richardson's Extrapolation
- Chapter 4.3: Elements of Numerical Integration
- Chapter 4.4: Composite Numerical Integration
- Chapter 4.5: Romberg Integration
- Chapter 4.6: Adaptive Quadrature Methods
- Chapter 4.7: Gaussian Quadrature
- Chapter 4.8: Multiple Integrals
- Chapter 4.9: Improper Integrals
- Chapter 5.1: The Elementary Theory of Initial-Value Problems
- Chapter 5.2: Euler's Method
- Chapter 5.3: Higher-Order Taylor Methods
- Chapter 5.4: Runge-Kutta Methods
- Chapter 5.5: Error Control and the Runge-Kutta-Fehlberg Method
- Chapter 5.6: Multistep Methods
- Chapter 5.7: Variable Step-Size Multistep Methods
- Chapter 5.8: Extrapolation Methods
- Chapter 5.9: Higher-Order Equations and Systems of Differential Equations
- Chapter 5.10: Stability
- Chapter 5.11: Stiff Differential Equations
- Chapter 6.1: Linear Systems of Equations
- Chapter 6.2: Pivoting Strategies
- Chapter 6.3: Linear Algebra and Matrix Inversion
- Chapter 6.4: The Determinant of a Matrix
- Chapter 6.5: Matrix Factorization
- Chapter 6.6: Special Types of Matrices
- Chapter 7.1: Norms of Vectors and Matrices
- Chapter 7.2: Eigenvalues and Eigenvectors
- Chapter 7.3: The Jacobi and Gauss-Seidel Iterative Techniques
- Chapter 7.4: Relaxation Techniques for Solving Linear Systems
- Chapter 7.5: Error Bounds and Iterative Refinement
- Chapter 7.6: The Conjugate Gradient Method
- Chapter 8.1: Discrete Least Squares Approximation
- Chapter 8.2: Orthogonal Polynomials and Least Squares Approximation
- Chapter 8.3: Chebyshev Polynomials and Economization of Power Series
- Chapter 8.4: Rational Function Approximation
- Chapter 8.5: Trigonometric Polynomial Approximation
- Chapter 8.6: Fast Fourier Transforms
- Chapter 9.1: Linear Algebra and Eigenvalues
- Chapter 9.2: Orthogonal Matrices and Similarity Transformations
- Chapter 9.3: The Power Method
- Chapter 9.4: Householder's Method
- Chapter 9.5: The QR Algorithm
- Chapter 9.6: Singular Value Decomposition
- Chapter 10.1: Fixed Points for Functions of Several Variables
- Chapter 10.2: Newton's Method
- Chapter 10.3: Quasi-Newton Methods
- Chapter 10.4: Steepest Descent Techniques
- Chapter 10.5: Homotopy and Continuation Methods
- Chapter 11.1: The Linear Shooting Method
- Chapter 11.2: The Shooting Method for Nonlinear Problems
- Chapter 11.3: Finite-Difference Methods for Linear Problems
- Chapter 11.4: Finite-Difference Methods for Nonlinear Problems
- Chapter 11.5: The Rayleigh-Ritz Method
- Chapter 12.1: Elliptic Partial Differential Equations
- Chapter 12.2: Parabolic Partial Differential Equations
- Chapter 12.3: Hyperbolic Partial Differential Equations
- Chapter 12.4: An Introduction to the Finite-Element Method
Back substitution.
Upper triangular systems are solved in reverse order, $x_n$ to $x_1$.
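As a concrete sketch of this reverse-order sweep, here is a small Python/NumPy routine (the function name `back_substitute` and the sample system are our own illustration, not from the text):

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, working from x_n back to x_1."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract the already-known components, then divide by the pivot U[i, i].
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([9.0, 13.0, 8.0])
print(back_substitute(U, b))   # [2. 3. 2.]
```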
Cayley-Hamilton theorem.
$p(\lambda) = \det(A - \lambda I)$ has $p(A) =$ zero matrix.
Companion matrix.
Put $c_1, \ldots, c_n$ in row $n$ and put $n - 1$ ones just above the main diagonal. Then $\det(A - \lambda I) = \pm(c_1 + c_2\lambda + c_3\lambda^2 + \cdots + c_n\lambda^{n-1} - \lambda^n)$.
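A quick way to see this recipe in action is to build the matrix and compare its eigenvalues with the polynomial's roots; this Python sketch (our own `companion` helper and coefficients) does exactly that:

```python
import numpy as np

def companion(c):
    """Companion matrix: c_1, ..., c_n in row n, ones just above the diagonal."""
    n = len(c)
    A = np.zeros((n, n))
    A[np.arange(n - 1), np.arange(1, n)] = 1.0   # the n-1 superdiagonal ones
    A[-1, :] = c                                  # coefficients in the last row
    return A

c = [6.0, -11.0, 6.0]                 # det(A - xI) = +-(6 - 11x + 6x^2 - x^3)
A = companion(c)
print(np.roots([-1.0, 6.0, -11.0, 6.0]))  # roots 3, 2, 1 ...
print(np.linalg.eigvals(A))               # ... the same set as the eigenvalues of A
```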
Condition number.
$\operatorname{cond}(A) = \kappa(A) = \|A\|\,\|A^{-1}\| = \sigma_{\max}/\sigma_{\min}$. In $Ax = b$, the relative change $\|\delta x\|/\|x\|$ is less than $\operatorname{cond}(A)$ times the relative change $\|\delta b\|/\|b\|$. Condition numbers measure the sensitivity of the output to changes in the input.
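A minimal numerical check of this bound, using a nearly singular matrix and a perturbation of our own choosing:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])      # nearly singular, so cond(A) is large
b = np.array([2.0, 2.0001])
db = np.array([0.0, 1e-4])        # small perturbation of the right-hand side

x  = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x

lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = np.linalg.cond(A) * np.linalg.norm(db) / np.linalg.norm(b)
print(f"{lhs:.3e} <= {rhs:.3e}")  # relative change in x <= cond(A) * relative change in b
```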
Diagonal matrix D.
$d_{ij} = 0$ if $i \neq j$. Block-diagonal: zero outside square blocks $D_{ii}$.
Diagonalizable matrix A.
Must have $n$ independent eigenvectors (in the columns of $S$; automatic with $n$ different eigenvalues). Then $S^{-1}AS = \Lambda =$ eigenvalue matrix.
Diagonalization.
$\Lambda = S^{-1}AS$. $\Lambda$ = eigenvalue matrix and $S$ = eigenvector matrix of $A$. $A$ must have $n$ independent eigenvectors to make $S$ invertible. All $A^k = S\Lambda^k S^{-1}$.
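A short sketch of using the factorization to compute a matrix power (the example matrix is ours; `np.linalg.eig` returns the eigenvector matrix $S$ column by column):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, S = np.linalg.eig(A)          # eigenvalues lam, eigenvectors in columns of S
Lambda5 = np.diag(lam ** 5)        # Lambda^5 is diagonal, so just power the entries

A5 = S @ Lambda5 @ np.linalg.inv(S)                    # A^5 = S Lambda^5 S^-1
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))   # True
```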
Factorization $A = LU$.
If elimination takes $A$ to $U$ without row exchanges, then the lower triangular $L$, with multipliers $\ell_{ij}$ (and $\ell_{ii} = 1$), brings $U$ back to $A$.
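A bare-bones sketch of this factorization, assuming no zero pivots are encountered (the name `lu_no_pivot` and the test matrix are ours):

```python
import numpy as np

def lu_no_pivot(A):
    """Factor A = LU by Gaussian elimination without row exchanges."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # multiplier l_ik (pivot must be nonzero)
            U[i, k:] -= L[i, k] * U[k, k:]  # eliminate the entry below the pivot
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))   # True: L brings U back to A
```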
Gauss-Jordan method.
Invert $A$ by row operations on $[A \; I]$ to reach $[I \; A^{-1}]$.
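A compact sketch of the method (we add partial pivoting for numerical safety; `gauss_jordan_inverse` is our own name):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce the augmented matrix [A I] until it becomes [I A^-1]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])        # build [A I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # partial pivoting
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                          # scale pivot row to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]         # clear the rest of the column
    return M[:, n:]                                    # right half is now A^-1

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(gauss_jordan_inverse(A))   # [[ 3. -1.] [-5.  2.]]
```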
Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.
Hilbert matrix hilb(n).
Entries $H_{ij} = 1/(i + j - 1) = \int_0^1 x^{i-1} x^{j-1} \, dx$. Positive definite but extremely small $\lambda_{\min}$ and large condition number: $H$ is ill-conditioned.
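A quick demonstration of this ill-conditioning, using SciPy's `hilbert` helper (the sizes tried are an arbitrary choice of ours):

```python
import numpy as np
from scipy.linalg import hilbert

for n in (4, 8, 12):
    H = hilbert(n)   # H[i, j] = 1/(i + j + 1) with 0-based indices
    print(f"n = {n:2d}: cond(H) = {np.linalg.cond(H):.2e}")
# The condition number grows explosively with n, reaching roughly 1e16 by n = 12.
```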
Kronecker product (tensor product) $A \otimes B$.
Blocks $a_{ij}B$; eigenvalues $\lambda_p(A)\lambda_q(B)$.
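This eigenvalue rule is easy to verify numerically with `np.kron` (the two matrices are our own example):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0, 1.0], [0.0, 4.0]])

K = np.kron(A, B)                    # blocks a_ij * B
eigs = sorted(np.linalg.eigvals(K))
products = sorted(lp * lq for lp in np.linalg.eigvals(A)
                          for lq in np.linalg.eigvals(B))
print(np.allclose(eigs, products))   # True: eigenvalues are lambda_p(A) * lambda_q(B)
```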
Norm $\|A\|$ of a matrix.
The "$\ell^2$ norm" of $A$ is the maximum ratio $\|Ax\|/\|x\| = \sigma_{\max}$. Then $\|Ax\| \le \|A\|\,\|x\|$, $\|AB\| \le \|A\|\,\|B\|$, and $\|A + B\| \le \|A\| + \|B\|$. Frobenius norm: $\|A\|_F^2 = \sum\sum a_{ij}^2$. The $\ell^1$ and $\ell^\infty$ norms are the largest column and row sums of $|a_{ij}|$.
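For comparison, all four norms computed in NumPy on a small test matrix of our own:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

print(np.linalg.norm(A, 2))       # l2 norm: sigma_max (largest singular value)
print(np.linalg.norm(A, 'fro'))   # Frobenius norm: sqrt(1 + 4 + 9 + 16) = sqrt(30)
print(np.linalg.norm(A, 1))       # l1 norm: largest column sum of |a_ij| = 6.0
print(np.linalg.norm(A, np.inf))  # l-infinity norm: largest row sum of |a_ij| = 7.0
```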
Orthogonal subspaces V and W.
Every $v$ in $V$ is orthogonal to every $w$ in $W$.
Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.
Projection matrix P onto subspace S.
Projection $p = Pb$ is the closest point to $b$ in $S$; the error $e = b - Pb$ is perpendicular to $S$. $P^2 = P = P^T$, eigenvalues are $1$ or $0$, eigenvectors are in $S$ or $S^\perp$. If the columns of $A$ form a basis for $S$, then $P = A(A^TA)^{-1}A^T$.
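A sketch that builds $P$ from a basis and checks these properties (the basis matrix `A` and vector `b` are our own picks):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])               # columns form a basis for a plane S in R^3

P = A @ np.linalg.inv(A.T @ A) @ A.T     # P = A (A^T A)^-1 A^T
b = np.array([1.0, 2.0, 5.0])

p = P @ b                                # closest point to b in S
e = b - p                                # error, perpendicular to S
print(np.allclose(P @ P, P), np.allclose(P, P.T))  # True True: P^2 = P = P^T
print(np.allclose(A.T @ e, 0))           # True: e is orthogonal to the columns of A
```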
Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on $[0, 1]$ for rand and drawn from the standard normal distribution for randn.
Schur complement $S = D - CA^{-1}B$.
Appears in block elimination on $\begin{bmatrix} A & B \\ C & D \end{bmatrix}$.
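A small numerical check of this identity via block elimination (the four blocks are our own example; `np.block` assembles the partitioned matrix):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0], [2.0, 1.0]])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
D = np.array([[5.0, 2.0], [1.0, 4.0]])

S = D - C @ np.linalg.inv(A) @ B     # Schur complement of A

# Eliminating the C block from M = [[A, B], [C, D]] leaves S in the (2,2) position.
M = np.block([[A, B], [C, D]])
E = np.block([[np.eye(2), np.zeros((2, 2))],
              [-C @ np.linalg.inv(A), np.eye(2)]])
print(np.allclose((E @ M)[2:, 2:], S))   # True
```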
Similar matrices A and B.
Every $B = M^{-1}AM$ has the same eigenvalues as $A$.
Vector addition.
$v + w = (v_1 + w_1, \ldots, v_n + w_n)$ = diagonal of the parallelogram.