Solutions for Chapter 8.6: More on Errors; Stability

Full solutions for Elementary Differential Equations and Boundary Value Problems, 11th Edition (ISBN: 9781119256007), by Boyce, DiPrima, and Meade.


Chapter 8.6: More on Errors; Stability includes 6 full step-by-step solutions, and more than 13,823 students have viewed solutions from this chapter. This textbook survival guide was created for Elementary Differential Equations and Boundary Value Problems, 11th Edition, by Boyce, DiPrima, and Meade (ISBN: 9781119256007), and covers the following chapters and their solutions.

Key Math Terms and definitions covered in this textbook
  • Adjacency matrix of a graph.

    Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).

  • Cayley-Hamilton Theorem.

    p(λ) = det(A − λI) has p(A) = zero matrix.
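
    A quick numerical check (my own sketch, assuming NumPy; the matrix A is an illustrative example, not from the text). For a 2×2 matrix the characteristic polynomial is p(λ) = λ² − (trace A)λ + det A, so p(A) should come out as the zero matrix:

        import numpy as np

        A = np.array([[2.0, 1.0],
                      [0.0, 3.0]])
        # Cayley-Hamilton: substitute A into its own characteristic polynomial.
        pA = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
        print(pA)        # the zero matrix, as the theorem predicts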

  • Cholesky factorization

    A = C^T C = (L√D)(L√D)^T for positive definite A.
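
    A concrete factorization (a sketch assuming NumPy; the matrix is illustrative). NumPy returns the lower-triangular factor, giving A = L L^T:

        import numpy as np

        A = np.array([[4.0, 2.0],
                      [2.0, 3.0]])          # symmetric positive definite
        L = np.linalg.cholesky(A)           # lower triangular, positive diagonal
        print(np.allclose(L @ L.T, A))      # True: L L^T reconstructs A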

  • Conjugate Gradient Method.

    A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing ½x^T Ax − x^T b over growing Krylov subspaces.
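
    A bare-bones version of those steps (my own sketch in NumPy, not the book's code; the name cg and the test matrix are illustrative):

        import numpy as np

        def cg(A, b, tol=1e-10, max_iter=100):
            # Solve positive definite A x = b by conjugate gradients.
            x = np.zeros_like(b)
            r = b - A @ x                    # residual = negative gradient
            p = r.copy()                     # first search direction
            for _ in range(max_iter):
                Ap = A @ p
                alpha = (r @ r) / (p @ Ap)   # exact line search along p
                x = x + alpha * p
                r_new = r - alpha * Ap
                if np.linalg.norm(r_new) < tol:
                    break
                beta = (r_new @ r_new) / (r @ r)
                p = r_new + beta * p         # next direction, A-conjugate to the last
                r = r_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(cg(A, b))                      # agrees with np.linalg.solve(A, b)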

  • Covariance matrix Σ.

    When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x − x̄)(x − x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.
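
    As a check (a sketch assuming NumPy; the random data is illustrative), the mean-centered outer-product average matches the library covariance:

        import numpy as np

        X = np.random.randn(1000, 3)               # 1000 samples of 3 variables
        Xc = X - X.mean(axis=0)                    # subtract the means
        Sigma = (Xc.T @ Xc) / len(X)               # mean of (x - x̄)(x - x̄)^T
        print(np.allclose(Sigma, np.cov(X.T, bias=True)))   # True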

  • Dot product = Inner product x^T y = x_1 y_1 + ... + x_n y_n.

    Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)_ij = (row i of A) · (column j of B).
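
    A small illustration (assuming NumPy; np.vdot conjugates its first argument, matching the complex dot product above):

        import numpy as np

        x = np.array([1.0, 2.0, 3.0])
        y = np.array([4.0, 5.0, 6.0])
        print(np.dot(x, y))          # 32 = 1*4 + 2*5 + 3*6
        z = np.array([1j, 1.0])
        print(np.vdot(z, z))         # (2+0j): conjugation makes z̄^T z = ||z||^2 real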

  • Elimination.

    A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓ_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
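
    A concrete factorization (a sketch assuming SciPy; note that SciPy's convention is A = P L U, so its P is the transpose of the P in PA = LU above):

        import numpy as np
        from scipy.linalg import lu

        A = np.array([[0.0, 2.0],
                      [1.0, 4.0]])           # needs a row exchange, so P is not I
        P, L, U = lu(A)
        print(np.allclose(P @ L @ U, A))     # True; multipliers sit below L's diagonal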

  • Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).

    Use the conjugate transpose A^H in place of A^T for complex A.

  • Full row rank r = m.

    Independent rows, at least one solution to Ax = b, column space is all of R^m. Full rank means full column rank or full row rank.

  • Linear combination cv + dw or Σ c_j v_j.

    Vector addition and scalar multiplication.

  • Nilpotent matrix N.

    Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
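
    A tiny check (assuming NumPy; the strictly triangular N is my example):

        import numpy as np

        N = np.array([[0.0, 1.0, 2.0],
                      [0.0, 0.0, 3.0],
                      [0.0, 0.0, 0.0]])      # zero diagonal
        print(np.linalg.matrix_power(N, 3))  # the 3x3 zero matrix: N^3 = 0
        print(np.linalg.eigvals(N))          # all eigenvalues are 0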

  • Orthonormal vectors q_1, ..., q_n.

    Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^(-1) and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
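
    To see both facts numerically (a sketch assuming NumPy; QR factorization is one standard way to produce orthonormal columns):

        import numpy as np

        Q, _ = np.linalg.qr(np.random.randn(4, 4))   # orthonormal columns
        print(np.allclose(Q.T @ Q, np.eye(4)))       # True: Q^T Q = I
        v = np.random.randn(4)
        expansion = sum((v @ Q[:, j]) * Q[:, j] for j in range(4))
        print(np.allclose(expansion, v))             # True: v = sum (v^T q_j) q_j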

  • Particular solution x_p.

    Any solution to Ax = b; often x_p has free variables = 0.

  • Positive definite matrix A.

    Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T Ax > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
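
    Two quick tests of the definition (a sketch assuming NumPy; the matrix is illustrative):

        import numpy as np

        A = np.array([[ 2.0, -1.0],
                      [-1.0,  2.0]])
        print(np.all(np.linalg.eigvalsh(A) > 0))   # True: all eigenvalues positive
        x = np.random.randn(2)
        print(x @ A @ x > 0)                       # True for any nonzero x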

  • Rank r(A)

    = number of pivots = dimension of column space = dimension of row space.
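
    A short check that row rank equals column rank (assuming NumPy; the matrix is my example with one dependent row):

        import numpy as np

        A = np.array([[1.0, 2.0, 3.0],
                      [2.0, 4.0, 6.0],     # 2 x (row 1): dependent
                      [0.0, 1.0, 1.0]])
        print(np.linalg.matrix_rank(A))    # 2 pivots -> rank 2
        print(np.linalg.matrix_rank(A.T))  # 2: row rank = column rank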

  • Rayleigh quotient q(x) = x^T Ax / x^T x for symmetric A: λ_min ≤ q(x) ≤ λ_max.

    Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
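
    A numerical confirmation (a sketch assuming NumPy; x is a random test vector):

        import numpy as np

        A = np.array([[2.0, 1.0],
                      [1.0, 3.0]])                # symmetric
        lams = np.linalg.eigvalsh(A)              # sorted eigenvalues
        x = np.random.randn(2)
        q = (x @ A @ x) / (x @ x)                 # Rayleigh quotient
        print(lams[0] <= q <= lams[-1])           # True: trapped between extremes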

  • Reduced row echelon form R = rref(A).

    Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
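
    SymPy computes R exactly (SymPy is my assumption here; NumPy has no built-in rref):

        from sympy import Matrix

        A = Matrix([[1, 2, 3],
                    [2, 4, 6],
                    [0, 1, 1]])
        R, pivot_cols = A.rref()
        print(R)                # pivots are 1 with zeros above and below
        print(pivot_cols)       # (0, 1): r = 2 nonzero rows span the row space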

  • Simplex method for linear programming.

    The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
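
    In practice (a sketch assuming SciPy; its "highs" solver stands in for a textbook simplex code but solves the same minimize-c^T x problem):

        import numpy as np
        from scipy.optimize import linprog

        c = np.array([1.0, 2.0, 0.0])                  # cost vector
        A_eq = np.array([[1.0, 1.0, 1.0]])             # constraint A x = b
        b_eq = np.array([4.0])
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
        print(res.x)                                   # optimum at a corner: [0, 0, 4]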

  • Skew-symmetric matrix K.

    The transpose is −K, since K_ij = −K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^(Kt) is an orthogonal matrix.
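
    A check of the last claim (a sketch assuming SciPy for the matrix exponential):

        import numpy as np
        from scipy.linalg import expm

        K = np.array([[0.0, -1.0],
                      [1.0,  0.0]])              # K^T = -K
        Q = expm(K * 0.5)                        # e^(Kt) with t = 0.5: a rotation
        print(np.allclose(Q.T @ Q, np.eye(2)))   # True: orthogonal
        print(np.linalg.eigvals(K))              # ±1j: pure imaginary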

  • Symmetric matrix A.

    The transpose is A^T = A, and a_ij = a_ji. A^(-1) is also symmetric.