
Solutions for Chapter 4: Vector Spaces

Full solutions for Differential Equations | 4th Edition

ISBN: 9780321964670

Textbook: Differential Equations
Edition: 4
Author: Stephen W. Goode
ISBN: 9780321964670

This textbook survival guide covers the textbook's chapters and their solutions. All 44 problems in Chapter 4: Vector Spaces have been answered, and more than 20,151 students have viewed the full step-by-step solutions from this chapter. Differential Equations, 4th edition, was written by Stephen W. Goode and is associated with ISBN 9780321964670.

Key math terms and definitions covered in this textbook
  • Cayley-Hamilton Theorem.

    p(λ) = det(A - λI) has p(A) = zero matrix.
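
A quick numerical check of the theorem (a NumPy sketch; the 2x2 matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
coeffs = np.poly(A)            # coefficients of p(lambda) = det(lambda*I - A)

# Evaluate p(A) with matrix powers via Horner's rule
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(2)

print(np.allclose(P, np.zeros((2, 2))))   # True: p(A) is the zero matrix
```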

  • Conjugate Gradient Method.

    A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T Ax - x^T b over growing Krylov subspaces.
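
A minimal sketch of the method in NumPy (assuming A is symmetric positive definite; the tolerance and test matrix are illustrative choices):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Minimize (1/2) x^T A x - x^T b for symmetric positive definite A."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x                      # residual = negative gradient
    p = r.copy()                       # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # A-conjugate update of the direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))        # matches np.linalg.solve(A, b)
```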

  • Factorization

    A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.
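
A sketch of the elimination bookkeeping in NumPy (no row exchanges, so the example matrix is chosen with nonzero pivots):

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU: store multipliers l_ij below the diagonal of L."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]    # multiplier l_ik
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

A = np.array([[2.0, 1.0], [6.0, 8.0]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))               # True: L brings U back to A
```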

  • Full column rank r = n.

    Independent columns, N(A) = {0}, no free variables.

  • Hermitian matrix A^H = Ā^T = A.

    Complex analog a_ji = ā_ij of a symmetric matrix.

  • Krylov subspace K_j(A, b).

    The subspace spanned by b, Ab, ..., A^(j-1)b. Numerical methods approximate A^(-1)b by x_j with residual b - Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
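
A sketch of building an orthonormal basis for K_j(A, b) in NumPy (Gram-Schmidt orthogonalization is one possible choice; note that each step needs only one product with A):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Orthonormal basis for K_j(A, b); assumes the subspace has dimension j."""
    Q = np.zeros((len(b), j))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(1, j):
        v = A @ Q[:, k - 1]                  # one multiplication by A per step
        v -= Q[:, :k] @ (Q[:, :k].T @ v)     # orthogonalize against earlier columns
        Q[:, k] = v / np.linalg.norm(v)
    return Q

A = np.diag([1.0, 2.0, 3.0, 4.0])
b = np.ones(4)
Q = krylov_basis(A, b, 3)
print(np.allclose(Q.T @ Q, np.eye(3)))       # orthonormal basis for K_3(A, b)
```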

  • Markov matrix M.

    All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady-state eigenvector s with Ms = s > 0.
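
A quick NumPy illustration (the 2x2 matrix is an arbitrary positive Markov matrix):

```python
import numpy as np

M = np.array([[0.8, 0.3],
              [0.2, 0.7]])                  # m_ij > 0, each column sums to 1
Mk = np.linalg.matrix_power(M, 50)
print(Mk)                                    # both columns are close to s

w, V = np.linalg.eig(M)
s = np.real(V[:, np.argmax(np.real(w))])     # eigenvector for lambda = 1
print(s / s.sum())                           # steady state, scaled to sum to 1
```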

  • Minimal polynomial of A.

    The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A - λI) if no eigenvalues are repeated; always m(λ) divides p(λ).

  • Norm ‖A‖.

    The ℓ² norm of A is the maximum ratio ‖Ax‖/‖x‖ = σ_max. Then ‖Ax‖ ≤ ‖A‖‖x‖, ‖AB‖ ≤ ‖A‖‖B‖, and ‖A + B‖ ≤ ‖A‖ + ‖B‖. The Frobenius norm has ‖A‖_F² = ΣΣ a_ij²; the ℓ¹ and ℓ∞ norms are the largest column and row sums of |a_ij|.
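
All four norms can be checked directly with NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.norm(A, 2))        # l2 norm: sigma_max, the largest singular value
print(np.linalg.norm(A, 'fro'))    # Frobenius norm: sqrt of the sum of a_ij^2
print(np.linalg.norm(A, 1))        # largest column sum of |a_ij|  -> 6
print(np.linalg.norm(A, np.inf))   # largest row sum of |a_ij|     -> 7
```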

  • Nullspace N(A)

    = All solutions to Ax = 0. Dimension n - r = (# columns) - rank.
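
A short check of the dimension count n - r (a sketch using SciPy's null_space):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # rank r = 1, n = 3 columns
N = null_space(A)
print(N.shape[1])                   # 2 = n - r basis vectors for N(A)
print(np.allclose(A @ N, 0))        # every column solves Ax = 0
```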

  • Orthogonal subspaces.

    Every v in V is orthogonal to every w in W.

  • Pivot.

    The diagonal entry (first nonzero) at the time when a row is used in elimination.

  • Positive definite matrix A.

    Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T Ax > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
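
Both tests are easy to run in NumPy (Cholesky stands in for the pivot test here, since it succeeds exactly when A is positive definite):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
print(np.all(np.linalg.eigvalsh(A) > 0))   # positive eigenvalues
L = np.linalg.cholesky(A)                   # raises LinAlgError if not positive definite
print(np.allclose(L @ L.T, A))
```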

  • Random matrix rand(n) or randn(n).

    MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal for randn.

  • Rank r(A)

    = number of pivots = dimension of column space = dimension of row space.

  • Reflection matrix (Householder) Q = I - 2uu^T.

    Unit vector u is reflected to Qu = -u. All x in the plane mirror u^T x = 0 have Qx = x. Notice Q^T = Q^(-1) = Q.
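
A NumPy sketch of the reflection (u here is an arbitrary unit vector):

```python
import numpy as np

u = np.array([3.0, 4.0])
u /= np.linalg.norm(u)                       # unit vector
Q = np.eye(2) - 2 * np.outer(u, u)
print(np.allclose(Q @ u, -u))                # u reflects to -u
x = np.array([-4.0, 3.0])                    # u.T @ x = 0: x lies in the mirror
print(np.allclose(Q @ x, x))                 # so Qx = x
print(np.allclose(Q, Q.T), np.allclose(Q @ Q, np.eye(2)))   # Q^T = Q^-1 = Q
```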

  • Simplex method for linear programming.

    The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
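
A small linear program can be solved with SciPy's linprog (its modern solvers are not the classical simplex tableau, but the optimum still lands at a corner of the feasible set):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 0.0])                # minimize c^T x
A_eq = np.array([[1.0, 1.0, 1.0]])           # subject to Ax = b
b_eq = np.array([1.0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)   # and x >= 0
print(res.x)                                  # [0. 0. 1.]: a corner of the feasible set
```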

  • Skew-symmetric matrix K.

    The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.
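
A NumPy/SciPy check of all three properties for a 2x2 example:

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[0.0, -1.0],
              [1.0,  0.0]])                  # K^T = -K
print(np.linalg.eigvals(K))                   # pure imaginary: +1j, -1j
Q = expm(K)                                   # e^K, here a plane rotation by 1 radian
print(np.allclose(Q.T @ Q, np.eye(2)))        # orthogonal
```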

  • Symmetric factorizations A = LDL^T and A = QΛQ^T.

    Signs in Λ = signs in D (the number of positive eigenvalues of A equals the number of positive pivots).
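
A tiny worked check (the LDL^T pivots computed by hand for a 2x2 indefinite example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, -1.0]])                  # one positive, one negative eigenvalue
l21 = A[1, 0] / A[0, 0]                       # the single multiplier in L
d = np.array([A[0, 0], A[1, 1] - l21**2 * A[0, 0]])   # pivots: diagonal of D
print(np.sign(d))                             # [ 1. -1.]
print(np.sign(np.linalg.eigvalsh(A)))         # [-1.  1.]: same signs, reordered
```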

  • Wavelets w_jk(t).

    Stretch and shift the time axis to create w_jk(t) = w_00(2^j t - k).
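
A sketch using the Haar wavelet as the mother wavelet w_00 (the definition above does not fix a particular w_00; Haar is just the simplest concrete choice):

```python
import numpy as np

def w00(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return np.where((0 <= t) & (t < 0.5), 1.0,
                    np.where((0.5 <= t) & (t < 1), -1.0, 0.0))

def w(j, k, t):
    return w00(2**j * t - k)    # stretch the axis by 2^j, shift by k

t = np.linspace(0, 1, 9)
print(w(0, 0, t))               # the mother wavelet itself
print(w(1, 1, t))               # compressed to width 1/2, shifted to [1/2, 1)
```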
