 4.1: For 1–2, let r and s denote scalars and let v and w denote vectors i...
 4.2: For 1–2, let r and s denote scalars and let v and w denote vectors i...
 4.3: For 3–12, determine whether the given set (together with the usual o...
 4.4: For 3–12, determine whether the given set (together with the usual o...
 4.5: For 3–12, determine whether the given set (together with the usual o...
 4.6: For 3–12, determine whether the given set (together with the usual o...
 4.7: For 3–12, determine whether the given set (together with the usual o...
 4.8: For 3–12, determine whether the given set (together with the usual o...
 4.9: For 3–12, determine whether the given set (together with the usual o...
 4.10: For 3–12, determine whether the given set (together with the usual o...
 4.11: For 3–12, determine whether the given set (together with the usual o...
 4.12: For 3–12, determine whether the given set (together with the usual o...
 4.13: Let V = {(a1, a2) : a1, a2 ∈ R, a2 > 0}. Define addition and scalar m...
 4.14: Let V = {(a1, a2) : a1, a2 ∈ R, a2 > 0}. Define addition and scalar m...
 4.15: Show that {(1, 2), (3, 8)} is a linearly dependent set in the vecto...
 4.16: Show that {(1, 4), (2, 1)} is a basis for the vector space V in Problem 13.
 4.17: What is the dimension of the subspace of P2(R) given by S = span{2 ...
 4.18: For 18–23, decide (with justification) whether S is a subspace of V....
 4.19: For 18–23, decide (with justification) whether S is a subspace of V.
 4.20: For 18–23, decide (with justification) whether S is a subspace of V....
 4.21: For 18–23, decide (with justification) whether S is a subspace of V.
 4.22: For 18–23, decide (with justification) whether S is a subspace of V....
 4.23: For 18–23, decide (with justification) whether S is a subspace of V....
 4.24: For 24–31, decide (with justification) whether or not the given set ...
 4.25: For 24–31, decide (with justification) whether or not the given set ...
 4.26: For 24–31, decide (with justification) whether or not the given set ...
 4.27: For 24–31, decide (with justification) whether or not the given set ...
 4.28: For 24–31, decide (with justification) whether or not the given set ...
 4.29: For 24–31, decide (with justification) whether or not the given set ...
 4.30: For 24–31, decide (with justification) whether or not the given set ...
 4.31: For 24–31, decide (with justification) whether or not the given set ...
 4.32: Prove that if {v1, v2, v3} is linearly independent and v4 is not in...
 4.33: Let A be an m × n matrix. Show that the columns of A are linearly ind...
 4.34: Let S denote the set of all 4 × 4 skew-symmetric matrices. (a) Show t...
 4.35: Let S denote the set of all 4 × 4 matrices such that the entries in e...
 4.36: Let (V, +_V, ·_V) and (W, +_W, ·_W) be vector spaces and define V × W =...
 4.37: Let (V, +_V, ·_V) and (W, +_W, ·_W) be vector spaces and define V × W =...
 4.38: Prove that if A is a matrix whose nullspace and column space are th...
 4.39: Let B = [b1 b2 ... bn] and C = [c1 c2 ... cn]. Prove that if all ent...
 4.40: For 40–43, find a basis and the dimension for the row space, column ...
 4.41: For 40–43, find a basis and the dimension for the row space, column ...
 4.42: For 40–43, find a basis and the dimension for the row space, column ...
 4.43: For 40–43, find a basis and the dimension for the row space, column ...
 4.44: State as many conditions as you can on an n × n matrix A that are equi...
Solutions for Chapter 4: Vector Spaces
Full solutions for Differential Equations, 4th Edition
ISBN: 9780321964670
Chapter 4: Vector Spaces includes 44 full step-by-step solutions. Since all 44 problems in this chapter have been answered, more than 20,151 students have viewed full step-by-step solutions from it. This survival guide was created for Differential Equations, edition 4, associated with ISBN 9780321964670.

Cayley-Hamilton Theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.
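As a quick numerical illustration (a sketch using NumPy; the 2 × 2 matrix is an arbitrary example, not from the text):

```python
import numpy as np

# Example matrix (an arbitrary choice for illustration).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Coefficients of the characteristic polynomial det(A - lambda*I),
# highest degree first: here lambda^2 - 5*lambda + 6.
coeffs = np.poly(A)

# Cayley-Hamilton: substituting A into its own characteristic
# polynomial gives the zero matrix.
pA = coeffs[0] * (A @ A) + coeffs[1] * A + coeffs[2] * np.eye(2)
```

Up to rounding error, pA comes out as the 2 × 2 zero matrix.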

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T A x − x^T b over growing Krylov subspaces.
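A minimal sketch of the iteration in NumPy (illustrative only, with no preconditioning; the small test system is an assumption for demonstration):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve positive definite A x = b by minimizing (1/2) x^T A x - x^T b."""
    x = np.zeros_like(b)
    r = b - A @ x              # residual = negative gradient of the quadratic
    p = r.copy()               # first search direction
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # keeps directions A-conjugate
        p = r_new + beta * p
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # symmetric positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic the method terminates in at most n steps, one per Krylov dimension.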

Factorization A = LU.
If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.
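A sketch of elimination that records the multipliers ℓ_ij (assuming no row exchanges are needed; the example matrix is arbitrary):

```python
import numpy as np

def lu_no_pivot(A):
    """Elimination A -> U, storing multipliers in L so that A = L U.
    Assumes no row exchanges (every pivot nonzero)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]    # multiplier l_ik
            U[i, :] -= L[i, k] * U[k, :]   # eliminate below the pivot
    return L, U

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])
L, U = lu_no_pivot(A)      # L is unit lower triangular, U upper triangular
```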

Full column rank r = n.
Independent columns, N(A) = {0}, no free variables.

Hermitian matrix A^H = (Ā)^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.

Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^(j−1) b. Numerical methods approximate A^(−1) b by x_j with residual b − A x_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
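The "one multiplication by A per step" point can be sketched directly (a NumPy illustration; the matrix and vector are assumptions):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Columns b, Ab, ..., A^(j-1) b; each new column costs one product by A."""
    vecs = [b]
    for _ in range(j - 1):
        vecs.append(A @ vecs[-1])
    return np.column_stack(vecs)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0])
K = krylov_basis(A, b, 2)   # columns are b and Ab
```

In practice these raw columns become ill-conditioned, which is why methods orthogonalize them (Arnoldi/Lanczos).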

Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady state eigenvector s with Ms = s > 0.
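A small sketch of the convergence (the 2 × 2 Markov matrix is an arbitrary example):

```python
import numpy as np

# Columns sum to 1 and all entries are positive.
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Powers of M: every column approaches the steady state eigenvector.
Mk = np.linalg.matrix_power(M, 50)
s = Mk[:, 0]                 # approximate steady state, here (2/3, 1/3)
```

The convergence rate is governed by the second-largest eigenvalue (0.7 here), so 50 powers are far more than enough.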

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; always m(λ) divides p(λ).

Norm ||A||.
The "ℓ^2 norm" of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x||, ||AB|| ≤ ||A|| ||B||, and ||A + B|| ≤ ||A|| + ||B||. Frobenius norm: ||A||_F^2 = Σ Σ a_ij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |a_ij|.
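All four norms are available in NumPy (a sketch; the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

l2   = np.linalg.norm(A, 2)       # sigma_max, the largest singular value
fro  = np.linalg.norm(A, 'fro')   # sqrt(sum of a_ij^2) = sqrt(30) here
l1   = np.linalg.norm(A, 1)       # largest column sum of |a_ij| = 6
linf = np.linalg.norm(A, np.inf)  # largest row sum of |a_ij| = 7
```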

Nullspace N(A).
All solutions to Ax = 0. Dimension n − r = (# columns) − rank.
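A rank-nullity sketch in NumPy (the rank-1 example matrix is an assumption; right singular vectors for zero singular values span the nullspace):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])    # second row is twice the first, so r = 1

r = np.linalg.matrix_rank(A)
nullity = A.shape[1] - r           # n - r = number of free variables

# One vector in N(A): a right singular vector for a zero singular value.
_, _, Vt = np.linalg.svd(A)
x = Vt[-1]
```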

Orthogonal subspaces.
Every v in V is orthogonal to every w in W.

Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.

Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
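The equivalent tests can be sketched in NumPy (the example matrix is arbitrary; Cholesky succeeds exactly when a symmetric matrix is positive definite):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Test 1: all eigenvalues positive.
eigs_positive = bool(np.all(np.linalg.eigvalsh(A) > 0))

# Test 2: Cholesky factorization exists (equivalently, all pivots positive).
try:
    np.linalg.cholesky(A)
    chol_ok = True
except np.linalg.LinAlgError:
    chol_ok = False

# Test 3 (the definition, spot-checked): x^T A x > 0 for a sample x != 0.
x = np.array([1.0, -2.0])
quad = x @ A @ x
```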

Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal distribution for randn.
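NumPy has direct analogs of these MATLAB calls (a sketch using the modern Generator API):

```python
import numpy as np

rng = np.random.default_rng(0)       # seeded for reproducibility
U = rng.random((3, 3))               # like rand(3): uniform on [0, 1)
G = rng.standard_normal((3, 3))      # like randn(3): standard normal entries
```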

Rank r(A).
The number of pivots = dimension of column space = dimension of row space.

Reflection matrix (Householder) Q = I − 2uu^T.
Unit vector u is reflected to Qu = −u. All x in the plane mirror u^T x = 0 have Qx = x. Notice Q^T = Q^(−1) = Q.
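These three properties are easy to verify numerically (a sketch; the vector u is an arbitrary example, normalized to unit length):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
u = u / np.linalg.norm(u)            # Q = I - 2uu^T needs a unit vector

Q = np.eye(3) - 2.0 * np.outer(u, u)
# Qu = -u, Q is symmetric, and Q is its own inverse (Q^T = Q^-1 = Q).
```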

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Skew-symmetric matrix K.
The transpose is −K, since K_ij = −K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.
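A sketch for the 2 × 2 rotation generator (an arbitrary example; the matrix exponential is computed here through the eigendecomposition rather than a library expm routine):

```python
import numpy as np

K = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # K^T = -K

eigvals = np.linalg.eigvals(K)       # +i and -i: pure imaginary

# e^(Kt) via the eigendecomposition K = V diag(w) V^-1.
t = 0.5
w, V = np.linalg.eig(K)
expKt = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real
# For this K, e^(Kt) is the rotation by angle t: an orthogonal matrix.
```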

Symmetric factorizations A = LDL^T and A = QΛQ^T.
The signs in Λ are the same as the signs in D.

Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t − k).
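A concrete sketch with the Haar mother wavelet as w_00 (the choice of Haar is an assumption for illustration; the entry itself leaves w_00 unspecified):

```python
import numpy as np

def w00(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    t = np.asarray(t, dtype=float)
    return np.where((0.0 <= t) & (t < 0.5), 1.0,
           np.where((0.5 <= t) & (t < 1.0), -1.0, 0.0))

def w(j, k, t):
    """Stretch and shift the time axis: w_jk(t) = w_00(2^j t - k)."""
    return w00(2.0 ** j * np.asarray(t) - k)
```

For example, w(1, 1, t) is supported on [1/2, 1): at t = 0.5 the argument 2t − 1 equals 0, where w_00 is +1.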