 3.3.1: Determine whether the following vectors are linearly independent in...
 3.3.2: Determine whether the following vectors are linearly independent in...
 3.3.3: For each of the sets of vectors in Exercise 2, describe geometrical...
 3.3.4: Determine whether the following vectors are linearly independent in...
 3.3.5: Let x1, x2, ... , xk be linearly independent vectors in a vector sp...
 3.3.6: Let x1, x2, and x3 be linearly independent vectors in Rn and let y1...
 3.3.7: Let x1, x2, and x3 be linearly independent vectors in Rn and let y1...
 3.3.8: Determine whether the following vectors are linearly independent in...
 3.3.9: For each of the following, show that the given vectors are linearly...
 3.3.10: Determine whether the vectors cos x, 1, and sin2 (x/2) are linearly...
 3.3.11: Consider the vectors cos(x + α) and sin x in C[−π, π]. For what values ...
 3.3.12: Given the functions 2x and |x|, show that (a) these two vectors are...
 3.3.13: Prove that any finite set of vectors that contains the zero vector ...
 3.3.14: Let v1 and v2 be two vectors in a vector space V. Show that v1 and...
 3.3.15: Prove that any nonempty subset of a linearly independent set of vec...
 3.3.16: Let A be an m × n matrix. Show that if A has linearly independent colu...
 3.3.17: Let x1, ... , xk be linearly independent vectors in Rn, and let A b...
 3.3.18: Let A be a 3 × 3 matrix and let x1, x2, x3 be vectors in R3. Show tha...
 3.3.19: Let {v1, ... , vn} be a spanning set for the vector space V, and le...
 3.3.20: Let v1, v2, ... , vn be linearly independent vectors in a vector sp...
Solutions for Chapter 3.3: Linear Independence
Full solutions for Linear Algebra with Applications, 9th Edition
ISBN: 9780321962218

Affine transformation
Tv = Av + v0 = linear transformation plus shift.

Cofactor Cij.
Remove row i and column j; multiply the determinant by (−1)^(i + j).
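A minimal sketch of this definition in Python with NumPy (an illustration, not from the textbook; note the arrays are 0-indexed, so the sign factor uses the 0-based i and j):

```python
import numpy as np

def cofactor(A, i, j):
    # Delete row i and column j (0-indexed), take the determinant of the
    # remaining minor, and attach the sign (-1)**(i + j).
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# C00 = +det([[4]]) = 4, C01 = -det([[3]]) = -3
print(cofactor(A, 0, 0), cofactor(A, 0, 1))
```

Expanding det(A) along row 0 as 1·C00 + 2·C01 = 4 − 6 = −2 recovers the determinant, which is the point of the cofactor construction.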

Complete solution x = xp + xn to Ax = b.
(Particular xp) + (xn in nullspace).

Condition number
cond(A) = c(A) = ||A|| ||A^(−1)|| = σmax/σmin. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to change in the input.
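The sensitivity bound can be checked numerically; a small NumPy sketch (an illustration, with a hand-picked nearly singular matrix):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])          # nearly singular, so badly conditioned
cond = np.linalg.cond(A)               # sigma_max / sigma_min in the 2-norm

b  = np.array([2.0, 2.0001])
db = np.array([0.0, 1e-6])             # tiny perturbation of b
x  = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x    # resulting change in the solution

rel_out = np.linalg.norm(dx) / np.linalg.norm(x)
rel_in  = np.linalg.norm(db) / np.linalg.norm(b)
assert rel_out <= cond * rel_in        # the bound from the definition
```

Here cond(A) is about 4·10^4, so a relative change of ~3.5·10^-7 in b is allowed to (and does) produce a relative change near 10^-2 in x.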

Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra −eij in the i, j entry (i ≠ j). Then Eij A subtracts eij times row j of A from row i.
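A quick NumPy illustration (not from the textbook; indices are 0-based, and the multiplier 4 is chosen so elimination zeroes out row 2):

```python
import numpy as np

n, i, j, e = 3, 2, 0, 4.0       # subtract 4 * row 0 from row 2 (0-indexed)
E = np.eye(n)
E[i, j] = -e                    # identity plus the extra entry -e at (i, j)

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [4.0, 8.0, 13.0]])
result = E @ A                  # row 2 becomes [0, 0, 1]; rows 0 and 1 unchanged
print(result)
```

Multiplying by E on the left acts on rows, which is why one elimination step is exactly one elementary matrix.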

Ellipse (or ellipsoid) x T Ax = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^(−1) y||^2 = y^T (A A^T)^(−1) y = 1 displayed by eigshow; axis lengths σi.)

Fast Fourier Transform (FFT).
A factorization of the Fourier matrix Fn into ℓ = log2 n matrices Si times a permutation. Each Si needs only n/2 multiplications, so Fn x and Fn^(−1) c can be computed with nℓ/2 multiplications. Revolutionary.
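A NumPy illustration comparing the O(n²) matrix multiplication with the O(n log n) FFT (note NumPy's sign convention w = exp(−2πi/n); with the opposite convention the comparison would be against the inverse transform):

```python
import numpy as np

n = 8                                        # n = 2**l with l = 3
x = np.random.default_rng(0).standard_normal(n)

# Direct multiplication by the Fourier matrix Fn costs O(n**2):
w = np.exp(-2j * np.pi / n)
F = w ** np.outer(np.arange(n), np.arange(n))
slow = F @ x

# The FFT reaches the same vector with O(n log n) work via the factorization:
fast = np.fft.fft(x)
assert np.allclose(slow, fast)
```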

Free variable Xi.
Column i has no pivot in elimination. We can give the n − r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Full column rank r = n.
Independent columns, N(A) = {0}, no free variables.

Kronecker product (tensor product) A ® B.
Blocks aij B; eigenvalues λp(A) λq(B).
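The eigenvalue rule is easy to verify with NumPy's `kron` (an illustration with diagonal matrices so the eigenvalues can be read off):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])   # eigenvalues 2, 3
B = np.array([[5.0, 0.0], [0.0, 7.0]])   # eigenvalues 5, 7
K = np.kron(A, B)                        # 4x4 matrix of blocks a_ij * B

eigs_K   = np.sort(np.linalg.eigvals(K).real)
products = np.sort([lam * mu for lam in (2.0, 3.0) for mu in (5.0, 7.0)])
assert np.allclose(eigs_K, products)     # all products lambda_p(A) * lambda_q(B)
```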

Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^(j−1) b. Numerical methods approximate A^(−1) b by xj with residual b − A xj in this subspace. A good basis for Kj requires only multiplication by A at each step.
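A small sketch of building such a basis in NumPy (an illustration only; it forms matrix powers explicitly for clarity, whereas practical Krylov methods apply A once per step and orthogonalize as they go):

```python
import numpy as np

def krylov_basis(A, b, j):
    # Columns b, Ab, ..., A^(j-1) b, orthonormalized by QR for a good basis.
    K = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(j)])
    Q, _ = np.linalg.qr(K)
    return Q

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 0.0])
Q = krylov_basis(A, b, 2)

# For this 2x2 example, K2 already spans R^2, so the exact solution
# x = A^(-1) b lies in the Krylov subspace:
x = np.linalg.solve(A, b)
coeffs, *_ = np.linalg.lstsq(Q, x, rcond=None)
assert np.allclose(Q @ coeffs, x)
```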

Markov matrix M.
All mij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If mij > 0, the columns of M^k approach the steady state eigenvector: M s = s > 0.
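Repeated multiplication shows the convergence to the steady state; a NumPy illustration with a hand-picked 2 × 2 Markov matrix:

```python
import numpy as np

M = np.array([[0.9, 0.2],
              [0.1, 0.8]])      # all m_ij > 0, columns sum to 1

s = np.array([1.0, 0.0])        # start from any probability vector
for _ in range(200):
    s = M @ s                   # columns of M^k carry s toward the steady state

assert np.allclose(M @ s, s)    # M s = s: eigenvector for lambda = 1
assert np.isclose(s.sum(), 1.0) # still a probability vector
```

Here s converges to (2/3, 1/3); the speed of convergence is governed by the second eigenvalue (0.7 in this example).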

Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
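A NumPy check of both properties on the triangular example mentioned above (an illustration):

```python
import numpy as np

N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])      # triangular with zero diagonal, n = 3

N3 = np.linalg.matrix_power(N, 3)
assert np.allclose(N3, 0)                    # N^3 = 0: nilpotent
assert np.allclose(np.linalg.eigvals(N), 0)  # only eigenvalue is lambda = 0
```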

Pascal matrix
PS = pascal(n) = the symmetric matrix with binomial entries C(i + j − 2, i − 1). PS = PL PU; all contain Pascal's triangle with det = 1 (see Pascal in the index).
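`pascal(n)` is MATLAB; the same matrices can be built directly from the binomial formula in Python. A sketch (an illustration; it uses PU = PL^T, so PS = PL PL^T):

```python
import numpy as np
from math import comb

def pascal_sym(n):
    # Symmetric Pascal matrix: entry (i, j) is C(i + j - 2, i - 1), 1-indexed.
    return np.array([[comb(i + j - 2, i - 1) for j in range(1, n + 1)]
                     for i in range(1, n + 1)], dtype=float)

def pascal_lower(n):
    # Lower-triangular Pascal matrix: entry (i, j) is C(i - 1, j - 1).
    return np.array([[comb(i - 1, j - 1) for j in range(1, n + 1)]
                     for i in range(1, n + 1)], dtype=float)

PS, PL = pascal_sym(4), pascal_lower(4)
assert np.allclose(PS, PL @ PL.T)           # PS = PL PU with PU = PL^T
assert np.isclose(np.linalg.det(PS), 1.0)   # det = 1 (PL is unit triangular)
```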

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S, then P = A (A^T A)^(−1) A^T.
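All of these properties can be verified at once in NumPy (an illustration with a hand-picked 3 × 2 matrix A whose columns span a plane S in R^3):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                 # columns: a basis for the subspace S
P = A @ np.linalg.inv(A.T @ A) @ A.T       # P = A (A^T A)^(-1) A^T

b = np.array([6.0, 0.0, 0.0])
p = P @ b                                  # closest point to b in S
e = b - p                                  # error vector

assert np.allclose(P @ P, P)               # P^2 = P
assert np.allclose(P, P.T)                 # P = P^T
assert np.allclose(A.T @ e, 0)             # e is perpendicular to S
```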

Rank one matrix A = u v^T ≠ 0.
Column and row spaces = lines cu and cv.

Rank r(A)
= number of pivots = dimension of column space = dimension of row space.

Row picture of Ax = b.
Each equation gives a plane in Rn; the planes intersect at x.

Similar matrices A and B.
Every B = M^(−1) A M has the same eigenvalues as A.