 4.SE.1E: Mark each statement True or False. Justify each answer. (If true, c...
 4.SE.2E: Find a basis for the set of all 2 vectors of the form
 4.SE.3E: and Find an implicit description of W ; that is, find a set of one ...
 4.SE.4E: Explain what is wrong with the following discussion: Let Then {f, g...
 4.SE.5E: Consider the polynomials Use the method described in the proof of t...
 4.SE.6E: Suppose p1, p2, p3, p4 are specific polynomials that span a two-dimen...
 4.SE.7E: What would you have to know about the solution set of a homogeneous...
 4.SE.8E: Let H be an n-dimensional subspace of an n-dimensional vector space...
 4.SE.9E: Let be a linear transformation.a. What is the dimension of the rang...
 4.SE.10E: Let S be a maximal linearly independent subset of a vector space V....
 4.SE.11E: Let S be a finite minimal spanning set of a vector space V . That i...
 4.SE.12E: Exercises 12–17 develop properties of rank that are sometimes neede...
 4.SE.13E: Exercises 12–17 develop properties of rank that are sometimes neede...
 4.SE.14E: Exercises 12–17 develop properties of rank that are sometimes neede...
 4.SE.15E: Exercises 12–17 develop properties of rank that are sometimes neede...
 4.SE.16E: Exercises 12–17 develop properties of rank that are sometimes neede...
 4.SE.17E: Exercises 12–17 develop properties of rank that are sometimes neede...
 4.SE.18E: Suppose A is a 4 × 4 matrix and B is a 4 × 2 matrix, and let repres...
 4.SE.19E: Determine if the matrix pairs in Exercises 19–22 are controllable.
 4.SE.20E: Determine if the matrix pairs in Exercises 19–22 are controllable.
 4.SE.21E: Determine if the matrix pairs in Exercises 19–22 are controllable.
 4.SE.22E: Determine if the matrix pairs in Exercises 19–22 are controllable.
Solutions for Chapter 4.SE: Linear Algebra and Its Applications 5th Edition
ISBN: 9780321982384
Chapter 4.SE includes 22 full step-by-step solutions.

Cofactor C_ij.
Remove row i and column j; multiply that smaller determinant by (-1)^(i+j).
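This sign rule can be checked numerically. A minimal NumPy sketch (the function name `cofactor` is my own) that verifies the cofactor expansion of a determinant along row 0:

```python
import numpy as np

def cofactor(A, i, j):
    """Cofactor C_ij: delete row i and column j, take the determinant
    of the remaining minor, and multiply by the sign (-1)^(i+j)."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[2.0, 1.0],
              [3.0, 4.0]])
# Cofactor expansion along row 0: det A = a00*C00 + a01*C01
det_by_cofactors = A[0, 0] * cofactor(A, 0, 0) + A[0, 1] * cofactor(A, 0, 1)
```

Here det A = 2*4 - 1*3 = 5, and the expansion reproduces it.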

Complete solution x = x_p + x_n to Ax = b.
(Particular solution x_p) + (x_n in the nullspace).

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2) x^T A x - x^T b over growing Krylov subspaces.
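A minimal sketch of the standard conjugate gradient iteration in NumPy (the function name and the small test matrix are my own; this is the textbook recurrence, not an optimized solver):

```python
import numpy as np

def conjugate_gradient(A, b, steps=50, tol=1e-10):
    """Minimize (1/2) x^T A x - x^T b for symmetric positive definite A.
    Each iterate x_j lies in the growing Krylov subspace K_j(A, b)."""
    x = np.zeros_like(b)
    r = b - A @ x              # residual = negative gradient of the quadratic
    p = r.copy()               # first search direction
    for _ in range(steps):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)     # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # new direction, A-conjugate to old ones
        r = r_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic the method reaches the solution in at most n steps, since K_n(A, b) is the whole space.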

Dot product = Inner product x^T y = x_1 y_1 + ... + x_n y_n.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)_ij = (row i of A) · (column j of B).

Free variable x_i.
Column i has no pivot in elimination. We can give the n - r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Inverse matrix A^(-1).
Square matrix with A^(-1) A = I and A A^(-1) = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^(-1) A^(-1) and (A^(-1))^T. Cofactor formula: (A^(-1))_ij = C_ji / det A.
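The cofactor formula, with its transpose C_ji (not C_ij), can be verified directly. A minimal NumPy sketch (the function name is my own; this is O(n!)-style arithmetic, for illustration only, not how inverses are computed in practice):

```python
import numpy as np

def inverse_by_cofactors(A):
    """Cofactor formula: (A^-1)_ij = C_ji / det A. Note the transpose:
    the matrix of cofactors is transposed (the adjugate) before dividing."""
    n = A.shape[0]
    d = np.linalg.det(A)
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / d          # adjugate divided by the determinant

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])   # det A = 1, so A is invertible
A_inv = inverse_by_cofactors(A)
```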

Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^(j-1) b. Numerical methods approximate A^(-1) b by x_j with residual b - A x_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
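A minimal sketch of building an orthonormal basis for K_j(A, b) by Gram-Schmidt, using exactly one multiplication by A per step (the function name and example matrix are my own):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Orthonormal basis for K_j(A, b) = span{b, Ab, ..., A^(j-1) b},
    built Arnoldi-style: one multiplication by A per step, then
    Gram-Schmidt against the earlier basis vectors."""
    Q = [b / np.linalg.norm(b)]
    for _ in range(j - 1):
        v = A @ Q[-1]                    # the only use of A in this step
        for q in Q:                      # orthogonalize against earlier q's
            v = v - (q @ v) * q
        Q.append(v / np.linalg.norm(v))
    return np.column_stack(Q)

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 0.0, 0.0])
Q = krylov_basis(A, b, 3)
```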

Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b - A x̂ is orthogonal to all columns of A.
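Both facts can be checked in a few lines of NumPy (the 3-point fitting example is my own choice):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Normal equations A^T A x_hat = A^T b give the least squares solution.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# The error e = b - A x_hat is orthogonal to every column of A.
e = b - A @ x_hat
```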

Multiplication Ax
= x_1 (column 1) + ... + x_n (column n) = combination of columns.
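The column picture of Ax can be verified directly (example matrix is my own):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([10.0, -1.0])

# Ax is the same combination of columns: x_1*(column 1) + x_2*(column 2).
by_columns = x[0] * A[:, 0] + x[1] * A[:, 1]
```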

Nilpotent matrix N.
Some power of N is the zero matrix: N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
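A quick NumPy check on a strictly triangular example (my own):

```python
import numpy as np

# Strictly upper triangular: zero diagonal, so every eigenvalue is 0.
N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

N3 = np.linalg.matrix_power(N, 3)     # N^3 = 0 for this 3x3 example
eigenvalues = np.linalg.eigvals(N)    # all zero (repeated 3 times)
```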

Norm ||A||.
The ℓ^2 norm of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x|| and ||AB|| ≤ ||A|| ||B|| and ||A + B|| ≤ ||A|| + ||B||. Frobenius norm: ||A||_F^2 = Σ_i Σ_j a_ij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |a_ij|.
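NumPy computes all four norms through one function, and the inequalities can be spot-checked on small matrices (examples are my own):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

l2 = np.linalg.norm(A, 2)            # sigma_max, the largest singular value
frob = np.linalg.norm(A, 'fro')      # sqrt of the sum of a_ij^2
l1 = np.linalg.norm(A, 1)            # largest absolute column sum
linf = np.linalg.norm(A, np.inf)     # largest absolute row sum

# Spot-check the product and sum inequalities for the l2 norm.
ok_product = np.linalg.norm(A @ B, 2) <= l2 * np.linalg.norm(B, 2) + 1e-12
ok_sum = np.linalg.norm(A + B, 2) <= l2 + np.linalg.norm(B, 2) + 1e-12
```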

Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^(-1) and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
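A NumPy sketch of the expansion v = Σ (v^T q_j) q_j, taking the orthonormal columns from a QR factorization of an invertible matrix (my own example):

```python
import numpy as np

# Build orthonormal columns by a QR factorization (m = n = 3 here).
Q, _ = np.linalg.qr(np.array([[1.0, 1.0, 0.0],
                              [1.0, 0.0, 1.0],
                              [0.0, 1.0, 1.0]]))
assert np.allclose(Q.T @ Q, np.eye(3))   # orthonormal columns: Q^T Q = I

# Since m = n, the q_j are a basis: every v = sum of (v^T q_j) q_j.
v = np.array([3.0, -1.0, 2.0])
expansion = sum((v @ Q[:, j]) * Q[:, j] for j in range(3))
```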

Pseudoinverse A^+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and A A^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).
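The projection and rank properties can be checked with NumPy's `pinv` on a rank-one example (my own choice; a projection matrix P satisfies P^2 = P):

```python
import numpy as np

# A rank-one matrix: column space and row space are both lines.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
A_plus = np.linalg.pinv(A)

P_row = A_plus @ A      # projection onto the row space of A
P_col = A @ A_plus      # projection onto the column space of A
```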

Rank one matrix A = u v^T ≠ 0.
Column and row spaces = lines cu and cv.

Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.

Standard basis for Rn.
Columns of the n by n identity matrix (written i, j, k in R^3).

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.

Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
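Both trace identities are easy to spot-check in NumPy (example matrices are my own):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 5.0],
              [6.0, 7.0]])

trace_A = np.trace(A)                       # sum of diagonal entries: 1 + 4
eig_sum = np.linalg.eigvals(A).sum().real   # equals the sum of eigenvalues
tr_AB = np.trace(A @ B)                     # Tr AB = Tr BA even if AB != BA
tr_BA = np.trace(B @ A)
```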

Volume of box.
The rows (or the columns) of A generate a box with volume |det A|.
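For a diagonal A the box is a rectangular solid, so the formula is easy to confirm (example is my own):

```python
import numpy as np

# The rows of A span a 2 x 3 x 4 rectangular box, so the volume is 24.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 4.0]])
volume = abs(np.linalg.det(A))
```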