 3.1.1E: Compute the determinants in Exercises 1–8 using a cofactor expansio...
 3.1.2E: Compute the determinants using a cofactor expansion across the firs...
 3.1.3E: Compute the determinants using a cofactor expansion across the firs...
 3.1.4E: Compute the determinants using a cofactor expansion across the firs...
 3.1.5E: Compute the determinants using a cofactor expansion across the firs...
 3.1.6E: Compute the determinants using a cofactor expansion across the firs...
 3.1.7E: Compute the determinants in Exercises 1–8 using a cofactor expansio...
 3.1.8E: Compute the determinants using a cofactor expansion across the firs...
3.1.9E: Compute the determinants in Exercises 9–14 by cofactor expansions. At each step, c...
3.1.10E: Compute the determinants in Exercises 9–14 by cofactor expansions. At each step, c...
3.1.11E: Compute the determinants in Exercises 9–14 by cofactor expansions. At each step, c...
3.1.12E: Compute the determinants in Exercises 9–14 by cofactor expansions. At each step, c...
 3.1.13E: Compute the determinants in Exercises 9–14 by cofactor expansions. ...
 3.1.14E: Compute the determinants in Exercises 9–14 by cofactor expansions. ...
 3.1.15E: The expansion of a 3 × 3 determinant can be remembered by the follo...
 3.1.16E: The expansion of a 3 × 3 determinant can be remembered by the follo...
 3.1.17E: The expansion of a 3 × 3 determinant can be remembered by the follo...
 3.1.18E: The expansion of a 3 × 3 determinant can be remembered by the follo...
 3.1.19E: In Exercises 19–24, explore the effect of an elementary row operati...
3.1.20E: In Exercises 19–24, explore the effect of an elementary row operation on th...
 3.1.21E: In Exercises 19–24, explore the effect of an elementary row operati...
3.1.22E: In Exercises 19–24, explore the effect of an elementary row operation on th...
3.1.23E: In Exercises 19–24, explore the effect of an elementary row operation on th...
3.1.24E: In Exercises 19–24, explore the effect of an elementary row operation on th...
 3.1.25E: Compute the determinants of the elementary matrices given in Exerci...
 3.1.26E: Compute the determinants of the elementary matrices given in Exerci...
 3.1.27E: Compute the determinants of the elementary matrices given in Exerci...
 3.1.28E: Compute the determinants of the elementary matrices given in Exerci...
 3.1.29E: Compute the determinants of the elementary matrices given in Exerci...
 3.1.30E: Compute the determinants of the elementary matrices given in Exerci...
 3.1.31E: Use Exercises 25–28 to answer the questions in Exercises 31 and 32....
 3.1.32E: Use Exercises 25–28 to answer the questions in Exercises 31 and 32....
 3.1.33E: In Exercises 33–36, verify that det EA = (det E) (det A), where E i...
 3.1.34E: In Exercises 33–36, verify that det EA = (det E) (det A), where E i...
 3.1.35E: In Exercises 33–36, verify that det EA = (det E) (det A), where E i...
 3.1.36E: In Exercises 33–36, verify that det EA = (det E) (det A), where E i...
3.1.37E: Let A be the given matrix. Write 5A. Is det 5A = 5 det A?
3.1.38E: Let A be the given matrix and let k be a scalar. Find a formula that relates det kA to k ...
 3.1.39E: a. An n × n determinant is defined by determinants of submatrices.b...
 3.1.40E: a. The cofactor expansion of det A down a column is the negative of...
 3.1.41E: Compute the area of the parallelogram determined by u, v, u + v, an...
 3.1.42E: where a, b, c are positive (for simplicity). Compute the area of th...
 3.1.46E: [M] How is det A–1 related to det A? Experiment with random n × n i...
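Exercise 3.1.46 asks for experimentation. A minimal sketch of such an experiment (2 × 2 integer matrices with exact fraction arithmetic; the helper names are illustrative, not from the text) suggests the answer det A⁻¹ = 1/(det A):

```python
from fractions import Fraction
import random

def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inverse2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    d = Fraction(det2(M))
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

random.seed(0)
for _ in range(100):
    A = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    if det2(A) == 0:
        continue  # skip singular matrices: no inverse to test
    # Exact check: det(A^-1) equals 1 / det(A)
    assert det2(inverse2(A)) == Fraction(1, det2(A))
print("det(A^-1) == 1/det(A) held for every invertible sample")
```

Exact `Fraction` arithmetic avoids the floating-point roundoff that would otherwise blur the comparison for nearly singular matrices.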
Solutions for Chapter 3.1: Linear Algebra and Its Applications 5th Edition
ISBN: 9780321982384
Solutions for Chapter 3.1
Chapter 3.1 of Linear Algebra and Its Applications (5th edition) includes 43 full step-by-step solutions.

Big formula for n by n determinants.
det(A) is a sum of n! terms. Each term multiplies one entry from each row and column of A: the rows in order 1, ..., n and the columns in the order given by a permutation P. Each of the n! permutations has a + or - sign.
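As a sketch (not part of the glossary), the big formula can be coded directly: sum over all n! permutations, with each term's sign given by the permutation's parity.

```python
from itertools import permutations

def det_big_formula(A):
    """Determinant via the big formula: a sum of n! signed terms.

    Each term multiplies one entry per row (rows in order 0..n-1,
    columns given by the permutation p); the sign is p's parity.
    """
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        # Parity via inversion count: even -> +1, odd -> -1.
        inversions = sum(1 for i in range(n)
                           for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inversions % 2 else 1
        for row, col in enumerate(p):
            term *= A[row][col]
        total += term
    return total

print(det_big_formula([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```

This is O(n · n!), so it is only a teaching device; elimination computes the same number in O(n³).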

Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).

Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).

Condition number
cond(A) = c(A) = ||A|| ||A^-1|| = sigma_max/sigma_min. In Ax = b, the relative change ||dx||/||x|| is less than cond(A) times the relative change ||db||/||b||. Condition numbers measure the sensitivity of the output to changes in the input.
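A small pure-Python check of this bound, using a diagonal matrix so that the singular values are simply the absolute diagonal entries (an illustrative sketch; the variable names are not from the glossary):

```python
import math

# For a diagonal matrix, singular values are the |diagonal entries|,
# so cond(A) = max|d_i| / min|d_i|, and Ax = b solves as x_i = b_i / d_i.
d = [1.0, 0.001]               # cond(A) = 1000: ill-conditioned
b = [1.0, 1.0]
db = [0.0, 0.001]              # small perturbation of the right side b

norm = lambda v: math.sqrt(sum(t * t for t in v))
x  = [bi / di for bi, di in zip(b, d)]    # exact solution
dx = [dbi / di for dbi, di in zip(db, d)] # change in solution

cond = max(abs(t) for t in d) / min(abs(t) for t in d)
lhs = norm(dx) / norm(x)                  # relative change in output
rhs = cond * (norm(db) / norm(b))         # bound from the glossary entry
print(lhs <= rhs)  # True
```

Here a 0.1% change in b produces a much larger relative change in x, but still within the cond(A) factor.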

Ellipse (or ellipsoid) x^T A x = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/sqrt(lambda). (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^-1 y||^2 = y^T (A A^T)^-1 y = 1 displayed by eigshow; the axis lengths are the singular values sigma_i.)

Free variable x_i.
Column i has no pivot in elimination. We can give the n - r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Full column rank r = n.
Independent columns, N(A) = {0}, no free variables.

Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n - 1)/2 edges between nodes. A tree has only n - 1 edges and no closed loops.

|A^-1| = 1/|A| and |A^T| = |A|.
The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n - 1, and volume of box = |det(A)|.

Linear combination cv + dw or sum of c_j v_j.
Vector addition and scalar multiplication.

Multiplicities AM and GM.
The algebraic multiplicity AM of lambda is the number of times lambda appears as a root of det(A - lambda*I) = 0. The geometric multiplicity GM is the number of independent eigenvectors for lambda (= dimension of the eigenspace).

Nullspace N(A)
= All solutions to Ax = 0. Dimension n - r = (# columns) - rank.

Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^-1. Preserves length and angles: ||Qx|| = ||x|| and (Qx)^T (Qy) = x^T y. All |lambda| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
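A short sketch with a 2 × 2 rotation matrix (illustrative Python, not from the glossary) checks both properties: length is preserved and the columns are orthonormal.

```python
import math

def rotation(theta):
    """2x2 rotation matrix: orthogonal, so Q^T = Q^-1."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

norm = lambda v: math.sqrt(sum(t * t for t in v))

Q = rotation(0.7)
x = [3.0, 4.0]
Qx = matvec(Q, x)

# Length is preserved: ||Qx|| = ||x|| (= 5 here)
print(math.isclose(norm(Qx), norm(x)))  # True

# Columns are orthonormal: their dot product is 0, up to rounding
col0 = [Q[0][0], Q[1][0]]
col1 = [Q[0][1], Q[1][1]]
dot = sum(a * b for a, b in zip(col0, col1))
print(math.isclose(dot, 0.0, abs_tol=1e-12))  # True
```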

Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i != j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n, then Q^T = Q^-1 and q_1, ..., q_n is an orthonormal basis for R^n: every v = sum of (v^T q_j) q_j.

Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S-perp. If the columns of A are a basis for S, then P = A (A^T A)^-1 A^T.
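These properties can be checked exactly for the simplest case, projection onto the line through a single column a, where the formula reduces to P = a a^T / (a^T a) (an illustrative sketch, not from the glossary):

```python
from fractions import Fraction

# Projection onto the line through a: P = a a^T / (a^T a).
a = [Fraction(1), Fraction(2), Fraction(2)]
ata = sum(t * t for t in a)                       # a^T a = 9
P = [[ai * aj / ata for aj in a] for ai in a]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# P^2 = P (projecting twice changes nothing) and P = P^T (symmetric)
assert matmul(P, P) == P
assert all(P[i][j] == P[j][i] for i in range(3) for j in range(3))

# Error e = b - Pb is perpendicular to the subspace (here: to a)
b = [Fraction(3), Fraction(0), Fraction(0)]
Pb = [sum(P[i][j] * b[j] for j in range(3)) for i in range(3)]
e = [bi - pbi for bi, pbi in zip(b, Pb)]
assert sum(ei * ai for ei, ai in zip(e, a)) == 0
print("P^2 = P, P = P^T, and e perpendicular to a: all verified")
```

With `Fraction` the identities hold exactly rather than to floating-point tolerance.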

Pseudoinverse A+ (Moore–Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A+) = N(A^T). A+ A and A A+ are the projection matrices onto the row space and column space. Rank(A+) = rank(A).

Semidefinite matrix A.
(Positive) semidefinite: all x^T A x >= 0, all lambda >= 0; A = any R^T R.

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V intersect W = {0}.

Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t - k).
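Taking the Haar function as the mother wavelet w_00 (an assumption for illustration; the glossary does not fix a particular w_00), the stretch-and-shift recipe becomes a two-line sketch:

```python
def haar_mother(t):
    """Haar mother wavelet w_00: +1 on [0, 1/2), -1 on [1/2, 1), else 0."""
    if 0 <= t < 0.5:
        return 1
    if 0.5 <= t < 1:
        return -1
    return 0

def w(j, k, t):
    """w_jk(t) = w_00(2^j * t - k): compress by 2^j, shift by k."""
    return haar_mother(2**j * t - k)

# w_10 lives on [0, 1/2), w_11 on [1/2, 1): shifted, compressed copies of w_00
print(w(0, 0, 0.25), w(1, 0, 0.25), w(1, 1, 0.6))
```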