2.1.1: Let A = [3 2 4; 1 2 3; 2 3 2]. (a) Find the values of det(M21), det(M22), an...
2.1.2: Use determinants to determine whether the following 2×2 matrices ar...
2.1.3: Evaluate the following determinants: (a) [3 5; 2 3] (b) [5 2; 8 4] (c) 31...
2.1.4: Evaluate the following determinants by inspection: (a) [3 5; 2 4] (b) ...
 2.1.5: Evaluate the following determinant. Write your answer as a polynomi...
2.1.6: Find all values of λ for which the following determinant will equal 0...
2.1.7: Let A be a 3×3 matrix with a11 = 0 and a21 = 0. Show that A is row ...
 2.1.8: Write out the details of the proof of Theorem 2.1.3.
2.1.9: Prove that if a row or a column of an n×n matrix A consists entirel...
2.1.10: Use mathematical induction to prove that if A is an (n + 1) × (n + 1)...
2.1.11: Let A and B be 2×2 matrices. (a) Does det(A + B) = det(A) + det(B)?...
2.1.12: Let A and B be 2×2 matrices and let C = [a11 a12; b21 b22], D = [b11 b...
 2.1.13: Let A be a symmetric tridiagonal matrix (i.e., A is symmetric and a...
Solutions for Chapter 2.1: The Determinant of a Matrix
Full solutions for Linear Algebra with Applications  9th Edition
ISBN: 9780321962218

Adjacency matrix of a graph.
Square matrix with aij = 1 when there is an edge from node i to node j; otherwise aij = 0. A = A^T when edges go both ways (undirected).
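A minimal sketch (using NumPy; the 3-node graph and its edge list are an invented example) of building the adjacency matrix of an undirected graph:

```python
import numpy as np

# Undirected graph on 3 nodes with edges 0-1 and 1-2 (invented example)
edges = [(0, 1), (1, 2)]
A = np.zeros((3, 3), dtype=int)
for i, j in edges:
    A[i, j] = 1   # edge from node i to node j
    A[j, i] = 1   # undirected: edges go both ways, so A = A^T

print(A)
```

Because every edge is entered in both directions, the resulting matrix is symmetric, as the glossary entry states.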

Cayley-Hamilton Theorem.
p(λ) = det(A - λI) has p(A) = zero matrix.
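A numerical sketch (NumPy, with an arbitrary 2×2 example matrix) of the theorem: for 2×2 matrices the characteristic polynomial is p(λ) = λ^2 - trace(A)λ + det(A), so substituting A for λ should give the zero matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # arbitrary example matrix
tr, d = np.trace(A), np.linalg.det(A)

# Cayley-Hamilton for 2x2: A^2 - trace(A)*A + det(A)*I = zero matrix
P = A @ A - tr * A + d * np.eye(2)
print(P)
```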

Companion matrix.
Put c1, ..., cn in row n and put n - 1 ones just above the main diagonal. Then det(A - λI) = ±(c1 + c2λ + c3λ^2 + ... + cnλ^(n-1) - λ^n).

Determinant |A| = det(A).
Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A||B| and |A^T| = |A|.
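These defining properties can be checked numerically; a small sketch (NumPy, random 3×3 example matrices) verifying det I = 1, the product rule, and the sign reversal under a row exchange:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# det(I) = 1
assert np.isclose(np.linalg.det(np.eye(3)), 1.0)

# |AB| = |A||B|
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))

# Exchanging two rows reverses the sign of the determinant
A_swapped = A[[1, 0, 2]]    # swap rows 0 and 1
assert np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A))
```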

Dimension of vector space
dim(V) = number of vectors in any basis for V.

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

Ellipse (or ellipsoid) x^T Ax = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^-1 y||^2 = y^T (A A^T)^-1 y = 1 displayed by eigshow; axis lengths σi.)

Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^-1].
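A self-contained sketch of this procedure in NumPy (the function name and the 2×2 test matrix are invented for illustration): augment A with I, eliminate above and below each pivot, and read the inverse off the right block.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row operations on [A I] until it reads [I A^-1]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])     # augmented block [A I]
    for col in range(n):
        pivot = np.argmax(np.abs(M[col:, col])) + col   # partial pivoting
        M[[col, pivot]] = M[[pivot, col]]           # swap pivot row up
        M[col] /= M[col, col]                       # scale pivot to 1
        for row in range(n):
            if row != col:                          # clear rest of the column
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                                 # right block is A^-1

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(gauss_jordan_inverse(A))
```

For a well-conditioned nonsingular A this agrees with np.linalg.inv(A).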

Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and -).

Matrix multiplication AB.
The i, j entry of AB is (row i of A)·(column j of B) = Σ aik bkj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k of A)(row k of B). All these equivalent definitions come from the rule that (AB)x equals A(Bx).
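A sketch (NumPy, random example matrices of invented sizes) confirming that the entrywise, column-by-column, and columns-times-rows views all produce the same product:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
AB = A @ B

# Entry (i, j): (row i of A) . (column j of B) = sum_k a_ik b_kj
assert np.isclose(AB[1, 0], A[1, :] @ B[:, 0])

# By columns: column j of AB is A times column j of B
assert np.allclose(AB[:, 1], A @ B[:, 1])

# Columns times rows: AB = sum over k of (column k of A)(row k of B)
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(4))
assert np.allclose(AB, outer_sum)
```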

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.

Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T Ax > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
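A sketch checking these equivalent conditions on an invented 2×2 symmetric matrix: the eigenvalues are positive, Cholesky (A = LL^T, a close relative of the LDL^T factorization above) succeeds, and x^T Ax > 0 for sampled nonzero vectors x.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                  # symmetric example matrix

# Positive eigenvalues ...
assert (np.linalg.eigvalsh(A) > 0).all()

# ... so the Cholesky factorization A = L L^T succeeds
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)

# ... and x^T A x > 0 for a sample of (almost surely nonzero) vectors x
rng = np.random.default_rng(2)
for x in rng.standard_normal((100, 2)):
    assert x @ A @ x > 0
```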

Saddle point of f(x1, ..., xn).
A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂xi∂xj = Hessian matrix) is indefinite.

Schur complement S = D - C A^-1 B.
Appears in block elimination on [A B; C D].

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
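A minimal illustration of the "minimum cost at a corner" idea (pure NumPy; the two-variable problem and its corner list are an invented example, not a simplex implementation): the feasible set {x1 + x2 = 1, x ≥ 0} is a segment, so it suffices to compare the cost at its two corners.

```python
import numpy as np

# Invented example: minimize c.x subject to x1 + x2 = 1, x >= 0.
# The feasible set is a line segment with corners (1,0) and (0,1).
c = np.array([1.0, 2.0])
corners = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

costs = [c @ x for x in corners]
x_star = corners[int(np.argmin(costs))]   # minimum cost occurs at a corner
print(x_star, min(costs))
```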

Special solutions to As = 0.
One free variable is si = 1, other free variables = 0.

Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.
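A sketch (NumPy, invented 2×2 symmetric example) of the factorization: np.linalg.eigh returns real eigenvalues and orthonormal eigenvector columns, which reassemble into A = QΛQ^T.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # real symmetric example matrix

lam, Q = np.linalg.eigh(A)            # real lambda's, orthonormal q's
print(lam)

# A = Q Lambda Q^T, with Q orthogonal
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)
assert np.allclose(Q.T @ Q, np.eye(2))
```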

Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms ||A + B|| ≤ ||A|| + ||B||.

Wavelets wjk(t).
Stretch and shift the time axis to create wjk(t) = w00(2^j t - k).
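A small sketch of the stretch-and-shift rule, assuming the Haar mother wavelet as w00 (the glossary does not fix a particular w00, so this choice is an illustrative assumption):

```python
def w00(t):
    """Assumed Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    if 0 <= t < 0.5:
        return 1
    if 0.5 <= t < 1:
        return -1
    return 0

def w(j, k, t):
    """w_jk(t) = w00(2^j t - k): compress the time axis by 2^j, shift by k."""
    return w00(2**j * t - k)

# w_11 lives on [1/2, 1): at t = 0.6, 2t - 1 = 0.2 lies in [0, 1/2)
assert w(1, 1, 0.6) == 1
assert w(1, 1, 0.8) == -1   # 2*0.8 - 1 = 0.6 lies in [1/2, 1)
assert w(1, 1, 0.3) == 0    # 2*0.3 - 1 = -0.4 lies outside [0, 1)
```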