# Solutions for Chapter 7.4: The Determinant of a Square Matrix

## Full solutions for College Algebra | 8th Edition

ISBN: 9781439048696

Chapter 7.4: The Determinant of a Square Matrix includes 62 full step-by-step solutions. Since all 62 problems in this chapter have been answered, more than 30,924 students have viewed its solutions. This expansive textbook survival guide covers this chapter and its solutions, and was created for the textbook College Algebra, 8th edition (ISBN: 9781439048696).

## Key Math Terms and Definitions Covered in This Textbook
• Characteristic equation det(A − λI) = 0.

The n roots are the eigenvalues of A.

• Cholesky factorization

A = CᵀC = (L√D)(L√D)ᵀ for positive definite A.

• Complete solution x = x_p + x_n to Ax = b.

(Particular x_p) + (x_n in nullspace).

• Cramer's Rule for Ax = b.

B_j has b replacing column j of A; x_j = det(B_j) / det(A).
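Since this chapter is about determinants, the rule is easy to check numerically. The sketch below is a minimal pure-Python illustration (not from the textbook); the helper names `det` and `cramer` are hypothetical, and `det` uses cofactor expansion, which is fine for small matrices only.

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row (small matrices only)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve Ax = b via Cramer's Rule: x_j = det(B_j) / det(A)."""
    d = det(A)
    if d == 0:
        raise ValueError("det(A) = 0: Cramer's Rule does not apply")
    x = []
    for j in range(len(A)):
        # B_j: replace column j of A by the right-hand side b
        Bj = [row[:j] + [bi] + row[j + 1:] for row, bi in zip(A, b)]
        x.append(Fraction(det(Bj), d))
    return x

# Example system: 2x + y = 3, x + 3y = 5
print(cramer([[2, 1], [1, 3]], [3, 5]))  # → [Fraction(4, 5), Fraction(7, 5)]
```

Using exact `Fraction` arithmetic avoids floating-point round-off, so each x_j comes out exactly as the ratio of two determinants.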

• Cross product u × v in R³:

Vector perpendicular to u and v, with length ‖u‖ ‖v‖ |sin θ| = area of parallelogram; u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
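Expanding that symbolic determinant along its top row gives the three components directly. A minimal sketch (hypothetical helper names, not from the textbook):

```python
def cross(u, v):
    """u × v: expand the symbolic determinant [i j k; u1 u2 u3; v1 v2 v3] along the top row."""
    return [u[1] * v[2] - u[2] * v[1],   # coefficient of i
            u[2] * v[0] - u[0] * v[2],   # coefficient of j
            u[0] * v[1] - u[1] * v[0]]   # coefficient of k

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

w = cross([1, 2, 3], [4, 5, 6])
print(w)                    # → [-3, 6, -3]
print(dot([1, 2, 3], w))    # → 0 (perpendicular to u)
print(dot([4, 5, 6], w))    # → 0 (perpendicular to v)
```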

• Echelon matrix U.

The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

• Fourier matrix F.

Entries F_jk = e^(2πijk/n) give orthogonal columns: F̄ᵀF = nI. Then y = Fc is the (inverse) Discrete Fourier Transform y_j = Σ c_k e^(2πijk/n).
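The orthogonality claim F̄ᵀF = nI can be verified directly: the conjugate inner product of two distinct columns sums a full set of nth roots of unity, which cancels to zero. A small numerical check (a sketch with a hypothetical `fourier_matrix` helper, not from the textbook):

```python
import cmath

def fourier_matrix(n):
    """Fourier matrix with entries F_jk = e^(2*pi*i*j*k/n), rows j, columns k."""
    return [[cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)]
            for j in range(n)]

n = 4
F = fourier_matrix(n)

# gram[j][k] = conjugate(column j) . column k; should equal n*I
gram = [[sum(F[m][j].conjugate() * F[m][k] for m in range(n))
         for k in range(n)] for j in range(n)]
```

For n = 4, the diagonal entries of `gram` come out to 4 and the off-diagonal entries vanish (up to floating-point round-off), confirming F̄ᵀF = nI.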

• Free columns of A.

Columns without pivots; these are combinations of earlier columns.

• Gram-Schmidt orthogonalization A = QR.

Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
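The classical procedure subtracts from each column its projections onto the q's already found; the projection coefficients fill in R above the diagonal, which is why R comes out upper triangular. A minimal sketch, with columns stored as plain Python lists (`gram_schmidt` is a hypothetical name, not the textbook's notation):

```python
import math

def gram_schmidt(cols):
    """Classical Gram-Schmidt: independent columns -> orthonormal Q, upper-triangular R."""
    Q = []                                   # orthonormal columns q_1 .. q_n
    n = len(cols)
    R = [[0.0] * n for _ in range(n)]        # upper triangular by construction
    for j, a in enumerate(cols):
        v = a[:]
        for i, q in enumerate(Q):
            R[i][j] = sum(qk * ak for qk, ak in zip(q, a))   # projection coefficient
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]  # subtract the projection
        R[j][j] = math.sqrt(sum(vk * vk for vk in v))        # length of what remains
        Q.append([vk / R[j][j] for vk in v])                 # normalize
    return Q, R

Q, R = gram_schmidt([[1.0, 0.0], [1.0, 1.0]])
# Q → [[1.0, 0.0], [0.0, 1.0]], R → [[1.0, 1.0], [0.0, 1.0]]
```

Note R[j][j] is a square root of a sum of squares, so diag(R) > 0 holds automatically, matching the convention above.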

• Kirchhoff's Laws.

Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

• Linearly dependent v_1, ..., v_n.

A combination other than all c_i = 0 gives Σ c_i v_i = 0.

• Normal matrix.

If NNᵀ = NᵀN, then N has orthonormal (complex) eigenvectors.

• Particular solution x p.

Any solution to Ax = b; often x_p has free variables = 0.

• Plane (or hyperplane) in Rn.

Vectors x with aᵀx = 0. Plane is perpendicular to a ≠ 0.

• Schwarz inequality

|v·w| ≤ ‖v‖ ‖w‖. Then |vᵀAw|² ≤ (vᵀAv)(wᵀAw) for positive definite A.

• Similar matrices A and B.

Every B = M⁻¹AM has the same eigenvalues as A.

• Simplex method for linear programming.

The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

• Singular Value Decomposition (SVD).

A = UΣVᵀ = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(Aᵀ), with Av_i = σ_i u_i and singular value σ_i > 0. The last columns are orthonormal bases of the nullspaces.

• Skew-symmetric matrix K.

The transpose is −K, since K_ij = −K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.

• Vector space V.

Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.
