- 6.4.1: Use Definition 6.15 to compute the determinants of the following mat...
- 6.4.2: Use Definition 6.15 to compute the determinants of the following mat...
- 6.4.3: Repeat Exercise 1 using the method of Example 2.
- 6.4.4: Repeat Exercise 2 using the method of Example 2.
- 6.4.5: Find all values of a that make the following matrix singular. A = 1 ...
- 6.4.6: Find all values of a that make the following matrix singular. A = 1 ...
- 6.4.7: Find all values of a so that the following linear system has no solu...
- 6.4.8: Find all values of a so that the following linear system has an infi...
- 6.4.9: The rotation matrix R = [cos θ  −sin θ; sin θ  cos θ] applied to the vect...
- 6.4.10: The rotation matrix for a 3-dimensional counterclockwise rotation t...
- 6.4.11: The chemical formula x1[Ca(OH)2] + x2[HNO3] → x3[Ca(NO3)2] + x4[H2O]...
- 6.4.12: Use mathematical induction to show that when n > 1, the evaluation ...
- 6.4.13: Let A be a 3 x 3 matrix. Show that if A is the matrix obtained from...
- 6.4.14: Prove that AB is nonsingular if and only if both A and B are nonsi...
- 6.4.15: The solution by Cramer's rule to the linear system a11x1 + a12x2 + ...
- 6.4.16: Generalize Cramer's rule to an n x n linear system. Use the result ...
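Exercises 6.4.15 and 6.4.16 build toward Cramer's rule for an n x n system, evaluated with the determinant of Definition 6.15. Assuming that definition is the usual cofactor expansion along the first row, a minimal plain-Python sketch (the exact-arithmetic use of `Fraction` and the small 2 x 2 example are illustrative choices, not from the text):

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, with alternating sign (-1)^j.
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    d = det(A)
    n = len(A)
    xs = []
    for i in range(n):
        Ai = [row[:i] + [b[k]] + row[i+1:] for k, row in enumerate(A)]
        xs.append(Fraction(det(Ai), d))
    return xs

print(cramer([[2, 1], [1, 3]], [5, 10]))  # x = (1, 3)
```

The recursive expansion illustrates the point of Exercise 6.4.12: evaluating an n x n determinant this way costs on the order of n! multiplications, which is why elimination-based methods are preferred in practice.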
Solutions for Chapter 6.4: The Determinant of a Matrix
Full solutions for Numerical Analysis | 10th Edition
Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Upper triangular systems are solved in reverse order x_n to x_1.
Characteristic equation det(A − λI) = 0.
The n roots are the eigenvalues of A.
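For a 2 x 2 matrix the characteristic equation is the quadratic λ² − (trace A)λ + det A = 0, so the eigenvalues come straight from the quadratic formula. A small sketch (assuming real eigenvalues; the example matrix is illustrative):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from det(A - lambda*I) = 0,
    i.e. lambda^2 - (a + d)*lambda + (a*d - b*c) = 0."""
    tr, det_ = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det_)  # assumes real roots
    return (tr + disc) / 2, (tr - disc) / 2

print(eig2(2, 1, 1, 2))  # (3.0, 1.0) for the symmetric matrix [[2,1],[1,2]]
```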
Cholesky factorization
A = CTC = (L√D)(L√D)T for positive definite A.
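A minimal sketch of the factorization for a symmetric positive definite matrix, written as A = L·LT with L lower triangular (in the glossary's notation, C = LT gives A = CTC); the 2 x 2 example is illustrative:

```python
import math

def cholesky(A):
    """Factor symmetric positive definite A as A = L * L^T, L lower triangular."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

print(cholesky([[4.0, 2.0], [2.0, 5.0]]))  # [[2.0, 0.0], [1.0, 2.0]]
```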
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)xTAx − xTb over growing Krylov subspaces.
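The steps above can be sketched in plain Python for a small SPD system (matrix-vector products and dot products written out for clarity; the test matrix is an illustrative choice):

```python
def cg(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradient for SPD Ax = b, i.e. minimizing (1/2)x^T A x - x^T b."""
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]          # residual b - Ax with x = 0
    p = r[:]          # first search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        # New direction is A-conjugate to the previous ones.
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

print(cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]))  # approx [1/11, 7/11]
```

In exact arithmetic CG terminates in at most n steps, one per dimension of the growing Krylov subspace.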
Dimension of vector space
dim(V) = number of vectors in any basis for V.
Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra −ℓij in the i, j entry (i ≠ j). Then Eij A subtracts ℓij times row j of A from row i.
Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
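A minimal sketch of elimination to A = LU without row exchanges (assumes nonzero pivots; the 2 x 2 example is illustrative):

```python
def lu(A):
    """Elimination without row exchanges: A = L * U,
    with the multipliers l_ij stored below the diagonal of L."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            L[i][j] = U[i][j] / U[j][j]           # multiplier l_ij
            for k in range(j, n):
                U[i][k] -= L[i][j] * U[j][k]      # row_i -= l_ij * row_j
    return L, U

L, U = lu([[2.0, 1.0], [6.0, 8.0]])
print(L)  # [[1.0, 0.0], [3.0, 1.0]]
print(U)  # [[2.0, 1.0], [0.0, 5.0]]
```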
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries −1 and 1 in columns i and j.
Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and −).
Minimal polynomial of A.
The lowest degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; always m(λ) divides p(λ).
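By the Cayley-Hamilton theorem, substituting A into its own characteristic polynomial gives the zero matrix; for a 2 x 2 matrix that is p(A) = A² − (trace A)A + (det A)I = 0, and when the eigenvalues are distinct this p is also the minimal polynomial. A quick check (the example matrix is illustrative, with distinct eigenvalues 1 and 3):

```python
def charpoly_at_A(A):
    """Evaluate p(A) = A^2 - (trace A)*A + (det A)*I for a 2x2 matrix A."""
    (a, b), (c, d) = A
    tr, det_ = a + d, a * d - b * c
    A2 = [[a*a + b*c, a*b + b*d], [c*a + d*c, c*b + d*d]]  # A squared
    I = [[1, 0], [0, 1]]
    return [[A2[i][j] - tr * A[i][j] + det_ * I[i][j] for j in range(2)]
            for i in range(2)]

print(charpoly_at_A([[2, 1], [1, 2]]))  # [[0, 0], [0, 0]]
```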
Multiplication Ax
= x1(column 1) + ... + xn(column n) = combination of columns.
Network.
A directed graph that has constants c1, ..., cm associated with the edges.
Nullspace N (A)
= All solutions to Ax = 0. Dimension n − r = (# columns) − rank.
Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.
Projection p = a(aTb/aTa) onto the line through a.
P = aaT/aTa has rank 1.
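The projection formula p = a(aTb/aTa) is a one-liner in code; a minimal sketch with an illustrative pair of vectors:

```python
def project_onto_line(a, b):
    """Projection p = a * (a^T b / a^T a) of b onto the line through a."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    scale = dot(a, b) / dot(a, a)   # scalar a^T b / a^T a
    return [scale * ai for ai in a]

print(project_onto_line([1.0, 1.0], [3.0, 1.0]))  # [2.0, 2.0]
```

The error b − p is orthogonal to a, which is what makes p the closest point on the line.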
Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
Triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖.
For matrix norms ‖A + B‖ ≤ ‖A‖ + ‖B‖.
Vector addition.
v + w = (v1 + w1, ..., vn + wn) = diagonal of parallelogram.
Vector v in Rn.
Sequence of n real numbers v = (v1, ..., vn) = point in Rn.