Solutions for Chapter 3.1: DETERMINANTS

Full solutions for Elementary Linear Algebra: A Matrix Approach | 2nd Edition

Textbook: Elementary Linear Algebra: A Matrix Approach
Edition: 2
Author: Lawrence E. Spence
ISBN: 9780131871410

This textbook survival guide was created for Elementary Linear Algebra: A Matrix Approach, 2nd edition, written by Lawrence E. Spence and associated with ISBN 9780131871410. The guide covers the textbook's chapters and their solutions. Chapter 3.1: DETERMINANTS includes 84 full step-by-step solutions; since all 84 problems in the chapter have been answered, more than 34993 students have viewed its full step-by-step solutions.

Key Math Terms and definitions covered in this textbook
  • Augmented matrix [A b].

    Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
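
    As a quick illustration (not part of the textbook's solutions), a NumPy sketch of this rank test on a small matrix chosen only as an example:

      import numpy as np

      A = np.array([[1., 2.], [2., 4.]])      # rank 1
      b_in = np.array([[3.], [6.]])           # lies in the column space of A
      b_out = np.array([[3.], [5.]])          # does not
      for b in (b_in, b_out):
          same_rank = np.linalg.matrix_rank(np.hstack([A, b])) == np.linalg.matrix_rank(A)
          print("solvable" if same_rank else "inconsistent")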

  • Cayley-Hamilton Theorem.

    p(λ) = det(A - λI) has p(A) = zero matrix.
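
    As an illustration (not from the textbook), a small NumPy check that p(A) is the zero matrix for one example A, using np.poly for the characteristic-polynomial coefficients:

      import numpy as np

      A = np.array([[2., 1.], [1., 3.]])
      coeffs = np.poly(A)                    # coefficients of det(xI - A), highest power first
      p_of_A = sum(c * np.linalg.matrix_power(A, k)
                   for k, c in enumerate(reversed(coeffs)))
      print(np.allclose(p_of_A, 0))          # True: p(A) = zero matrix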

  • Cholesky factorization

    A = CC^T = (L√D)(L√D)^T for positive definite A.
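
    A minimal NumPy illustration (the matrix is only an example); np.linalg.cholesky returns the lower-triangular factor L with A = LL^T:

      import numpy as np

      A = np.array([[4., 2.], [2., 3.]])     # symmetric positive definite
      L = np.linalg.cholesky(A)              # lower triangular
      print(np.allclose(L @ L.T, A))         # True: A = L L^T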

  • Cofactor C_ij.

    Remove row i and column j; multiply the determinant by (-1)^(i+j).
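
    Since this chapter is about determinants, here is a short NumPy sketch (illustrative only, not the textbook's method) that expands det(A) along the first row using these cofactors:

      import numpy as np

      def det_by_cofactors(A):
          """Expand the determinant along row 0 (recursive; fine for small matrices)."""
          n = A.shape[0]
          if n == 1:
              return A[0, 0]
          total = 0.0
          for j in range(n):
              minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # remove row 0 and column j
              total += (-1) ** j * A[0, j] * det_by_cofactors(minor)  # entry times cofactor C_0j
          return total

      A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 10.]])
      print(det_by_cofactors(A), np.linalg.det(A))   # both -3.0 (up to roundoff)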

  • Column picture of Ax = b.

    The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).

  • Complete solution x = x_p + x_n to Ax = b.

    (Particular solution x_p) + (x_n in the nullspace).
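
    As a sketch (example system chosen here, not from the textbook): np.linalg.lstsq supplies one particular solution, and adding any nullspace vector still solves Ax = b:

      import numpy as np

      A = np.array([[1., 2.], [2., 4.]])               # rank 1
      b = np.array([3., 6.])                           # lies in the column space
      x_p = np.linalg.lstsq(A, b, rcond=None)[0]       # one particular solution
      x_n = np.array([-2., 1.])                        # spans the nullspace of A
      for t in (0.0, 1.0, -3.5):
          print(np.allclose(A @ (x_p + t * x_n), b))   # True for every t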

  • Condition number

    cond(A) = c(A) = ||A|| ||A^-1|| = σ_max/σ_min. In Ax = b, the relative change ||δx|| / ||x|| is less than cond(A) times the relative change ||δb|| / ||b||. Condition numbers measure the sensitivity of the output to changes in the input.
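
    A quick NumPy check (the matrix is only an example); np.linalg.cond uses the 2-norm by default, so it equals σ_max / σ_min:

      import numpy as np

      A = np.array([[1., 1.], [1., 1.0001]])   # nearly singular, so badly conditioned
      print(np.linalg.cond(A))                 # roughly 4e4
      s = np.linalg.svd(A, compute_uv=False)
      print(s[0] / s[-1])                      # the same number: sigma_max / sigma_min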

  • Conjugate Gradient Method.

    A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T Ax - x^T b over growing Krylov subspaces.
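
    A bare-bones conjugate gradient loop in NumPy (a sketch of the standard algorithm, not the textbook's code), applied to a small positive definite system:

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
          x = np.zeros_like(b)
          r = b - A @ x                  # residual
          p = r.copy()                   # search direction
          for _ in range(max_iter):
              if np.linalg.norm(r) < tol:
                  break
              alpha = (r @ r) / (p @ A @ p)
              x = x + alpha * p
              r_new = r - alpha * (A @ p)
              beta = (r_new @ r_new) / (r @ r)
              p = r_new + beta * p
              r = r_new
          return x

      A = np.array([[4., 1.], [1., 3.]])
      b = np.array([1., 2.])
      print(conjugate_gradient(A, b), np.linalg.solve(A, b))   # both about [0.0909, 0.6364]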

  • Dimension of vector space

    dim(V) = number of vectors in any basis for V.

  • Eigenvalue λ and eigenvector x.

    Ax = λx with x ≠ 0, so det(A - λI) = 0.
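
    A minimal NumPy check (example matrix only) that each computed eigenpair satisfies Ax = λx:

      import numpy as np

      A = np.array([[2., 1.], [1., 2.]])
      eigenvalues, eigenvectors = np.linalg.eig(A)       # columns of eigenvectors are the x's
      for lam, x in zip(eigenvalues, eigenvectors.T):
          print(lam, np.allclose(A @ x, lam * x))        # 3.0 True, then 1.0 True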

  • Hilbert matrix hilb(n).

    Entries H_ij = 1/(i + j - 1) = ∫_0^1 x^(i-1) x^(j-1) dx. Positive definite but with extremely small λ_min and large condition number: H is ill-conditioned.
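
    A short NumPy construction straight from the entry formula (a sketch, not the textbook's code), showing how fast the condition number grows:

      import numpy as np

      def hilb(n):
          i, j = np.indices((n, n)) + 1      # 1-based row and column indices
          return 1.0 / (i + j - 1)

      for n in (3, 6, 9):
          print(n, np.linalg.cond(hilb(n)))  # the condition number explodes as n grows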

  • Kirchhoff's Laws.

    Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

  • Kronecker product (tensor product) A ⊗ B.

    Blocks a_ij B, eigenvalues λ_p(A)λ_q(B).
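
    A quick NumPy check (illustrative matrices) that the eigenvalues of A ⊗ B are the products λ_p(A)λ_q(B):

      import numpy as np

      A = np.array([[2., 0.], [0., 3.]])
      B = np.array([[1., 1.], [0., 4.]])
      K = np.kron(A, B)                      # 4x4 block matrix with blocks a_ij * B
      products = np.outer(np.linalg.eigvals(A), np.linalg.eigvals(B)).ravel()
      print(np.allclose(np.sort(np.linalg.eigvals(K)), np.sort(products)))   # True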

  • Matrix multiplication AB.

    The (i, j) entry of AB is (row i of A)·(column j of B) = Σ_k a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
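
    A NumPy sketch (illustrative matrices) confirming that the entry, column, and column-times-row descriptions give the same product:

      import numpy as np

      A = np.array([[1., 2.], [3., 4.]])
      B = np.array([[5., 6.], [7., 8.]])

      entrywise = np.array([[A[i, :] @ B[:, j] for j in range(2)] for i in range(2)])
      by_columns = np.column_stack([A @ B[:, j] for j in range(2)])
      outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(2))

      print(np.allclose(entrywise, A @ B),
            np.allclose(by_columns, A @ B),
            np.allclose(outer_sum, A @ B))   # True True True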

  • Pivot.

    The diagonal entry (first nonzero) at the time when a row is used in elimination.

  • Pseudoinverse A^+ (Moore-Penrose inverse).

    The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+A and AA^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).
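
    A brief NumPy check (the matrix is only an example) that A^+A and AA^+ behave as the projections described here:

      import numpy as np

      A = np.array([[1., 0.], [0., 0.], [0., 0.]])   # 3 by 2, rank 1
      A_plus = np.linalg.pinv(A)                     # 2 by 3 Moore-Penrose inverse

      P_row = A_plus @ A        # projection onto the row space of A
      P_col = A @ A_plus        # projection onto the column space of A
      print(np.allclose(P_row @ P_row, P_row), np.allclose(P_col @ P_col, P_col))   # True True
      print(np.linalg.matrix_rank(A_plus) == np.linalg.matrix_rank(A))              # True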

  • Reduced row echelon form R = rref(A).

    Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
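
    NumPy has no rref routine, so a minimal SymPy illustration (example matrix chosen here):

      from sympy import Matrix

      A = Matrix([[1, 2, 1], [2, 4, 0], [3, 6, 1]])
      R, pivot_columns = A.rref()    # R has pivots equal to 1 with zeros above and below
      print(R)                       # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
      print(pivot_columns)           # (0, 2)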

  • Rotation matrix

    R = [c -s; s c] rotates the plane by θ, and R^-1 = R^T rotates back by -θ. Eigenvalues are e^(iθ) and e^(-iθ); eigenvectors are (1, ±i). c, s = cos θ, sin θ.
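
    A quick NumPy check (θ chosen only for illustration) that R^-1 = R^T and that the eigenvalues are e^(±iθ):

      import numpy as np

      theta = 0.3
      c, s = np.cos(theta), np.sin(theta)
      R = np.array([[c, -s], [s, c]])

      print(np.allclose(np.linalg.inv(R), R.T))    # True: R^-1 = R^T
      eigs = np.sort_complex(np.linalg.eigvals(R))
      expected = np.sort_complex(np.array([np.exp(-1j * theta), np.exp(1j * theta)]))
      print(np.allclose(eigs, expected))           # True: eigenvalues are e^(±i*theta)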

  • Saddle point of f(x_1, ..., x_n).

    A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂x_i∂x_j = Hessian matrix) is indefinite.
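
    A tiny NumPy illustration with the standard example f(x, y) = x^2 - y^2 (not from the textbook): its Hessian at the origin has eigenvalues of both signs, so it is indefinite:

      import numpy as np

      H = np.array([[2., 0.], [0., -2.]])   # Hessian of f(x, y) = x**2 - y**2 at (0, 0)
      print(np.linalg.eigvalsh(H))          # [-2.  2.]: mixed signs, so H is indefinite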

  • Subspace S of V.

    Any vector space inside V, including V and Z = {zero vector only}.