Solutions for Chapter 7.5: Defining the Absolute-Value Function

Full solutions for Discovering Algebra: An Investigative Approach | 2nd Edition | ISBN: 9781559537636 | Authors: Jerald Murdock, Ellen Kamischke, Eric Kamischke

This textbook survival guide was created for the textbook Discovering Algebra: An Investigative Approach, 2nd edition. Chapter 7.5: Defining the Absolute-Value Function includes 16 full step-by-step solutions. Since all 16 problems in this chapter have been answered, more than 9807 students have viewed full step-by-step solutions from it. Discovering Algebra: An Investigative Approach was written by Jerald Murdock, Ellen Kamischke, and Eric Kamischke and is associated with ISBN 9781559537636. This expansive textbook survival guide covers this and the book's other chapters and their solutions.

Key math terms and definitions covered in this textbook
  • Big formula for n by n determinants.

    det(A) is a sum of n! terms, one for each permutation P of the columns. Each term multiplies one entry from each row and column of A: rows in order 1, ..., n and columns in the order given by P. Each of the n! P's carries a + or − sign.
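
    As a quick illustrative sketch (not from the textbook), the big formula can be coded directly in Python with itertools.permutations; it costs n! terms, so it is only practical for small n:

      import itertools
      import math
      import numpy as np

      def big_formula_det(A):
          """Determinant as the sum of n! signed permutation terms."""
          n = len(A)
          total = 0.0
          for perm in itertools.permutations(range(n)):
              # sign of P = (-1)^(number of inversions)
              inversions = sum(perm[i] > perm[j]
                               for i in range(n) for j in range(i + 1, n))
              sign = -1.0 if inversions % 2 else 1.0
              # one entry from each row i and column perm[i]
              total += sign * math.prod(A[i][perm[i]] for i in range(n))
          return total

      A = [[1, 2], [3, 4]]
      print(big_formula_det(A))          # -2.0
      print(np.linalg.det(np.array(A)))  # matches, up to rounding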

  • Block matrix.

    A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
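
    A small NumPy sketch (our own example, not from the text): np.block assembles a matrix from blocks, and multiplying block-by-block agrees with ordinary multiplication when the shapes permit:

      import numpy as np

      A11, A12 = np.eye(2), np.zeros((2, 2))
      A21, A22 = np.ones((2, 2)), 2 * np.eye(2)
      B11, B12 = np.ones((2, 2)), np.eye(2)
      B21, B22 = np.eye(2), np.ones((2, 2))

      A = np.block([[A11, A12], [A21, A22]])
      B = np.block([[B11, B12], [B21, B22]])

      # top-left block of AB computed block-by-block:
      C11 = A11 @ B11 + A12 @ B21
      assert np.allclose((A @ B)[:2, :2], C11)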

  • Covariance matrix Σ.

    When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x − x̄)(x − x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.
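
    As an illustration (our own sketch, assuming NumPy), the sample covariance matrix of centered data matches this definition and is positive semidefinite:

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 3))      # 1000 samples of 3 variables
      Xc = X - X.mean(axis=0)             # subtract the means x̄_i
      Sigma = (Xc.T @ Xc) / len(X)        # mean of (x − x̄)(x − x̄)^T
      assert np.allclose(Sigma, np.cov(X.T, bias=True))
      # positive semidefinite: no eigenvalue below zero (up to rounding)
      assert np.linalg.eigvalsh(Sigma).min() > -1e-12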

  • Echelon matrix U.

    The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

  • Free columns of A.

    Columns without pivots; these are combinations of earlier columns.
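
    A short SymPy sketch (our own example) tying this to the echelon-form entry above: rref returns the reduced echelon form and the pivot column indices, and each free column is a combination of earlier pivot columns:

      from sympy import Matrix

      A = Matrix([[1, 2, 3],
                  [2, 4, 7]])
      R, pivot_cols = A.rref()
      print(pivot_cols)   # (0, 2): columns 0 and 2 hold pivots
      free_cols = [j for j in range(A.cols) if j not in pivot_cols]
      print(free_cols)    # [1]: column 1 = 2 * column 0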

  • Hilbert matrix hilb(n).

    Entries H_ij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but with extremely small λ_min and a large condition number: H is ill-conditioned.
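
    A quick check of the ill-conditioning (our own sketch; the printed values are approximate): scipy.linalg.hilbert builds the matrix, and the condition number blows up even for n = 8:

      import numpy as np
      from scipy.linalg import hilbert

      H = hilbert(8)                      # H[i, j] = 1/(i + j + 1), zero-based
      print(np.linalg.eigvalsh(H).min())  # λ_min ≈ 1e-10: tiny
      print(np.linalg.cond(H))            # ≈ 1.5e10: ill-conditioned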

  • Hypercube matrix P².

    Row n + 1 counts corners, edges, faces, ... of a cube in R^n.

  • Jordan form J = M⁻¹AM.

    If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k, where N_k has 1's on diagonal 1. Each block has one eigenvalue λ_k and one eigenvector.
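
    SymPy can compute the Jordan form directly; a minimal sketch (our own example) with a repeated eigenvalue and only one eigenvector:

      from sympy import Matrix

      # eigenvalue 2 repeated, only one eigenvector -> one 2x2 Jordan block
      A = Matrix([[3, 1],
                  [-1, 1]])
      M, J = A.jordan_form()        # A = M * J * M**(-1)
      print(J)                      # Matrix([[2, 1], [0, 2]])
      assert A == M * J * M.inv()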

  • Kirchhoff's Laws.

    Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

  • Kronecker product (tensor product) A ⊗ B.

    Blocks a_ij B, eigenvalues λ_p(A)λ_q(B).
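
    A NumPy check of the eigenvalue rule (our own sketch, using symmetric A and B so eigvalsh applies):

      import numpy as np

      A = np.array([[2.0, 1.0], [1.0, 2.0]])
      B = np.array([[3.0, 0.0], [0.0, 5.0]])
      K = np.kron(A, B)                    # 4x4 matrix of blocks a_ij * B
      prods = sorted(p * q for p in np.linalg.eigvalsh(A)
                           for q in np.linalg.eigvalsh(B))
      assert np.allclose(np.linalg.eigvalsh(K), prods)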

  • Least squares solution x̂.

    The vector x̂ that minimizes the error ||e||² solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
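
    A minimal NumPy sketch (our own example): the least squares solution from lstsq matches the normal equations, and the error is orthogonal to the columns of A:

      import numpy as np

      A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
      b = np.array([6.0, 0.0, 0.0])
      x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
      assert np.allclose(x_hat, np.linalg.solve(A.T @ A, A.T @ b))
      e = b - A @ x_hat
      assert np.allclose(A.T @ e, 0)   # e is orthogonal to every column of A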

  • Minimal polynomial of A.

    The lowest degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; always m(λ) divides p(λ).
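
    An illustration (our own sketch): a diagonalizable matrix with a repeated eigenvalue has a minimal polynomial of lower degree than its characteristic polynomial:

      import numpy as np

      # eigenvalues 1, 1, 2: characteristic polynomial (λ−1)²(λ−2),
      # but m(λ) = (λ−1)(λ−2) already sends A to the zero matrix
      A = np.diag([1.0, 1.0, 2.0])
      I = np.eye(3)
      assert np.allclose((A - I) @ (A - 2 * I), 0)
      # a Jordan block for eigenvalue 1 needs the full (λ−1)² factor:
      J = np.array([[1.0, 1.0], [0.0, 1.0]])
      assert not np.allclose(J - np.eye(2), 0)
      assert np.allclose((J - np.eye(2)) @ (J - np.eye(2)), 0)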

  • Polar decomposition A = Q H.

    Orthogonal Q times positive (semi)definite H.
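
    SciPy computes this factorization directly; a small sketch (our own example):

      import numpy as np
      from scipy.linalg import polar

      A = np.array([[2.0, 1.0], [0.0, 3.0]])
      Q, H = polar(A)                            # A = Q H
      assert np.allclose(A, Q @ H)
      assert np.allclose(Q.T @ Q, np.eye(2))     # Q is orthogonal
      assert np.linalg.eigvalsh(H).min() >= 0    # H is positive (semi)definite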

  • Projection matrix P onto subspace S.

    Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P² = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S, then P = A(A^T A)⁻¹A^T.
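
    A NumPy check (our own sketch) of the projection formula and its properties:

      import numpy as np

      A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # columns span S
      P = A @ np.linalg.inv(A.T @ A) @ A.T
      assert np.allclose(P @ P, P)            # P² = P
      assert np.allclose(P, P.T)              # P = P^T
      b = np.array([6.0, 0.0, 0.0])
      p = P @ b                               # closest point to b in S
      assert np.allclose(A.T @ (b - p), 0)    # error is perpendicular to S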

  • Simplex method for linear programming.

    The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
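
    For a concrete (hypothetical) linear program, scipy.optimize.linprog finds the minimum, and the optimum does land at a corner of the feasible set; SciPy's default HiGHS solver (which includes a simplex implementation) is used here only to illustrate the idea:

      import numpy as np
      from scipy.optimize import linprog

      # minimize c @ x  subject to  A_eq @ x = b_eq  and  x >= 0
      c = np.array([1.0, 2.0, 0.0])
      A_eq = np.array([[1.0, 1.0, 1.0]])
      b_eq = np.array([4.0])
      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
      print(res.x)      # [0. 0. 4.]: the optimum sits at a corner
      print(res.fun)    # 0.0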

  • Skew-symmetric matrix K.

    The transpose is −K, since K_ij = −K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.
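
    A NumPy/SciPy check (our own sketch) of all three properties:

      import numpy as np
      from scipy.linalg import expm

      K = np.array([[0.0, 2.0], [-2.0, 0.0]])
      assert np.allclose(K.T, -K)              # skew-symmetric
      print(np.linalg.eigvals(K))              # [0.+2.j 0.-2.j]: pure imaginary
      Q = expm(0.5 * K)                        # e^(Kt) with t = 0.5
      assert np.allclose(Q.T @ Q, np.eye(2))   # orthogonal (a rotation)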

  • Stiffness matrix K.

    If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A, where C has spring constants from Hooke's Law and Ax = stretching.
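
    A minimal two-spring sketch (our own hypothetical numbers): A maps node movements to stretches, C holds the spring constants, and K = A^T C A:

      import numpy as np

      A = np.array([[1.0, 0.0],      # stretch of spring 1 = x1
                    [-1.0, 1.0]])    # stretch of spring 2 = x2 − x1
      C = np.diag([100.0, 50.0])     # Hooke's-law spring constants
      K = A.T @ C @ A                # stiffness matrix
      x = np.array([0.1, 0.3])       # node movements
      print(K @ x)                   # internal forces at the two nodes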

  • Triangle inequality ||u + v|| ≤ ||u|| + ||v||.

    For matrix norms, ||A + B|| ≤ ||A|| + ||B||.

  • Unitary matrix U^H = Ū^T = U⁻¹.

    Orthonormal columns (complex analog of Q).
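
    A quick NumPy check (our own example in C²):

      import numpy as np

      U = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # orthonormal columns
      UH = U.conj().T                                 # U^H, conjugate transpose
      assert np.allclose(UH @ U, np.eye(2))           # so U^H = U⁻¹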

  • Vector space V.

    Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.