
Solutions for Chapter 3: Linear and Quadratic Functions


Full solutions for Precalculus Enhanced with Graphing Utilities | 6th Edition

ISBN: 9780132854351

Textbook: Precalculus Enhanced with Graphing Utilities
Edition: 6
Author: Michael Sullivan
ISBN: 9780132854351

All 29 problems in Chapter 3: Linear and Quadratic Functions have full step-by-step solutions, and more than 58,928 students have viewed solutions from this chapter. This textbook survival guide was created for Precalculus Enhanced with Graphing Utilities, 6th edition, written by Michael Sullivan and associated with ISBN 9780132854351; it covers this chapter along with the textbook's other chapters and their solutions.

Key Math Terms and definitions covered in this textbook
  • Adjacency matrix of a graph.

    Square matrix with aij = 1 when there is an edge from node i to node j; otherwise aij = 0. A = A^T when edges go both ways (undirected).
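
    A minimal sketch in NumPy (the three-node edge list here is made up for illustration):

        import numpy as np

        edges = [(0, 1), (1, 2), (2, 0)]     # hypothetical directed edges i -> j
        n = 3
        A = np.zeros((n, n), dtype=int)
        for i, j in edges:
            A[i, j] = 1                      # aij = 1 when there is an edge i -> j

        print(A)
        print("undirected:", np.array_equal(A, A.T))  # A = A^T iff edges go both ways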

  • Associative Law (AB)C = A(BC).

    Parentheses can be removed to leave ABC.

  • Augmented matrix [A b].

    Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
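
    A quick rank check in NumPy (the rank-1 system is made up to show both outcomes):

        import numpy as np

        A = np.array([[1., 2.], [2., 4.]])   # rank 1: second row is twice the first
        b = np.array([3., 6.])               # b is in the column space of A
        Ab = np.column_stack([A, b])         # augmented matrix [A b]

        # Ax = b is solvable exactly when [A b] has the same rank as A
        print(np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A))  # True
        # replacing b with [3., 7.] raises rank([A b]) to 2, so no solution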

  • Cayley-Hamilton Theorem.

    p(λ) = det(A - λI) has p(A) = zero matrix.
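
    A numerical check of the theorem (a sketch using NumPy's np.poly for the characteristic polynomial coefficients; the 2x2 matrix is arbitrary):

        import numpy as np

        A = np.array([[2., 1.], [0., 3.]])
        coeffs = np.poly(A)                  # coefficients of det(λI - A), highest power first

        # evaluate p at the matrix A itself: sum of coeffs[k] * A^(n-k)
        n = A.shape[0]
        pA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
        print(np.allclose(pA, np.zeros((n, n))))  # True: p(A) is the zero matrix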

  • Circulant matrix C.

    Constant diagonals wrap around as in the cyclic shift S. Every circulant is c0 I + c1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors are in the Fourier matrix F.
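
    A small check with SciPy (the vectors are arbitrary; the FFT computes the circular convolution):

        import numpy as np
        from scipy.linalg import circulant

        c = np.array([1., 2., 3., 4.])       # first column; diagonals wrap around
        x = np.array([1., 0., 2., 0.])
        C = circulant(c)

        # Cx equals the circular convolution c * x
        conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
        print(np.allclose(C @ x, conv))      # True; the eigenvectors are Fourier vectors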

  • Conjugate Gradient Method.

    A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2) x^T A x - x^T b over growing Krylov subspaces.
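
    A minimal unpreconditioned CG sketch (not this book's code; the small positive definite system is made up for the check):

        import numpy as np

        def conjugate_gradient(A, b, steps):
            x = np.zeros_like(b)
            r = b - A @ x                    # residual lies in the Krylov subspace
            p = r.copy()                     # search direction
            for _ in range(steps):
                alpha = (r @ r) / (p @ A @ p)     # exact line search along p
                x = x + alpha * p
                r_new = r - alpha * (A @ p)
                beta = (r_new @ r_new) / (r @ r)  # makes p A-conjugate to the old p
                p = r_new + beta * p
                r = r_new
            return x

        A = np.array([[4., 1.], [1., 3.]])
        b = np.array([1., 2.])
        print(np.allclose(conjugate_gradient(A, b, 2), np.linalg.solve(A, b)))  # True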

  • Elimination matrix = Elementary matrix Eij.

    The identity matrix with an extra -lij in the i, j entry (i ≠ j). Then Eij A subtracts lij times row j of A from row i.
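
    A concrete Eij in NumPy (the 3x3 matrix and the multiplier are made up):

        import numpy as np

        n, i, j, lij = 3, 2, 0, 4.0          # subtract 4 times row 0 from row 2
        E = np.eye(n)
        E[i, j] = -lij                       # identity with an extra -lij in entry (i, j)

        A = np.array([[1., 2., 3.],
                      [4., 5., 6.],
                      [4., 8., 13.]])
        print(E @ A)                         # row 2 becomes [0, 0, 1]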

  • Elimination.

    A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers lij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
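
    SciPy's lu shows the row-exchange case (it returns P, L, U with A = P L U, so P^T A = L U):

        import numpy as np
        from scipy.linalg import lu

        A = np.array([[0., 2.], [3., 1.]])   # zero pivot forces a row exchange
        P, L, U = lu(A)
        print(np.allclose(P.T @ A, L @ U))   # True: the PA = LU form
        print(U)                             # upper triangular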

  • Exponential e^(At) = I + At + (At)^2/2! + ...

    has derivative A e^(At); e^(At) u(0) solves u' = Au.
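
    A sketch with SciPy's expm (the rotation generator A is chosen so the exact solution is known):

        import numpy as np
        from scipy.linalg import expm

        A = np.array([[0., -1.], [1., 0.]])  # u' = Au rotates the plane
        u0 = np.array([1., 0.])
        t = 0.5

        u = expm(A * t) @ u0                 # e^(At) u(0) solves u' = Au
        print(np.allclose(u, [np.cos(t), np.sin(t)]))  # True for this A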

  • Factorization

    A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers lij (and lii = 1) brings U back to A.
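
    A minimal LU sketch assuming no row exchanges are needed (the 2x2 example is arbitrary):

        import numpy as np

        def lu_no_pivot(A):
            n = A.shape[0]
            L, U = np.eye(n), A.astype(float).copy()
            for j in range(n - 1):
                for i in range(j + 1, n):
                    L[i, j] = U[i, j] / U[j, j]  # multiplier lij (and lii = 1)
                    U[i] -= L[i, j] * U[j]       # eliminate below the pivot
            return L, U

        A = np.array([[2., 1.], [6., 8.]])
        L, U = lu_no_pivot(A)
        print(np.allclose(L @ U, A))         # True: L brings U back to A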

  • Inverse matrix A^-1.

    Square matrix with A^-1 A = I and A A^-1 = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)ij = Cji / det A.
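
    The product and transpose identities are easy to verify numerically (arbitrary invertible matrices):

        import numpy as np

        A = np.array([[4., 7.], [2., 6.]])
        B = np.array([[1., 2.], [3., 5.]])
        Ainv = np.linalg.inv(A)

        print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv))  # (AB)^-1 = B^-1 A^-1
        print(np.allclose(np.linalg.inv(A.T), Ainv.T))                     # (A^T)^-1 = (A^-1)^T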

  • Jordan form J = M^-1 A M.

    If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J1, ..., Js). The block Jk is λk Ik + Nk, where Nk has 1's on diagonal 1 (just above the main diagonal). Each block has one eigenvalue λk and one eigenvector.
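
    SymPy computes the Jordan form exactly (its jordan_form returns M and J with A = M J M^-1; the example matrix is already a single 2x2 block):

        from sympy import Matrix

        A = Matrix([[5, 1], [0, 5]])         # one eigenvalue 5, only one eigenvector
        M, J = A.jordan_form()
        print(J)                             # Matrix([[5, 1], [0, 5]]): one block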

  • Krylov subspace Kj(A, b).

    The subspace spanned by b, Ab, ..., A^(j-1) b. Numerical methods approximate A^-1 b by xj with residual b - A xj in this subspace. A good basis for Kj requires only multiplication by A at each step.
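
    A sketch that builds and orthonormalizes a Krylov basis (the matrix and b are arbitrary):

        import numpy as np

        def krylov_basis(A, b, j):
            K = [b]
            for _ in range(j - 1):
                K.append(A @ K[-1])          # b, Ab, ..., A^(j-1) b: one multiply per step
            Q, _ = np.linalg.qr(np.column_stack(K))  # orthonormalize for a good basis
            return Q

        A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
        b = np.array([1., 0., 0.])
        print(krylov_basis(A, b, 2).shape)   # (3, 2): an orthonormal basis for K_2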

  • Orthogonal subspaces.

    Every v in V is orthogonal to every w in W.
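
    With basis vectors as columns, the condition reduces to one matrix product (the example bases are made up):

        import numpy as np

        V = np.array([[1., 0.], [0., 1.], [0., 0.]])  # spans the xy-plane in R^3
        W = np.array([[0.], [0.], [1.]])              # spans the z-axis

        # orthogonal subspaces: every column of V is orthogonal to every column of W
        print(np.allclose(V.T @ W, 0))       # True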

  • Particular solution xp.

    Any solution to Ax = b; often xp has free variables = 0.
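
    A tiny example (one equation in two unknowns; x2 is free and set to 0):

        import numpy as np

        A = np.array([[1., 2.]])
        b = np.array([3.])

        xp = np.array([3., 0.])              # free variable x2 = 0, then x1 = 3
        print(np.allclose(A @ xp, b))        # True: xp is one particular solution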

  • Projection p = a (a^T b / a^T a) onto the line through a.

    P = a a^T / a^T a has rank 1.
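
    Both formulas in NumPy (the vectors a and b are arbitrary):

        import numpy as np

        a = np.array([1., 2., 2.])
        b = np.array([3., 0., 3.])

        p = a * (a @ b) / (a @ a)            # projection of b onto the line through a
        P = np.outer(a, a) / (a @ a)         # projection matrix a a^T / a^T a
        print(np.allclose(P @ b, p))         # True
        print(np.linalg.matrix_rank(P))      # 1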

  • Row picture of Ax = b.

    Each equation gives a plane in R^n; the planes intersect at x.

  • Saddle point of f(x1, ..., xn).

    A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂xi∂xj = Hessian matrix) is indefinite.
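
    An indefiniteness test on the Hessian (for f(x, y) = x^2 - y^2, whose Hessian at the origin is known):

        import numpy as np

        H = np.array([[2., 0.], [0., -2.]])  # Hessian of x^2 - y^2 at (0, 0)
        eigs = np.linalg.eigvalsh(H)

        # mixed-sign eigenvalues mean indefinite: a saddle, not a min or max
        print(eigs.min() < 0 < eigs.max())   # True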

  • Transpose matrix A^T.

    Entries (A^T)ij = Aji. A^T is n by m, A^T A is square, symmetric, and positive semidefinite. The transposes of AB and A^-1 are B^T A^T and (A^T)^-1.
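
    Checking the shape and product rules in NumPy (random matrices suffice):

        import numpy as np

        A = np.random.rand(2, 3)             # A is m by n, so A.T is n by m
        B = np.random.rand(3, 2)

        print(np.allclose((A @ B).T, B.T @ A.T))        # (AB)^T = B^T A^T
        G = A.T @ A                                     # square and symmetric
        print(np.all(np.linalg.eigvalsh(G) >= -1e-12))  # positive semidefinite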

  • Vector addition.

    v + w = (v1 + w1, ..., vn + wn) = diagonal of the parallelogram.
