Solutions for Chapter 5.3: Exponential Functions

Full solutions for Precalculus Enhanced with Graphing Utilities | 6th Edition

Textbook: Precalculus Enhanced with Graphing Utilities
Edition: 6
Author: Michael Sullivan
ISBN: 9780132854351

Since the 129 problems in Chapter 5.3: Exponential Functions have been answered, more than 36,225 students have viewed full step-by-step solutions from this chapter. Chapter 5.3: Exponential Functions includes 129 full step-by-step solutions. This textbook survival guide was created for Precalculus Enhanced with Graphing Utilities, 6th edition (ISBN: 9780132854351), by Michael Sullivan, and covers the following chapters and their solutions.

Key Math Terms and definitions covered in this textbook
  • Cayley-Hamilton Theorem.

    p(λ) = det(A − λI) has p(A) = zero matrix.
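
    A minimal NumPy check of the theorem for a 2-by-2 matrix (a sketch; the matrix is an arbitrary example, not one from the text):

        import numpy as np

        A = np.array([[2.0, 1.0],
                      [1.0, 3.0]])                      # arbitrary 2×2 example
        # For a 2×2 matrix, p(λ) = λ² − tr(A)·λ + det(A).
        tr, det = np.trace(A), np.linalg.det(A)
        # Substituting A for λ: p(A) = A² − tr(A)·A + det(A)·I should be the zero matrix.
        p_of_A = A @ A - tr * A + det * np.eye(2)
        print(np.allclose(p_of_A, np.zeros((2, 2))))    # True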

  • Cofactor Cij.

    Remove row i and column j; multiply the determinant by (−1)^(i+j).

  • Companion matrix.

    Put c1, ..., cn in row n and put n − 1 ones just above the main diagonal. Then det(A − λI) = ±(c1 + c2λ + c3λ² + ... + cnλ^(n−1) − λ^n).
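
    A short NumPy sketch that builds this companion matrix and checks that its eigenvalues are the roots of the polynomial (the coefficients are an arbitrary example):

        import numpy as np

        c1, c2, c3 = 6.0, -11.0, 6.0                   # arbitrary coefficients
        # Ones just above the diagonal, c1, c2, c3 in the last row.
        A = np.array([[0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [c1,  c2,  c3]])
        # Eigenvalues of A are the roots of λ³ − c3λ² − c2λ − c1 = 0.
        print(np.sort(np.linalg.eigvals(A).real))      # [1. 2. 3.]
        print(np.sort(np.roots([1.0, -c3, -c2, -c1]).real))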

  • Diagonalization

    Λ = S⁻¹AS. Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = SΛ^kS⁻¹.
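
    A minimal NumPy illustration (the matrix is an arbitrary diagonalizable example):

        import numpy as np

        A = np.array([[4.0, 1.0],
                      [2.0, 3.0]])
        evals, S = np.linalg.eig(A)                    # eigenvectors are the columns of S
        Lam = np.diag(evals)                           # Λ = eigenvalue matrix
        S_inv = np.linalg.inv(S)
        print(np.allclose(A, S @ Lam @ S_inv))                                 # A = S Λ S⁻¹
        print(np.allclose(np.linalg.matrix_power(A, 3), S @ Lam**3 @ S_inv))   # A³ = S Λ³ S⁻¹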

  • Dimension of vector space

    dim(V) = number of vectors in any basis for V.

  • Fibonacci numbers

    0, 1, 1, 2, 3, 5, ... satisfy F_n = F_(n−1) + F_(n−2) = (λ1^n − λ2^n)/(λ1 − λ2). Growth rate λ1 = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [[1, 1], [1, 0]].
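
    A quick NumPy check that powers of the Fibonacci matrix generate the sequence (a sketch; the power 10 is an arbitrary choice):

        import numpy as np

        F = np.array([[1, 1],
                      [1, 0]])                         # Fibonacci matrix
        # F^n contains consecutive Fibonacci numbers: [[F_(n+1), F_n], [F_n, F_(n-1)]].
        print(np.linalg.matrix_power(F, 10))           # [[89 55] [55 34]] → F_10 = 55
        # Growth rate: the largest eigenvalue is (1 + √5)/2 ≈ 1.618.
        print(np.linalg.eigvals(F.astype(float)).max())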

  • Jordan form J = M⁻¹AM.

    If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J1, ..., Js). The block Jk is λkIk + Nk where Nk has 1's on diagonal 1 (the superdiagonal). Each block has one eigenvalue λk and one eigenvector.
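
    A small SymPy sketch (the matrix is an arbitrary defective example, not from the text):

        from sympy import Matrix

        # Eigenvalue 4 is repeated but has only one eigenvector, so a 2×2 Jordan block appears.
        A = Matrix([[5, -1],
                    [1,  3]])
        M, J = A.jordan_form()             # SymPy returns M and J with A = M J M⁻¹
        print(J)                           # Matrix([[4, 1], [0, 4]])
        print(M * J * M.inv() == A)        # True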

  • Krylov subspace Kj(A, b).

    The subspace spanned by b, Ab, ..., A^(j−1)b. Numerical methods approximate A⁻¹b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
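
    A rough NumPy sketch of the idea: build the Krylov basis with one multiplication by A per step, then pick the x_j in that subspace with the smallest residual (a GMRES-style least-squares step; the matrix and sizes are arbitrary choices):

        import numpy as np

        rng = np.random.default_rng(0)
        n, j = 50, 10
        A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # arbitrary well-conditioned matrix
        b = rng.standard_normal(n)

        K = np.empty((n, j))                # columns b, Ab, ..., A^(j-1) b
        K[:, 0] = b
        for k in range(1, j):
            K[:, k] = A @ K[:, k - 1]       # only a multiplication by A at each step

        y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)   # minimize ‖b − A(Ky)‖
        x_j = K @ y
        print(np.linalg.norm(b - A @ x_j))  # norm of the residual b − A x_j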

  • |A⁻¹| = 1/|A| and |Aᵀ| = |A|.

    The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n − 1, volume of box = |det(A)|.
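
    A quick NumPy confirmation of the two determinant identities (the matrix is an arbitrary invertible example):

        import numpy as np

        A = np.array([[2.0, 1.0, 0.0],
                      [1.0, 3.0, 1.0],
                      [0.0, 1.0, 2.0]])
        dA = np.linalg.det(A)
        print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / dA))   # |A⁻¹| = 1/|A|
        print(np.isclose(np.linalg.det(A.T), dA))                      # |Aᵀ| = |A|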

  • Linear transformation T.

    Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.

  • Lucas numbers

    L_n = 2, 1, 3, 4, 7, ... satisfy L_n = L_(n−1) + L_(n−2) = λ1^n + λ2^n, with λ1, λ2 = (1 ± √5)/2 from the Fibonacci matrix [[1, 1], [1, 0]]. Compare L_0 = 2 with F_0 = 0.

  • Multiplier ℓij.

    The pivot row j is multiplied by ℓij and subtracted from row i to eliminate the i, j entry: ℓij = (entry to eliminate) / (jth pivot).
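
    One elimination step in NumPy (the matrix is an arbitrary 2×2 example):

        import numpy as np

        A = np.array([[2.0, 1.0],
                      [6.0, 7.0]])
        l21 = A[1, 0] / A[0, 0]        # ℓ21 = (entry to eliminate) / (1st pivot) = 3
        A[1, :] -= l21 * A[0, :]       # subtract ℓ21 × (pivot row 1) from row 2
        print(A)                       # [[2. 1.] [0. 4.]] – the (2,1) entry is eliminated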

  • Orthogonal matrix Q.

    Square matrix with orthonormal columns, so Qᵀ = Q⁻¹. Preserves lengths and angles: ‖Qx‖ = ‖x‖ and (Qx)ᵀ(Qy) = xᵀy. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
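
    A small NumPy check using a rotation matrix (the angle and vector are arbitrary choices):

        import numpy as np

        theta = 0.7
        Q = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])        # rotation = orthogonal
        x = np.array([3.0, 4.0])
        print(np.allclose(Q.T @ Q, np.eye(2)))                        # Qᵀ = Q⁻¹
        print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))   # ‖Qx‖ = ‖x‖
        print(np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0))         # every |λ| = 1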

  • Orthonormal vectors q1, ..., qn.

    Dot products are qiᵀqj = 0 if i ≠ j and qiᵀqi = 1. The matrix Q with these orthonormal columns has QᵀQ = I. If m = n then Qᵀ = Q⁻¹ and q1, ..., qn is an orthonormal basis for Rⁿ: every v = Σ (vᵀqj)qj.

  • Pseudoinverse A+ (Moore-Penrose inverse).

    The n by m matrix that "inverts" A from column space back to row space, with N(A⁺) = N(Aᵀ). A⁺A and AA⁺ are the projection matrices onto the row space and column space. Rank(A⁺) = rank(A).
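
    A brief NumPy illustration using np.linalg.pinv (the matrix is an arbitrary 3×2 example):

        import numpy as np

        A = np.array([[1.0, 2.0],
                      [2.0, 4.0],
                      [0.0, 1.0]])
        A_plus = np.linalg.pinv(A)                     # the 2×3 pseudoinverse
        P_row, P_col = A_plus @ A, A @ A_plus          # projections onto row and column space
        print(np.allclose(P_row @ P_row, P_row))       # projections satisfy P² = P
        print(np.allclose(P_col @ P_col, P_col))
        print(np.linalg.matrix_rank(A_plus) == np.linalg.matrix_rank(A))   # True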

  • Random matrix rand(n) or randn(n).

    MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and drawn from the standard normal distribution for randn.
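
    The NumPy equivalents, for comparison (a sketch; the seed and size are arbitrary):

        import numpy as np

        n = 3
        rng = np.random.default_rng(0)
        U = rng.random((n, n))               # like rand(n): uniform on [0, 1)
        G = rng.standard_normal((n, n))      # like randn(n): standard normal entries
        print(U)
        print(G)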

  • Saddle point of f(x1, ..., xn).

    A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂xi∂xj = Hessian matrix) is indefinite.
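
    A concrete check for f(x, y) = x² − y², which has a saddle at the origin (an arbitrary illustrative function):

        import numpy as np

        # First derivatives (2x, −2y) vanish at the origin; the Hessian there is constant.
        H = np.array([[2.0, 0.0],
                      [0.0, -2.0]])
        evals = np.linalg.eigvalsh(H)
        print(evals)                              # [-2.  2.]
        print(evals.min() < 0 < evals.max())      # mixed signs ⇒ indefinite ⇒ saddle point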

  • Simplex method for linear programming.

    The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
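
    A tiny linear program solved with SciPy (SciPy's solver is used only as a stand-in; the cost vector and constraint are arbitrary):

        import numpy as np
        from scipy.optimize import linprog

        # Minimize c·x subject to Ax = b and x ≥ 0.
        c = np.array([1.0, 2.0, 0.0])
        A_eq = np.array([[1.0, 1.0, 1.0]])
        b_eq = np.array([4.0])
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        print(res.x, res.fun)       # optimum [0, 0, 4] with cost 0 sits at a corner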

  • Trace of A

    = sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
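
    A quick NumPy confirmation of both trace facts (the matrices are arbitrary random examples):

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.standard_normal((3, 3))
        B = rng.standard_normal((3, 3))
        print(np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real))   # trace = Σ eigenvalues
        print(np.isclose(np.trace(A @ B), np.trace(B @ A)))               # Tr AB = Tr BA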

  • Vector space V.

    Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.
