
Solutions for Chapter 3.3: Quadratic Functions and Their Properties


Full solutions for Precalculus Enhanced with Graphing Utilities | 6th Edition

ISBN: 9780132854351

Textbook: Precalculus Enhanced with Graphing Utilities
Edition: 6
Author: Michael Sullivan
ISBN: 9780132854351

Precalculus Enhanced with Graphing Utilities was written by Michael Sullivan and is associated with the ISBN 9780132854351. This expansive textbook survival guide covers the following chapters and their solutions. Chapter 3.3: Quadratic Functions and Their Properties includes 104 full step-by-step solutions. Since all 104 problems in Chapter 3.3 have been answered, more than 53,504 students have viewed full step-by-step solutions from this chapter. This textbook survival guide was created for the textbook Precalculus Enhanced with Graphing Utilities, edition 6.

Key Math Terms and definitions covered in this textbook
  • Affine transformation

    Tv = Av + v0 = linear transformation plus shift.

  • Change of basis matrix M.

    The old basis vectors vj are combinations Σ mij wi of the new basis vectors. The coordinates of c1v1 + ... + cnvn = d1w1 + ... + dnwn are related by d = Mc. (For n = 2, set v1 = m11w1 + m21w2 and v2 = m12w1 + m22w2.)
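    As a concrete illustration of d = Mc, here is a small Python/numpy check; the two bases and the coefficients are invented purely for the example, not taken from the text:

```python
import numpy as np

# New basis w1, w2 and old basis v1, v2, all written in standard coordinates.
w1, w2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
v1 = 2 * w1 + 3 * w2            # m11 = 2, m21 = 3
v2 = 1 * w1 - 1 * w2            # m12 = 1, m22 = -1
M = np.array([[2.0,  1.0],
              [3.0, -1.0]])     # column j holds the w-coordinates of vj

c = np.array([4.0, 5.0])        # old coordinates: x = c1*v1 + c2*v2
d = M @ c                       # new coordinates: x = d1*w1 + d2*w2

# Both coordinate vectors describe the same point.
assert np.allclose(c[0] * v1 + c[1] * v2, d[0] * w1 + d[1] * w2)
```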

  • Complete solution x = xp + xn to Ax = b.

    (Particular xp) + (xn in the nullspace).

  • Conjugate Gradient Method.

    A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T Ax − x^T b over growing Krylov subspaces.
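    A minimal Python sketch of the textbook-style conjugate gradient recursion, assuming a small symmetric positive definite system invented for illustration (not a production solver):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve Ax = b for symmetric positive definite A by minimizing
    (1/2) x^T A x - x^T b over growing Krylov subspaces."""
    x = np.zeros_like(b)
    r = b - A @ x                          # residual = negative gradient
    p = r.copy()                           # first search direction
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)         # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)   # keeps the new direction A-conjugate
        p = r_new + beta * p
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])     # symmetric positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
assert np.allclose(A @ x, b)
```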

  • Diagonalization

    Λ = S^-1 AS, where Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^-1.
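    A quick numpy check of Λ = S^-1 A S and A^k = S Λ^k S^-1, using an arbitrary 2 by 2 matrix chosen for illustration:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                       # eigenvalues 5 and 2
eigvals, S = np.linalg.eig(A)                    # columns of S are eigenvectors of A
Lam = np.diag(eigvals)                           # Λ = eigenvalue matrix
S_inv = np.linalg.inv(S)

assert np.allclose(S_inv @ A @ S, Lam)           # Λ = S^-1 A S
assert np.allclose(np.linalg.matrix_power(A, 3),
                   S @ np.linalg.matrix_power(Lam, 3) @ S_inv)   # A^3 = S Λ^3 S^-1
```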

  • Dimension of vector space

    dim(V) = number of vectors in any basis for V.

  • Dot product = Inner product x^T y = x1y1 + ... + xnyn.

    Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)ij = (row i of A) · (column j of B).
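    A short numpy check of the perpendicularity condition and the (AB)ij identity, with vectors and matrices chosen arbitrarily:

```python
import numpy as np

x = np.array([1.0, 2.0, -2.0])
y = np.array([2.0, 1.0,  2.0])
assert np.isclose(x @ y, 0.0)             # x^T y = 0: x and y are perpendicular

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
assert np.isclose((A @ B)[0, 1], A[0, :] @ B[:, 1])   # (AB)ij = (row i of A)·(column j of B)
```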

  • Elimination.

    A sequence of row operations that reduces A to an upper triangular U or to the reduced row echelon form R = rref(A). Then A = LU with the multipliers ℓij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
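    A minimal sketch of elimination without row exchanges that records the multipliers ℓij in L so that A = LU; the matrix is an illustrative example and the code assumes no zero pivot appears:

```python
import numpy as np

def lu_no_pivot(A):
    """Reduce A to upper triangular U, recording each multiplier l_ik in L,
    so that A = L @ U.  Assumes no zero pivot is encountered."""
    n = A.shape[0]
    U = A.astype(float)
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]          # multiplier l_ik
            U[i, k:] -= L[i, k] * U[k, k:]       # row i <- row i - l_ik * (row k)
    return L, U

A = np.array([[ 2.0,  1.0, 1.0],
              [ 4.0, -6.0, 0.0],
              [-2.0,  7.0, 2.0]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)
assert np.allclose(U, np.triu(U))                # U is upper triangular
```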

  • Ellipse (or ellipsoid) x^T Ax = 1.

    A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^-1 y||^2 = y^T (AA^T)^-1 y = 1 displayed by eigshow; axis lengths σi.)
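    For instance, the eigenvalues of a positive definite A give the semi-axis lengths 1/√λ directly (the matrix below is made up for illustration):

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])                # positive definite, eigenvalues 1 and 9
lam, eigvecs = np.linalg.eigh(A)          # eigenvalues in ascending order
axis_lengths = 1.0 / np.sqrt(lam)         # semi-axes of the ellipse x^T A x = 1
print(axis_lengths)                       # [1.0, 0.333...] along the eigenvector directions
```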

  • Hankel matrix H.

    Constant along each antidiagonal; hij depends on i + j.

  • Independent vectors v1, ..., vk.

    No combination c1v1 + ... + ckvk = zero vector unless all ci = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
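    One quick numerical test of independence is to compare the rank of the matrix whose columns are the vectors with the number of vectors; the vectors below are illustrative:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2                                                  # deliberately dependent

print(np.linalg.matrix_rank(np.column_stack([v1, v2])))       # 2 = number of vectors: independent
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))   # 2 < 3: dependent, so Ax = 0 has x != 0
```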

  • Linearly dependent v1, ..., vn.

    A combination other than all ci = 0 gives Σ ci vi = 0.

  • Nullspace matrix N.

    The columns of N are the n − r special solutions to As = 0.

  • Orthogonal matrix Q.

    Square matrix with orthonormal columns, so Q^T = Q^-1. Preserves length and angles: ||Qx|| = ||x|| and (Qx)^T (Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
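    A rotation matrix gives a quick numerical check of these properties (the angle is arbitrary):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])      # rotation: orthonormal columns

x = np.array([3.0, 4.0])
assert np.allclose(Q.T @ Q, np.eye(2))                         # Q^T = Q^-1
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))    # length preserved
assert np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0)          # every |λ| = 1
```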

  • Outer product uv^T

    = column times row = rank one matrix.

  • Particular solution xp.

    Any solution to Ax = b; often xp has free variables = 0.

  • Pseudoinverse A+ (Moore-Penrose inverse).

    The n by m matrix that "inverts" A from column space back to row space, with N(A+) = N(A^T). A+ A and A A+ are the projection matrices onto the row space and column space. rank(A+) = rank(A).
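    numpy's np.linalg.pinv computes A+; here is a short check of the projection and rank properties on a rank-one matrix invented for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])                # 3 by 2 with rank 1
A_plus = np.linalg.pinv(A)                # 2 by 3 Moore-Penrose pseudoinverse

P_row = A_plus @ A                        # projects onto the row space of A
P_col = A @ A_plus                        # projects onto the column space of A
assert np.allclose(P_row @ P_row, P_row)  # projections are idempotent
assert np.allclose(P_col @ P_col, P_col)
assert np.linalg.matrix_rank(A_plus) == np.linalg.matrix_rank(A)
```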

  • Rayleigh quotient q(x) = x^T Ax / x^T x for symmetric A: λmin ≤ q(x) ≤ λmax.

    Those extremes are reached at the eigenvectors x for λmin(A) and λmax(A).
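    A small numpy check that q(x) stays between λmin and λmax and reaches them at the eigenvectors, using an arbitrary symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                        # symmetric, eigenvalues 1 and 3

def q(x):
    return (x @ A @ x) / (x @ x)                  # Rayleigh quotient

lam, V = np.linalg.eigh(A)                        # ascending eigenvalues, orthonormal eigenvectors
assert np.isclose(q(V[:, 0]), lam[0])             # minimum reached at the first eigenvector
assert np.isclose(q(V[:, -1]), lam[-1])           # maximum reached at the last eigenvector

x = np.array([5.0, -1.0])                         # any other vector lands in between
assert lam[0] <= q(x) <= lam[-1]
```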

  • Simplex method for linear programming.

    The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
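    scipy.optimize.linprog solves small linear programs of this form (its default HiGHS solver is not necessarily the classical simplex method, but the optimum still lands at a corner); the tiny problem below is made up for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize cost c^T x subject to A_eq x = b_eq and x >= 0.
c = np.array([1.0, 2.0, 0.0])
A_eq = np.array([[1.0, 1.0, 1.0]])        # single constraint: x1 + x2 + x3 = 4
b_eq = np.array([4.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x, res.fun)                     # optimum at a corner: x = (0, 0, 4), cost 0
```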

  • Spanning set.

    Combinations of v1, ..., vm fill the space. The columns of A span C(A)!
