Solutions for Chapter 7.3: Square Root Functions and Inequalities

Textbook: California Algebra 2: Concepts, Skills, and Problem Solving
Edition: 1
Author: Berchie Holliday
ISBN: 9780078778568

This textbook survival guide was created for California Algebra 2: Concepts, Skills, and Problem Solving, 1st edition, written by Berchie Holliday (ISBN: 9780078778568). It covers the book's chapters and their solutions. Chapter 7.3: Square Root Functions and Inequalities alone includes 47 full step-by-step solutions, and more than 94,843 students have viewed full step-by-step solutions from this chapter.

Key math terms and definitions covered in this textbook
  • Associative Law (AB)C = A(BC).

    Parentheses can be removed to leave ABC.

  • Characteristic equation det(A − λI) = 0.

    The n roots are the eigenvalues of A.
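
    As a quick numerical check (a minimal NumPy sketch; the 2-by-2 symmetric matrix is an arbitrary example), the roots of det(A − λI) agree with the eigenvalues from a standard solver:

        import numpy as np

        A = np.array([[2.0, 1.0],
                      [1.0, 2.0]])           # arbitrary example matrix

        # For a 2x2 matrix, det(A - lambda*I) = lambda^2 - trace(A)*lambda + det(A)
        coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
        print(sorted(np.roots(coeffs)))      # roots: [1.0, 3.0]
        print(sorted(np.linalg.eigvals(A)))  # eigenvalues of A: [1.0, 3.0]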

  • Complete solution x = xₚ + xₙ to Ax = b.

    (Particular solution xₚ) + (any xₙ in the nullspace).
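
    A small NumPy sketch (the matrix and right side are invented examples): one particular solution plus any nullspace vector still solves Ax = b.

        import numpy as np

        A = np.array([[1.0, 2.0, 3.0],
                      [2.0, 4.0, 7.0]])      # rank 2, so the nullspace is a line
        b = np.array([6.0, 13.0])

        x_p, *_ = np.linalg.lstsq(A, b, rcond=None)  # a particular solution
        x_n = np.array([2.0, -1.0, 0.0])             # A @ x_n = 0 (nullspace)

        x = x_p + 5.0 * x_n             # complete solution, any multiple of x_n
        print(np.allclose(A @ x, b))    # True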

  • Complex conjugate

    z̄ = a − ib for any complex number z = a + ib. Then z z̄ = |z|².
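
    In Python's built-in complex arithmetic (3 + 4i is an arbitrary example):

        z = 3 + 4j
        print(z * z.conjugate())   # (25+0j), which is |z|^2
        print(abs(z) ** 2)         # 25.0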

  • Conjugate Gradient Method.

    A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing ½xᵀAx − xᵀb over growing Krylov subspaces.
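
    A minimal sketch of the iteration (not the book's code; it assumes A is symmetric positive definite):

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10):
            x = np.zeros(len(b))
            r = b - A @ x                        # residual = negative gradient
            p = r.copy()                         # first search direction
            for _ in range(len(b)):
                Ap = A @ p
                alpha = (r @ r) / (p @ Ap)       # exact line search along p
                x += alpha * p
                r_new = r - alpha * Ap
                if np.linalg.norm(r_new) < tol:
                    break
                beta = (r_new @ r_new) / (r @ r) # keep directions A-conjugate
                p = r_new + beta * p
                r = r_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(conjugate_gradient(A, b))          # matches np.linalg.solve(A, b)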

  • Cyclic shift

    Permutation S with S₂₁ = 1, S₃₂ = 1, ..., finally S₁ₙ = 1. Its eigenvalues are the nth roots e^(2πik/n) of 1; eigenvectors are columns of the Fourier matrix F.
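
    A NumPy check (n = 4 is an arbitrary choice) that the eigenvalues land on the nth roots of unity:

        import numpy as np

        n = 4
        S = np.roll(np.eye(n), 1, axis=0)   # S[1,0] = 1, ..., S[0,n-1] = 1
        print(np.sort_complex(np.linalg.eigvals(S)))
        print(np.sort_complex(np.exp(2j * np.pi * np.arange(n) / n)))
        # both print the same set: 1, i, -1, -i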

  • Echelon matrix U.

    The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

  • Ellipse (or ellipsoid) xᵀAx = 1.

    A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A⁻¹y||² = yᵀ(AAᵀ)⁻¹y = 1 displayed by eigshow; axis lengths σᵢ.)

  • Incidence matrix of a directed graph.

    The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries −1 and 1 in columns i and j.
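
    A small sketch (the 3-node directed graph is an invented example):

        import numpy as np

        edges = [(0, 1), (1, 2), (0, 2)]        # each edge: node i -> node j
        m, n = len(edges), 3
        A = np.zeros((m, n))
        for row, (i, j) in enumerate(edges):
            A[row, i] = -1                      # edge leaves node i
            A[row, j] = +1                      # edge enters node j
        print(A)                                # 3-by-3 incidence matrix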

  • Independent vectors v₁, ..., vₖ.

    No combination c₁v₁ + ... + cₖvₖ = zero vector unless all cᵢ = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
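
    One numerical test (a sketch): the columns are independent exactly when the rank equals the number of columns.

        import numpy as np

        A = np.array([[1.0, 0.0, 1.0],
                      [0.0, 1.0, 1.0],
                      [0.0, 0.0, 1.0]])       # candidate vectors as columns
        print(np.linalg.matrix_rank(A) == A.shape[1])   # True: independent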

  • Kirchhoff's Laws.

    Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

  • Linear combination cv + dw or Σ cⱼvⱼ.

    Vector addition and scalar multiplication.

  • Minimal polynomial of A.

    The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; always m(λ) divides p(λ).

  • Outer product uvᵀ

    = column times row = rank one matrix.
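
    In NumPy (u and v are arbitrary examples):

        import numpy as np

        u = np.array([1.0, 2.0, 3.0])
        v = np.array([4.0, 5.0])
        M = np.outer(u, v)                  # column u times row v
        print(M.shape)                      # (3, 2)
        print(np.linalg.matrix_rank(M))     # 1: every column is a multiple of u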

  • Pivot columns of A.

    Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
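
    SymPy reports the pivot columns directly (the matrix is a made-up example):

        from sympy import Matrix

        A = Matrix([[1, 2, 2, 4],
                    [1, 2, 3, 7]])
        rref_form, pivots = A.rref()        # reduced row echelon form
        print(pivots)                       # (0, 2): columns 0 and 2 hold pivots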

  • Plane (or hyperplane) in Rⁿ.

    Vectors x with aᵀx = 0. The plane is perpendicular to a ≠ 0.

  • Right inverse A⁺.

    If A has full row rank m, then A⁺ = Aᵀ(AAᵀ)⁻¹ has AA⁺ = Iₘ.
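
    A NumPy check (the 2-by-3 matrix is an arbitrary full-row-rank example):

        import numpy as np

        A = np.array([[1.0, 0.0, 1.0],
                      [0.0, 1.0, 1.0]])               # full row rank, m = 2
        A_plus = A.T @ np.linalg.inv(A @ A.T)         # A^T (A A^T)^(-1)
        print(np.allclose(A @ A_plus, np.eye(2)))     # True: A A+ = I_m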

  • Rotation matrix

    R = [c −s; s c] rotates the plane by θ, and R⁻¹ = Rᵀ rotates back by −θ. Eigenvalues are e^(iθ) and e^(−iθ); eigenvectors are (1, ±i). Here c, s = cos θ, sin θ.
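
    A quick NumPy check (θ = 60° is an arbitrary angle):

        import numpy as np

        theta = np.pi / 3
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s],
                      [s,  c]])
        print(np.allclose(np.linalg.inv(R), R.T))   # True: R^(-1) = R^T
        print(np.linalg.eigvals(R))                 # e^(i*theta), e^(-i*theta)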

  • Simplex method for linear programming.

    The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
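
    SciPy's linprog solves problems of exactly this form (a sketch with made-up data; in recent SciPy the default HiGHS backend includes a simplex implementation, and the default bounds keep x ≥ 0):

        from scipy.optimize import linprog

        c = [1.0, 2.0, 0.0]                      # cost vector to minimize
        A_eq, b_eq = [[1.0, 1.0, 1.0]], [4.0]    # constraint x1 + x2 + x3 = 4
        res = linprog(c, A_eq=A_eq, b_eq=b_eq)
        print(res.x)                             # optimum at a corner: [0, 0, 4]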

  • Triangle inequality ||u + v|| ≤ ||u|| + ||v||.

    For matrix norms, ||A + B|| ≤ ||A|| + ||B||.
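
    A numerical spot check (random matrices; the matrix 2-norm is the largest singular value):

        import numpy as np

        u, v = np.array([3.0, 0.0]), np.array([0.0, 4.0])
        print(np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v))

        A, B = np.random.randn(3, 3), np.random.randn(3, 3)
        lhs = np.linalg.norm(A + B, 2)
        rhs = np.linalg.norm(A, 2) + np.linalg.norm(B, 2)
        print(lhs <= rhs)                   # True for any A and B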