
Algebra 2, Student Edition (MERRILL ALGEBRA 2) 1st Edition - Solutions by Chapter


Full solutions for Algebra 2, Student Edition (MERRILL ALGEBRA 2) | 1st Edition

ISBN: 9780078738302

Textbook: Algebra 2, Student Edition (MERRILL ALGEBRA 2)
Edition: 1
Author: McGraw-Hill Education
ISBN: 9780078738302

Since problems from 115 chapters in Algebra 2, Student Edition (MERRILL ALGEBRA 2) have been answered, more than 194,250 students have viewed full step-by-step answers. This textbook is associated with the ISBN 9780078738302. This expansive textbook survival guide covers all 115 chapters and was created for Algebra 2, Student Edition (MERRILL ALGEBRA 2), edition 1. The full step-by-step solutions were answered by our top Math solution expert on 01/30/18, 04:22PM.

Key Math Terms and definitions covered in this textbook
  • Condition number

    cond(A) = c(A) = ||A|| ||A⁻¹|| = σ_max/σ_min. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to change in the input.
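
The bound above is easy to check numerically. A minimal sketch in pure Python (the matrix and perturbation are hypothetical; for a diagonal matrix the singular values are just the absolute diagonal entries):

```python
def solve_diag(d, b):
    """Solve Ax = b for diagonal A with diagonal entries d."""
    return [bi / di for di, bi in zip(d, b)]

def norm(v):
    return sum(x * x for x in v) ** 0.5

d = [100.0, 1.0]            # A = diag(100, 1), so cond(A) = 100/1 = 100
cond = max(abs(x) for x in d) / min(abs(x) for x in d)

b  = [100.0, 1.0]           # exact solution is x = [1, 1]
db = [0.0, 0.01]            # small perturbation of b

x  = solve_diag(d, b)
x2 = solve_diag(d, [bi + e for bi, e in zip(b, db)])
dx = [a - c for a, c in zip(x2, x)]

# Relative change in x is bounded by cond(A) times relative change in b.
assert norm(dx) / norm(x) <= cond * norm(db) / norm(b)
```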

  • Exponential e^(At) = I + At + (At)²/2! + ...

    has derivative Ae^(At); e^(At)u(0) solves u' = Au.
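
The series definition can be sketched directly. Here A is a hypothetical diagonal 2x2 matrix, so the truncated sum can be checked against the exact diag(e^t, e^(2t)):

```python
import math

# Truncated series e^(At) = I + At + (At)^2/2! + ... for a 2x2 matrix.
A = [[1.0, 0.0],
     [0.0, 2.0]]
t = 0.5

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

At = [[a * t for a in row] for row in A]
E = [[1.0, 0.0], [0.0, 1.0]]        # running sum, starts at I
term = [[1.0, 0.0], [0.0, 1.0]]     # current term (At)^k / k!
for k in range(1, 25):
    term = [[x / k for x in row] for row in matmul(term, At)]
    E = [[E[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# Diagonal A means e^(At) = diag(e^t, e^(2t)).
assert abs(E[0][0] - math.exp(t)) < 1e-9
assert abs(E[1][1] - math.exp(2 * t)) < 1e-9
```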

  • Hankel matrix H.

    Constant along each antidiagonal; h_ij depends on i + j.
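
A sketch of the construction, with a hypothetical sequence c feeding h_ij = c[i + j]:

```python
# Build a 4x4 Hankel matrix: entry H[i][j] = c[i + j] depends only on
# i + j, so each antidiagonal holds a single value.
c = list(range(7))                      # c[0..6] fills a 4x4 matrix
H = [[c[i + j] for j in range(4)] for i in range(4)]

# A Hankel matrix built this way is symmetric: i + j = j + i.
assert all(H[i][j] == H[j][i] for i in range(4) for j in range(4))
```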

  • Hypercube matrix P_L.

    Row n + 1 counts corners, edges, faces, ... of a cube in Rⁿ.

  • Incidence matrix of a directed graph.

    The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j.
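
For a hypothetical 3-node directed graph, the construction looks like:

```python
# Edge-node incidence matrix: one row per edge (i, j), with -1 in
# column i (tail) and +1 in column j (head).
edges = [(0, 1), (1, 2), (0, 2)]        # hypothetical directed graph
n = 3
A = []
for i, j in edges:
    row = [0] * n
    row[i], row[j] = -1, 1
    A.append(row)

# Each row sums to zero: the -1 and +1 of its two endpoints cancel.
assert all(sum(row) == 0 for row in A)
```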

  • Inverse matrix A⁻¹.

    Square matrix with A⁻¹A = I and AA⁻¹ = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and Aᵀ are B⁻¹A⁻¹ and (A⁻¹)ᵀ. Cofactor formula: (A⁻¹)_ij = C_ji / det A.
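
The cofactor formula is concrete for a 2x2 example (entries are hypothetical):

```python
# Cofactor formula (A^-1)_ij = C_ji / det(A) for a 2x2 matrix.
A = [[4.0, 7.0],
     [2.0, 6.0]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # det = 10, so A is invertible

# 2x2 cofactors, transposed (C_ji) and divided by det.
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]

# Check A^-1 A = I.
I = [[sum(Ainv[i][k] * A[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
assert all(abs(I[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```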

  • Jordan form J = M⁻¹AM.

    If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J₁, ..., J_s). The block J_k is λ_k I_k + N_k, where N_k has 1's on diagonal 1. Each block has one eigenvalue λ_k and one eigenvector.

  • |A⁻¹| = 1/|A| and |Aᵀ| = |A|.

    The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n - 1, volume of box = |det(A)|.

  • Least squares solution x̂.

    The vector x̂ that minimizes the error ||e||² solves AᵀAx̂ = Aᵀb. Then e = b - Ax̂ is orthogonal to all columns of A.
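
The normal equations can be worked through by hand for a small hypothetical fit y = c + d·t through three points:

```python
# Normal equations A^T A x = A^T b for fitting y = c + d*t to three points.
ts = [0.0, 1.0, 2.0]
b  = [6.0, 0.0, 0.0]
A  = [[1.0, t] for t in ts]

# Build A^T A (2x2) and A^T b (2-vector).
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 system by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
c = (Atb[0] * AtA[1][1] - Atb[1] * AtA[0][1]) / det
d = (AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0]) / det

# Error e = b - Ax is orthogonal to both columns of A.
e = [b[k] - (c + d * ts[k]) for k in range(3)]
assert abs(sum(e)) < 1e-12                              # column of 1's
assert abs(sum(ek * t for ek, t in zip(e, ts))) < 1e-12 # column of t's
print(c, d)   # → 5.0 -3.0
```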

  • Lucas numbers

    L_n = 2, 1, 3, 4, ... satisfy L_n = L_{n-1} + L_{n-2} = λ₁ⁿ + λ₂ⁿ, with λ₁, λ₂ = (1 ± √5)/2 from the Fibonacci matrix [[1, 1], [1, 0]]. Compare L₀ = 2 with F₀ = 0.
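
A quick check of the recurrence against the closed form (a sketch, not from the textbook):

```python
import math

# Lucas recurrence L_n = L_{n-1} + L_{n-2}, L_0 = 2, L_1 = 1, compared
# with the closed form L_n = lam1**n + lam2**n, lam = (1 ± sqrt(5))/2.
lam1 = (1 + math.sqrt(5)) / 2
lam2 = (1 - math.sqrt(5)) / 2

L = [2, 1]
for n in range(2, 12):
    L.append(L[-1] + L[-2])

assert all(abs(lam1**n + lam2**n - L[n]) < 1e-9 for n in range(12))
print(L[:6])   # → [2, 1, 3, 4, 7, 11]
```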

  • Norm ||A||.

    The "ℓ² norm" of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x||, ||AB|| ≤ ||A|| ||B||, and ||A + B|| ≤ ||A|| + ||B||. Frobenius norm: ||A||_F² = Σ Σ a_ij². The ℓ¹ and ℓ∞ norms are the largest column sum and largest row sum of |a_ij|.
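
The ℓ¹ and ℓ∞ norms reduce to column and row sums; a sketch with a hypothetical 2x2 matrix:

```python
# l1 norm = largest column sum of |a_ij|; l_inf norm = largest row sum;
# Frobenius norm squares every entry.
A = [[1.0, -2.0],
     [3.0,  4.0]]

l1   = max(sum(abs(A[i][j]) for i in range(2)) for j in range(2))
linf = max(sum(abs(aij) for aij in row) for row in A)
frob = sum(aij * aij for row in A for aij in row) ** 0.5

print(l1, linf)   # → 6.0 7.0   (frob is sqrt(30))
```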

  • Projection matrix P onto subspace S.

    Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P² = P = Pᵀ, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S, then P = A(AᵀA)⁻¹Aᵀ.
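
For a one-column A the formula collapses to P = aaᵀ/(aᵀa); a sketch with the hypothetical direction a = (1, 1, 1):

```python
# Projection onto the line through a in R^3: P = a a^T / (a^T a).
a = [1.0, 1.0, 1.0]
ata = sum(x * x for x in a)                 # a^T a = 3
P = [[ai * aj / ata for aj in a] for ai in a]

b = [1.0, 2.0, 6.0]
p = [sum(P[i][j] * b[j] for j in range(3)) for i in range(3)]   # closest point Pb
e = [b[i] - p[i] for i in range(3)]

# Error is perpendicular to the subspace (here, to a itself).
assert abs(sum(ei * ai for ei, ai in zip(e, a))) < 1e-12
```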

  • Pseudoinverse A⁺ (Moore-Penrose inverse).

    The n by m matrix that "inverts" A from column space back to row space, with N(A⁺) = N(Aᵀ). A⁺A and AA⁺ are the projection matrices onto the row space and column space. rank(A⁺) = rank(A).

  • Saddle point of f(x₁, ..., x_n).

    A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂x_i∂x_j = Hessian matrix) is indefinite.

  • Singular Value Decomposition

    (SVD) A = UΣVᵀ = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(Aᵀ), with Av_i = σ_i u_i and singular value σ_i > 0. The last columns are orthonormal bases of the nullspaces.

  • Skew-symmetric matrix K.

    The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.
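
For the 2x2 block K = [[0, -1], [1, 0]], the exponential e^(Kt) works out to a plane rotation, which is orthogonal; a quick check:

```python
import math

# e^(Kt) for K = [[0, -1], [1, 0]] is the rotation matrix
# [[cos t, -sin t], [sin t, cos t]] (since K^2 = -I).
t = 0.7
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

# Orthogonality: Q^T Q = I.
QtQ = [[sum(Q[k][i] * Q[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
assert all(abs(QtQ[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```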

  • Toeplitz matrix.

    Constant down each diagonal = time-invariant (shift-invariant) filter.

  • Triangle inequality ||u + v|| ≤ ||u|| + ||v||.

    For matrix norms, ||A + B|| ≤ ||A|| + ||B||.

  • Tridiagonal matrix T: t_ij = 0 if |i - j| > 1.

    T⁻¹ has rank 1 above and below the diagonal.

  • Volume of box.

    The rows (or the columns) of A generate a box with volume |det(A)|.
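
A 2x2 sketch: the parallelogram spanned by the rows of a hypothetical A has area |det(A)|:

```python
# Area of the parallelogram spanned by rows (3, 0) and (1, 2) in R^2.
A = [[3.0, 0.0],
     [1.0, 2.0]]
volume = abs(A[0][0] * A[1][1] - A[0][1] * A[1][0])
print(volume)   # → 6.0
```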