
Solutions for Chapter 13-5: Law of Cosines

Full solutions for Algebra 2, Student Edition (MERRILL ALGEBRA 2) | 1st Edition

Textbook: Algebra 2, Student Edition (MERRILL ALGEBRA 2)
Edition: 1
Author: McGraw-Hill Education
ISBN: 9780078738302

Chapter 13-5: Law of Cosines includes 50 full step-by-step solutions. This expansive textbook survival guide covers the textbook's chapters and their solutions. Algebra 2, Student Edition (MERRILL ALGEBRA 2) was written by McGraw-Hill Education and is associated with ISBN 9780078738302. All 50 problems in Chapter 13-5: Law of Cosines have been answered, and more than 60,204 students have viewed full step-by-step solutions from this chapter. This textbook survival guide was created for Algebra 2, Student Edition (MERRILL ALGEBRA 2), 1st Edition.
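
For reference, the Law of Cosines covered in this chapter relates the three sides of a triangle to any one of its angles: c^2 = a^2 + b^2 - 2ab cos C, where C is the angle opposite side c. The short Python sketch below illustrates the two typical tasks in such problems (finding a missing side and finding a missing angle); the helper names and the sample numbers are illustrative and are not taken from the chapter's exercises.

    import math

    # Law of Cosines: c^2 = a^2 + b^2 - 2*a*b*cos(C), with C the angle opposite side c
    def third_side(a, b, C_degrees):
        """Side opposite angle C, given the other two sides and the included angle."""
        C = math.radians(C_degrees)
        return math.sqrt(a ** 2 + b ** 2 - 2 * a * b * math.cos(C))

    def angle_opposite(a, b, c):
        """Angle (in degrees) opposite side c, given all three sides."""
        cos_C = (a ** 2 + b ** 2 - c ** 2) / (2 * a * b)
        return math.degrees(math.acos(cos_C))

    print(third_side(5, 7, 60))        # about 6.24
    print(angle_opposite(5, 7, 6.24))  # about 60 degrees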

Key Math Terms and definitions covered in this textbook
  • Big formula for n by n determinants.

    Det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A, with rows in order 1, ..., n and the column order given by a permutation P. Each of the n! P's has a + or - sign.
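
    As a quick check of the definition, the permutation sum can be computed directly for a small matrix and compared with a library determinant. A minimal Python/NumPy sketch (the helper name big_formula_det is illustrative, not from the textbook):

      import itertools
      import numpy as np

      def big_formula_det(A):
          # Sum over all n! permutations: sign(P) * a[0, P(0)] * ... * a[n-1, P(n-1)]
          n = A.shape[0]
          total = 0.0
          for perm in itertools.permutations(range(n)):
              # The +/- sign comes from the parity of the permutation (inversion count)
              inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
              term = -1.0 if inversions % 2 else 1.0
              for row, col in enumerate(perm):
                  term *= A[row, col]
              total += term
          return total

      A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]])
      print(big_formula_det(A))   # 18.0, from 3! = 6 terms
      print(np.linalg.det(A))     # same value, up to rounding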

  • Block matrix.

    A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
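
    A small NumPy check, assuming a 4 by 4 matrix cut into four 2 by 2 blocks (the random matrices are purely illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((4, 4))
      B = rng.standard_normal((4, 4))

      # Cut each matrix into four 2x2 blocks
      A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
      B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

      # Block multiplication follows the usual row-times-column rule, block by block
      AB_blocks = np.block([
          [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
          [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
      ])
      print(np.allclose(AB_blocks, A @ B))   # True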

  • Column space C(A) =

    space of all combinations of the columns of A.

  • Commuting matrices AB = BA.

    If diagonalizable, they share n eigenvectors.

  • Dot product = Inner product x^T y = x_1 y_1 + ... + x_n y_n.

    Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)_ij = (row i of A)^T (column j of B).
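
    A short NumPy illustration of these identities (the vectors and matrices are made up for the example):

      import numpy as np

      x = np.array([1.0, 2.0, 2.0])
      y = np.array([2.0, -1.0, 0.0])
      print(x @ y)              # x^T y = 1*2 + 2*(-1) + 2*0 = 0, so x and y are perpendicular

      # The complex dot product conjugates the first vector
      u = np.array([1 + 1j, 2j])
      v = np.array([3 + 0j, 1 - 1j])
      print(np.vdot(u, v))      # conjugate(u)^T v

      # (AB)_ij is row i of A dotted with column j of B
      A = np.array([[1.0, 2.0], [3.0, 4.0]])
      B = np.array([[5.0, 6.0], [7.0, 8.0]])
      print((A @ B)[1, 0], A[1, :] @ B[:, 0])   # both 43.0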

  • Fourier matrix F.

    Entries F_jk = e^{2πijk/n} give orthogonal columns: F^H F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform, y_j = Σ c_k e^{2πijk/n}.
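
    A NumPy sketch of these two identities; the size n = 8 and the comparison against NumPy's ifft (which divides by n) are choices made for the example, not part of the definition:

      import numpy as np

      n = 8
      j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
      F = np.exp(2j * np.pi * j * k / n)                   # F_jk = e^{2*pi*i*jk/n}

      # Orthogonal columns in the complex sense: F^H F = nI
      print(np.allclose(F.conj().T @ F, n * np.eye(n)))    # True

      # y = Fc matches NumPy's inverse DFT up to its 1/n normalization
      c = np.arange(n, dtype=float)
      print(np.allclose(F @ c, n * np.fft.ifft(c)))        # True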

  • Hermitian matrix A^H = Ā^T = A.

    Complex analog of a symmetric matrix: a_ji = ā_ij.

  • Hypercube matrix pl.

    Row n + 1 counts corners, edges, faces, ... of a cube in R^n.

  • Inverse matrix A^-1.

    Square matrix with A^-1 A = I and AA^-1 = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)_ij = C_ji / det A.
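
    A NumPy check of these identities on a 2 by 2 example (the matrices and the listed cofactors are made up for illustration):

      import numpy as np

      A = np.array([[4.0, 7.0], [2.0, 6.0]])
      A_inv = np.linalg.inv(A)
      print(np.allclose(A_inv @ A, np.eye(2)), np.allclose(A @ A_inv, np.eye(2)))   # True True

      # Inverses of a product and a transpose
      B = np.array([[1.0, 2.0], [3.0, 5.0]])
      print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ A_inv))   # True
      print(np.allclose(np.linalg.inv(A.T), A_inv.T))                      # True

      # Cofactor formula for this 2x2: (A^-1)_ij = C_ji / det A
      C = np.array([[6.0, -2.0], [-7.0, 4.0]])            # cofactors of A
      print(np.allclose(A_inv, C.T / np.linalg.det(A)))   # True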

  • |A^-1| = 1/|A| and |A^T| = |A|.

    The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n - 1, and volume of box = |det(A)|.
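
    A two-line NumPy check on an arbitrary invertible 2 by 2 matrix:

      import numpy as np

      A = np.array([[4.0, 1.0], [2.0, 3.0]])
      print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))   # True
      print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                    # True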

  • Linear transformation T.

    Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.

  • Markov matrix M.

    All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady state eigenvector Ms = s > 0.
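
    A small NumPy illustration with a made-up 2 by 2 Markov matrix whose steady state is s = (0.6, 0.4):

      import numpy as np

      M = np.array([[0.8, 0.3],
                    [0.2, 0.7]])
      print(M.sum(axis=0))                         # each column sums to 1

      eigvals, eigvecs = np.linalg.eig(M)
      print(eigvals)                               # largest eigenvalue is 1 (the other is 0.5)

      # Columns of M^k approach the steady state s with Ms = s
      s = eigvecs[:, np.argmax(eigvals.real)]
      s = s / s.sum()                              # scale so the entries sum to 1
      print(np.linalg.matrix_power(M, 50))         # both columns are close to s
      print(s)                                     # [0.6 0.4]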

  • Matrix multiplication AB.

    The i, j entry of AB is (row i of A) · (column j of B) = Σ a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
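
    The four equivalent ways of computing AB can be checked side by side in NumPy (the 2 by 2 matrices are made up for the example):

      import numpy as np

      A = np.array([[1.0, 2.0], [3.0, 4.0]])
      B = np.array([[5.0, 6.0], [7.0, 8.0]])
      AB = A @ B

      # Entry by entry: (AB)_ij = sum_k a_ik b_kj
      print(sum(A[1, k] * B[k, 0] for k in range(2)), AB[1, 0])   # 43.0 43.0

      # By columns: column j of AB = A times column j of B
      print(np.allclose(AB[:, 1], A @ B[:, 1]))                   # True

      # By rows: row i of AB = (row i of A) times B
      print(np.allclose(AB[0, :], A[0, :] @ B))                   # True

      # Columns times rows: AB = sum of (column k of A)(row k of B)
      print(np.allclose(AB, sum(np.outer(A[:, k], B[k, :]) for k in range(2))))   # True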

  • Multiplication Ax

    = x_1(column 1) + ... + x_n(column n) = combination of the columns of A.
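
    A one-line NumPy check that Ax is a combination of the columns (the matrix and vector are illustrative):

      import numpy as np

      A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
      x = np.array([10.0, -1.0])

      # Ax is x_1 times column 1 plus x_2 times column 2
      combination = x[0] * A[:, 0] + x[1] * A[:, 1]
      print(np.allclose(A @ x, combination))   # True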

  • Orthogonal subspaces.

    Every v in V is orthogonal to every w in W.

  • Orthonormal vectors q_1, ..., q_n.

    Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^-1 and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
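
    A NumPy sketch; the orthonormal columns here come from a QR factorization of a random matrix, which is just a convenient way to produce them for the check:

      import numpy as np

      # Orthonormal columns from a QR factorization of a random 3x3 matrix
      rng = np.random.default_rng(1)
      Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

      print(np.allclose(Q.T @ Q, np.eye(3)))        # True: Q^T Q = I
      print(np.allclose(Q.T, np.linalg.inv(Q)))     # True for square Q: Q^T = Q^-1

      # Expansion in the orthonormal basis: v = sum_j (v^T q_j) q_j
      v = np.array([1.0, 2.0, 3.0])
      print(np.allclose(v, sum((v @ Q[:, j]) * Q[:, j] for j in range(3))))   # True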

  • Particular solution x_p.

    Any solution to Ax = b; often x_p has free variables = 0.

  • Pseudoinverse A^+ (Moore-Penrose inverse).

    The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and AA^+ are the projection matrices onto the row space and column space. Rank(A^+) = rank(A).
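
    A NumPy sketch using numpy.linalg.pinv; the 4 by 3 rank-2 matrix is made up for illustration:

      import numpy as np

      # A 4x3 matrix of rank 2 (the third column is the sum of the first two)
      A = np.array([[1.0, 0.0, 1.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 1.0, 2.0],
                    [2.0, 1.0, 3.0]])
      A_plus = np.linalg.pinv(A)

      print(A_plus.shape)                                                    # (3, 4): n by m
      print(np.linalg.matrix_rank(A_plus) == np.linalg.matrix_rank(A))       # True

      # A^+ A and A A^+ are projection matrices (symmetric and idempotent)
      P_row, P_col = A_plus @ A, A @ A_plus
      print(np.allclose(P_row @ P_row, P_row), np.allclose(P_col @ P_col, P_col))   # True True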

  • Random matrix rand(n) or randn(n).

    MATLAB creates a matrix with random entries: uniformly distributed on [0, 1] for rand, and drawn from the standard normal distribution for randn.
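
    A NumPy equivalent of the two MATLAB calls (this is a translation for illustration, not the textbook's code):

      import numpy as np

      rng = np.random.default_rng()
      U = rng.uniform(0.0, 1.0, size=(3, 3))    # like MATLAB rand(3): uniform entries on [0, 1)
      N = rng.standard_normal(size=(3, 3))      # like MATLAB randn(3): standard normal entries
      print(U)
      print(N)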

  • Rotation matrix

    R = [c -s; s c] rotates the plane by θ, and R^-1 = R^T rotates back by -θ. Eigenvalues are e^{iθ} and e^{-iθ}, eigenvectors are (1, ±i). c, s = cos θ, sin θ.
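
    A NumPy check of these properties at the (arbitrary) angle θ = π/3:

      import numpy as np

      theta = np.pi / 3
      c, s = np.cos(theta), np.sin(theta)
      R = np.array([[c, -s],
                    [s,  c]])

      print(np.allclose(np.linalg.inv(R), R.T))    # True: R^-1 = R^T rotates back by -theta

      eigvals = np.linalg.eigvals(R)
      print(np.allclose(sorted(eigvals, key=np.imag),
                        [np.exp(-1j * theta), np.exp(1j * theta)]))   # True: e^{-i theta} and e^{i theta}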
