
# Solutions for Chapter 7.3: The Law of Cosines

## Full solutions for Trigonometry | 10th Edition

ISBN: 9780321671776


Trigonometry (10th edition) is associated with ISBN 9780321671776. Chapter 7.3: The Law of Cosines includes 80 full step-by-step solutions. This expansive textbook survival guide covers the book's chapters and their solutions; since all 80 problems in Chapter 7.3 have been answered, more than 35,645 students have viewed full step-by-step solutions from this chapter.

## Key Math Terms and Definitions Covered in This Textbook

• Affine transformation

$Tv = Av + v_0$ = linear transformation plus shift.
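The definition above can be sketched in a few lines of NumPy (the matrix and shift below are arbitrary illustrations, not from the text):

```python
import numpy as np

# Affine map T(v) = A v + v0: a linear transformation plus a shift.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
v0 = np.array([1.0, -1.0])

def T(v):
    return A @ v + v0

result = T(np.array([1.0, 1.0]))   # A @ [1, 1] = [2, 3], plus v0 gives [3, 2]
```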

• Cofactor $C_{ij}$.

Remove row $i$ and column $j$; multiply the determinant by $(-1)^{i+j}$.
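As a sanity check, the cofactor recipe is easy to code with NumPy (the 2×2 matrix is an assumed example, not from the text):

```python
import numpy as np

def cofactor(A, i, j):
    # Delete row i and column j, take the determinant, apply the sign (-1)^(i+j).
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
c00 = cofactor(A, 0, 0)   # minor is [[4]], sign +, so C_00 = 4
c01 = cofactor(A, 0, 1)   # minor is [[3]], sign -, so C_01 = -3
```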

• Condition number

cond$(A) = c(A) = \|A\|\,\|A^{-1}\| = \sigma_{\max}/\sigma_{\min}$. In $Ax = b$, the relative change $\|\delta x\|/\|x\|$ is less than cond$(A)$ times the relative change $\|\delta b\|/\|b\|$. Condition numbers measure the sensitivity of the output to change in the input.
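The two descriptions of cond(A), the norm product and the singular-value ratio, can be checked directly in NumPy (the ill-conditioned matrix below is an assumed example):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.001]])

sigma = np.linalg.svd(A, compute_uv=False)   # singular values [1, 0.001]
cond_manual = sigma.max() / sigma.min()      # sigma_max / sigma_min
cond_numpy = np.linalg.cond(A)               # 2-norm condition number by default
```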

• Covariance matrix $\Sigma$.

When random variables $x_i$ have mean = average value = 0, their covariances $\Sigma_{ij}$ are the averages of $x_i x_j$. With means $\bar{x}_i$, the matrix $\Sigma$ = mean of $(x - \bar{x})(x - \bar{x})^T$ is positive (semi)definite; $\Sigma$ is diagonal if the $x_i$ are independent.
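A small NumPy sketch of this construction (the random data below is assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 1000))             # 3 random variables, 1000 samples
Xc = X - X.mean(axis=1, keepdims=True)     # subtract the means x_bar
Sigma = (Xc @ Xc.T) / X.shape[1]           # mean of (x - x_bar)(x - x_bar)^T

eigvals = np.linalg.eigvalsh(Sigma)        # all >= 0: positive semidefinite
```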

• Diagonalization

$\Lambda = S^{-1}AS$. $\Lambda$ = eigenvalue matrix and $S$ = eigenvector matrix of $A$. $A$ must have $n$ independent eigenvectors to make $S$ invertible. All $A^k = S\Lambda^k S^{-1}$.
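The factorization can be verified numerically; NumPy's `eig` returns the eigenvector matrix $S$ directly (the 2×2 matrix below is an assumed example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # eigenvalues 5 and 2

eigvals, S = np.linalg.eig(A)              # columns of S are eigenvectors
Lam = np.diag(eigvals)                     # Lambda = S^{-1} A S

# Powers come for free: A^k = S Lambda^k S^{-1}
A3 = S @ np.linalg.matrix_power(Lam, 3) @ np.linalg.inv(S)
```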

• Echelon matrix U.

The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

• Exponential $e^{At} = I + At + (At)^2/2! + \cdots$

has derivative $Ae^{At}$; $e^{At}u(0)$ solves $u' = Au$.
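The series definition can be checked by truncating it; for the rotation generator used below (an assumed example), $e^{At}$ is exactly a rotation matrix:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])                # A^2 = -I, so e^{At} is a rotation
t = 1.0

# Partial sums of e^{At} = I + At + (At)^2/2! + ...
E = np.zeros((2, 2))
term = np.eye(2)
for k in range(1, 25):
    E = E + term
    term = term @ (A * t) / k              # next term (At)^k / k!

expected = np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])
```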

• Fourier matrix F.

Entries $F_{jk} = e^{2\pi i jk/n}$ give orthogonal columns: $\bar{F}^T F = nI$. Then $y = Fc$ is the (inverse) Discrete Fourier Transform $y_j = \sum c_k e^{2\pi i jk/n}$.
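The orthogonality claim $\bar{F}^T F = nI$ and the link to the FFT are easy to confirm, with $n = 8$ as an assumed size (note that NumPy's `ifft` carries an extra $1/n$ factor):

```python
import numpy as np

n = 8
idx = np.arange(n)
F = np.exp(2j * np.pi * np.outer(idx, idx) / n)   # F_jk = e^{2 pi i j k / n}

G = F.conj().T @ F                                # should equal n * I

c = np.arange(n, dtype=float)
y = F @ c                                         # y_j = sum_k c_k e^{2 pi i j k / n}
```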

• Inverse matrix $A^{-1}$.

Square matrix with $A^{-1}A = I$ and $AA^{-1} = I$. No inverse if $\det A = 0$ and rank$(A) < n$ and $Ax = 0$ for a nonzero vector $x$. The inverses of $AB$ and $A^T$ are $B^{-1}A^{-1}$ and $(A^{-1})^T$. Cofactor formula: $(A^{-1})_{ij} = C_{ji}/\det A$.
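The cofactor formula can be coded directly and compared against `np.linalg.inv` (the 2×2 matrix is an assumed example with $\det A = 1$):

```python
import numpy as np

def cofactor(M, i, j):
    # Delete row i and column j, take the determinant, apply the sign.
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])                 # det A = 1

n = A.shape[0]
C = np.array([[cofactor(A, i, j) for j in range(n)] for i in range(n)])
A_inv = C.T / np.linalg.det(A)             # (A^{-1})_ij = C_ji / det A
```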

• $|A^{-1}| = 1/|A|$ and $|A^T| = |A|$.

The big formula for $\det(A)$ has a sum of $n!$ terms; the cofactor formula uses determinants of size $n-1$; the volume of a box = $|\det(A)|$.
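Both determinant identities check out numerically (the matrix below is assumed for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

dA = np.linalg.det(A)                      # 2*3 - 1*1 = 5
d_inv = np.linalg.det(np.linalg.inv(A))    # should be 1 / |A|
d_T = np.linalg.det(A.T)                   # should equal |A|
```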

• Linearly dependent $v_1, \ldots, v_n$.

A combination other than all $c_i = 0$ gives $\sum c_i v_i = 0$.

• Markov matrix M.

All $m_{ij} \ge 0$ and each column sum is 1. The largest eigenvalue is $\lambda = 1$. If $m_{ij} > 0$, the columns of $M^k$ approach the steady-state eigenvector $Ms = s > 0$.
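Both facts, the eigenvalue $\lambda = 1$ and the convergence of the columns of $M^k$, show up numerically for a small assumed transition matrix:

```python
import numpy as np

M = np.array([[0.9, 0.2],
              [0.1, 0.8]])                 # columns sum to 1, all entries > 0

eigvals, V = np.linalg.eig(M)
k = np.argmax(eigvals.real)                # largest eigenvalue (should be 1)
s = V[:, k].real
s = s / s.sum()                            # steady state, scaled to sum to 1

Mk = np.linalg.matrix_power(M, 100)        # every column approaches s
```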

• Orthogonal subspaces.

Every v in V is orthogonal to every w in W.

• Pivot.

The diagonal entry (first nonzero) at the time when a row is used in elimination.

• Projection $p = a(a^T b / a^T a)$ onto the line through $a$.

$P = aa^T / a^T a$ has rank 1.
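Both formulas, the projected vector $p$ and the rank-1 matrix $P$, are one-liners in NumPy (the vectors are assumed for illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 0.0])

p = a * (a @ b) / (a @ a)                  # projection of b onto the line through a
P = np.outer(a, a) / (a @ a)               # projection matrix, rank 1
```

Note that $P$ is idempotent: projecting twice changes nothing, so $P^2 = P$.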

• Reflection matrix (Householder) $Q = I - 2uu^T$.

Unit vector $u$ is reflected to $Qu = -u$. All $x$ in the plane mirror $u^T x = 0$ have $Qx = x$. Notice $Q^T = Q^{-1} = Q$.
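The three stated properties ($Qu = -u$, $Qx = x$ on the mirror plane, and $Q^T = Q^{-1} = Q$) can all be asserted directly (the unit vector below is an assumed example):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])              # unit vector
Q = np.eye(3) - 2.0 * np.outer(u, u)       # Householder reflection

x = np.array([0.0, 2.0, 3.0])              # lies in the mirror plane u^T x = 0
```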

• Singular Value Decomposition

(SVD) $A = U\Sigma V^T$ = (orthogonal)(diagonal)(orthogonal). The first $r$ columns of $U$ and $V$ are orthonormal bases of $C(A)$ and $C(A^T)$, with $Av_i = \sigma_i u_i$ and singular values $\sigma_i > 0$. The last columns are orthonormal bases of the nullspaces.
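NumPy's `svd` exposes each piece of this statement; the rectangular matrix below is an assumed example of rank 2:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 0.0]])

U, sigma, Vt = np.linalg.svd(A)
r = int(np.sum(sigma > 1e-12))             # rank r = number of positive sigma_i

# A equals the sum of its first r singular triples
A_rebuilt = U[:, :r] @ np.diag(sigma[:r]) @ Vt[:r, :]
```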

• Trace of $A$.

Sum of diagonal entries = sum of the eigenvalues of $A$. $\operatorname{Tr}(AB) = \operatorname{Tr}(BA)$.
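Both equalities are quick to verify (the matrices are assumed for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[0.0, 1.0],
              [4.0, 2.0]])

tr_A = np.trace(A)                          # sum of diagonal entries: 2 + 3 = 5
eig_sum = np.linalg.eigvals(A).sum().real   # equals the trace
```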

• Triangle inequality $\|u + v\| \le \|u\| + \|v\|$.

For matrix norms, $\|A + B\| \le \|A\| + \|B\|$.
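Both the vector and matrix versions hold for any example; here is one assumed pair (NumPy's `norm` with `ord=2` gives the largest singular value for matrices):

```python
import numpy as np

u = np.array([3.0, 0.0])
v = np.array([0.0, 4.0])
lhs = np.linalg.norm(u + v)                     # ||u + v|| = 5
rhs = np.linalg.norm(u) + np.linalg.norm(v)     # 3 + 4 = 7

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
m_lhs = np.linalg.norm(A + B, 2)                # largest singular value of A + B
m_rhs = np.linalg.norm(A, 2) + np.linalg.norm(B, 2)
```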

• Vector space V.

Set of vectors such that all combinations $cv + dw$ remain within $V$. Eight required rules are given in Section 3.1 for scalars $c, d$ and vectors $v, w$.
