- 7.5.1: Find the value of each expression without using a calculator. Check...
- 7.5.2: Find the x-values that satisfy each statement. a. | x | = 10 b. | x...
- 7.5.3: Evaluate both sides of each statement to determine whether to repla...
- 7.5.4: Consider the functions f (x) = 3x 5 and g(x) = | x 3 |. Find each v...
- 7.5.5: Plot the function y = | x | (or Y1 = abs(x)). Use a friendly window...
- 7.5.6: Create this graph on graph paper: When x ≥ 0, graph the line y = x. W...
- 7.5.7: Solve this system of equations:
- 7.5.8: Solve each equation for x. a. | x | = 12 b. 10 = | x | + 4 c. 10 = ...
- 7.5.9: Write specific directions for the walk represented by this calculat...
- 7.5.10: The graph in Example B, part a, shows two solutions for x. a. Repla...
- 7.5.11: In 11ad, identify which function, f (x), g(x), or h(x), is used in ...
- 7.5.12: The solutions to the equation | x 4 | + 3 = 17 are 10 and 18. a. Ex...
- 7.5.13: APPLICATION The table shows the weights of fish caught by wildlife ...
- 7.5.14: Solve each system of equations using the method of your choice. For...
- 7.5.15: Solve each system of equations using the method of your choice. For...
- 7.5.16: Solve each inequality and graph the solution on a number line. a. 2...
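Exercises like 7.5.8 reduce to equations of the form |x − a| + b = c. A minimal Python sketch of the two-case reasoning; the helper name and the (a, b, c) parametrization are illustrative, not from the text:

```python
# Solve |x - a| + b = c for x. Hypothetical helper, not from the textbook:
# isolating gives |x - a| = c - b, which has 0, 1, or 2 solutions.
def solve_abs(a, b, c):
    r = c - b          # |x - a| = r
    if r < 0:
        return []      # an absolute value is never negative: no solution
    if r == 0:
        return [a]     # single solution x = a
    return [a - r, a + r]

print(solve_abs(0, 0, 12))   # |x| = 12       ->  [-12, 12]
print(solve_abs(0, 4, 10))   # |x| + 4 = 10   ->  [-6, 6]
```

The two answers are always symmetric about x = a, which matches the V-shaped graph of y = |x − a|.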
Solutions for Chapter 7.5: Defining the Absolute-Value Function
Full solutions for Discovering Algebra: An Investigative Approach | 2nd Edition
Big formula for n by n determinants.
Det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A: rows in order 1, …, n and column order given by a permutation P. Each of the n! P's has a + or − sign.
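The big formula can be written directly as a short pure-Python sketch (helper names are illustrative; practical codes use elimination, since n! terms grow far too fast):

```python
import math
from itertools import permutations

def perm_sign(p):
    # +1 for an even permutation, -1 for an odd one, by counting inversions.
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det(A):
    # Big formula: sum over all n! permutations P of
    # sign(P) * a[1, p1] * a[2, p2] * ... * a[n, pn].
    n = len(A)
    return sum(perm_sign(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```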
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x − x̄)(x − x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.
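A sketch with made-up sample data: form the sample covariance matrix and spot-check that the quadratic form v^T Σ v is never negative (it is an average of squares (v·x_k)²):

```python
# Made-up zero-mean samples of two random variables.
data = [[1.0, 2.0], [-1.0, 0.5], [0.5, -1.0], [-0.5, -1.5]]
n, d = len(data), len(data[0])

means = [sum(row[j] for row in data) / n for j in range(d)]
centered = [[row[j] - means[j] for j in range(d)] for row in data]

# Sigma_ij = average of x_i x_j over the centered samples.
sigma = [[sum(centered[k][i] * centered[k][j] for k in range(n)) / n
          for j in range(d)] for i in range(d)]

# Positive semidefinite: v^T Sigma v >= 0 along a few directions v.
for v in ([1, 0], [0, 1], [1, 1], [1, -2]):
    quad = sum(v[i] * sigma[i][j] * v[j] for i in range(d) for j in range(d))
    assert quad >= 0
print(sigma)
```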
Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
Free columns of A.
Columns without pivots; these are combinations of earlier columns.
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
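One visible symptom of the ill-conditioning: the determinants of hilb(n) collapse toward zero very fast. A sketch using exact fractions (the permutation-formula determinant is fine for these tiny sizes):

```python
from fractions import Fraction
from itertools import permutations

def hilb(n):
    # H[i][j] = 1/(i + j - 1) with 1-based i, j; exact fractions avoid roundoff.
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det(A):
    # Permutation expansion of the determinant; fine for tiny n.
    n = len(A)
    total = Fraction(0)
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = Fraction(1)
        for i in range(n):
            term *= A[i][p[i]]
        total += -term if inv % 2 else term
    return total

# Determinants shrink dramatically as n grows.
print(det(hilb(2)), det(hilb(3)), det(hilb(4)))
```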
Hypercube matrix pl.
Row n + 1 counts corners, edges, faces, ... of a cube in Rn.
Jordan form J = M⁻¹AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J₁, …, J_s). The block J_k is λ_k I_k + N_k where N_k has 1's on diagonal 1. Each block has one eigenvalue λ_k and one eigenvector.
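A minimal sketch of a single Jordan block (size and eigenvalue chosen for illustration): J = λI + N, where N has 1's on the superdiagonal, is nilpotent, and leaves the block with only one eigenvector:

```python
# One 3x3 Jordan block with eigenvalue lam = 2.
lam = 2
J = [[lam, 1, 0],
     [0, lam, 1],
     [0, 0, lam]]
# N = J - lam*I has 1's on diagonal 1 (the superdiagonal).
N = [[J[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

N2 = matmul(N, N)
N3 = matmul(N2, N)
print(N3)  # N is nilpotent: N^3 = 0

# (J - lam*I) e1 = 0, so e1 = (1, 0, 0) is the block's single eigenvector.
e1 = [1, 0, 0]
print([sum(N[i][j] * e1[j] for j in range(3)) for i in range(3)])
```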
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: potential differences (voltage drops) add to zero around any closed loop.
Kronecker product (tensor product) A ® B.
Blocks a_ij B, eigenvalues λ_p(A) λ_q(B).
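A sketch of the block layout; with diagonal A and B the eigenvalues sit on the diagonal of A ⊗ B, so the product rule λ_p(A) λ_q(B) is visible directly (the matrices are made up):

```python
def kron(A, B):
    # Block layout: entry (i, j) of A ® B comes from a_ij * B.
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

A = [[2, 0], [0, 3]]   # eigenvalues 2, 3
B = [[5, 0], [0, 7]]   # eigenvalues 5, 7
K = kron(A, B)
print([K[i][i] for i in range(4)])  # [10, 14, 15, 21]: all products
```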
Least squares solution x̂.
The vector x̂ that minimizes the error ‖e‖² solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
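A sketch fitting the closest line c + dt to three made-up points, solving the 2×2 normal equations exactly with fractions and checking the orthogonality of the error:

```python
from fractions import Fraction as F

# Fit c + d*t to the points (0,1), (1,2), (2,2): illustrative data.
A = [[F(1), F(0)], [F(1), F(1)], [F(1), F(2)]]
b = [F(1), F(2), F(2)]

# Normal equations A^T A xhat = A^T b, solved by Cramer's rule (2x2).
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]
d = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
xhat = [(Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / d,
        (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / d]
print(xhat[0], xhat[1])  # 7/6 1/2: best line is 7/6 + t/2

# The error e = b - A xhat is orthogonal to every column of A.
e = [b[k] - sum(A[k][j] * xhat[j] for j in range(2)) for k in range(3)]
for j in range(2):
    assert sum(A[k][j] * e[k] for k in range(3)) == 0
```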
Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; always m(λ) divides p(λ).
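A sketch with an illustrative 2×2 matrix whose eigenvalues 2 and 3 are distinct, so the minimal polynomial is p(λ) = (λ − 2)(λ − 3) = λ² − 5λ + 6, and p(A) is the zero matrix:

```python
A = [[2, 1], [0, 3]]  # triangular, so the eigenvalues 2 and 3 are visible

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# p(A) = A^2 - 5A + 6I should be the zero matrix.
A2 = matmul(A, A)
pA = [[A2[i][j] - 5 * A[i][j] + (6 if i == j else 0) for j in range(2)]
      for i in range(2)]
print(pA)  # [[0, 0], [0, 0]]
```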
Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P² = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S then P = A(A^T A)⁻¹ A^T.
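A rank-one sketch: projecting onto the line through a single made-up vector a, where P = A(A^T A)⁻¹A^T reduces to a a^T / (a^T a). Exact fractions make the identities P² = P = P^T and e ⊥ S checkable with equality:

```python
from fractions import Fraction as F

a = [F(1), F(2), F(2)]           # basis vector for the line S
aa = sum(x * x for x in a)       # a^T a = 9
P = [[a[i] * a[j] / aa for j in range(3)] for i in range(3)]

# P^2 = P = P^T.
P2 = [[sum(P[i][k] * P[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
assert P2 == P
assert all(P[i][j] == P[j][i] for i in range(3) for j in range(3))

# p = Pb is the closest point in S; e = b - p is perpendicular to a.
b = [F(3), F(0), F(0)]
p = [sum(P[i][j] * b[j] for j in range(3)) for i in range(3)]
e = [b[i] - p[i] for i in range(3)]
assert sum(e[i] * a[i] for i in range(3)) == 0
print(p)
```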
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
Skew-symmetric matrix K.
The transpose is −K, since K_ij = −K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^{Kt} is an orthogonal matrix.
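A concrete sketch: for K = [[0, −1], [1, 0]] (so K^T = −K), the exponential e^{Kt} is the plane rotation [[cos t, −sin t], [sin t, cos t]], and a numeric check confirms Q^T Q = I:

```python
import math

t = 0.7  # arbitrary sample time
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t), math.cos(t)]]  # e^{Kt} for K = [[0, -1], [1, 0]]

# Orthogonality: Q^T Q should be the identity (up to roundoff).
QtQ = [[sum(Q[k][i] * Q[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(QtQ[i][j] - (1 if i == j else 0)) < 1e-12
print("e^{Kt} is orthogonal")
```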
Stiffness matrix K.
If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A where C has spring constants from Hooke's Law and Ax = stretching.
Triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖.
For matrix norms, ‖A + B‖ ≤ ‖A‖ + ‖B‖.
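A quick numeric spot-check of the vector inequality on a few made-up pairs, including the equality case u = −v where both sides collapse differently:

```python
import math

def norm(v):
    # Euclidean length of a vector.
    return math.sqrt(sum(x * x for x in v))

for u, v in ([(3, 4), (1, -2)], [(1, 0, 0), (0, 1, 0)], [(2, 2), (-2, -2)]):
    s = [a + b for a, b in zip(u, v)]
    assert norm(s) <= norm(u) + norm(v) + 1e-12
print("triangle inequality holds for the samples")
```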
Unitary matrix U^H = U̅^T = U⁻¹.
Orthonormal columns (complex analog of Q).
Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.