- 1.6.1: Draw and label a coordinate plane so that the x-axis extends from 9...
- 1.6.2: Sketch a coordinate plane. Label the axes and each of the four quad...
- 1.6.3: Use your calculator to practice identifying coordinates. The progra...
- 1.6.4: This graph pictures a walker's distance from a stationary motion sen...
- 1.6.5: Look at this scatter plot. a. Name the (x, y) coordinates of each po...
- 1.6.6: Write a paragraph explaining how to make a calculator scatter plot,...
- 1.6.7: APPLICATION The graph below is created by connecting the points in ...
- 1.6.8: The data in this table show the average miles per gallon (mpg) for ...
- 1.6.9: The graph at right is a hexagon whose vertices are six ordered pair...
- 1.6.10: Xavier's dad braked suddenly to avoid hitting a squirrel as he drove...
- 1.6.11: Create a data set with the specified number of items and the five-n...
- 1.6.12: The table gives results for eighth-grade students in the 2003 Trend...
Solutions for Chapter 1.6: Two-Variable Data
Full solutions for Discovering Algebra: An Investigative Approach | 2nd Edition
Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
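As a concrete sketch of the a_ij = 1 rule (in Python with NumPy; the three-node graph here is a made-up example):

```python
import numpy as np

# Hypothetical directed graph on 3 nodes with edges 0->1, 1->2, 2->0.
edges = [(0, 1), (1, 2), (2, 0)]
n = 3

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1            # a_ij = 1: edge from node i to node j

# Undirected version: include every edge in both directions, so A = A^T.
A_und = A | A.T
print(np.array_equal(A_und, A_und.T))   # True
```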
Back substitution.
Upper triangular systems are solved in reverse order x_n to x_1.
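A minimal NumPy sketch of solving in that reverse order, from x_n back to x_1 (the system below is a hypothetical example):

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, working from x_n back to x_1."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract the already-known components, then divide by the pivot.
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([9.0, 13.0, 8.0])
x = back_substitute(U, b)
print(x)   # satisfies U @ x = b
```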
Cayley-Hamilton Theorem.
p(λ) = det(A - λI) has p(A) = zero matrix.
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T Ax - x^T b over growing Krylov subspaces.
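A bare-bones sketch of the iteration in Python/NumPy (the 2-by-2 system is a made-up example; a production solver would add a preconditioner):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Minimize (1/2) x^T A x - x^T b for symmetric positive definite A.
    The k-th iterate lies in the growing Krylov subspace K_k(A, b)."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(x)   # solves A @ x = b
```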
Exponential e^{At} = I + At + (At)^2/2! + ...
has derivative Ae^{At}; e^{At} u(0) solves u' = Au.
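A truncated-series sketch of e^{At} (the matrix A below is an assumption chosen so that e^{At} is a plane rotation; in practice one would use scipy.linalg.expm):

```python
import numpy as np

def exp_At(A, t, terms=30):
    """Truncate the series e^{At} = I + At + (At)^2/2! + ... after `terms` terms."""
    At = A * t
    S = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ At / k       # term = (At)^k / k!
        S = S + term
    return S

# u' = Au with this A has solution u(t) = e^{At} u(0), a rotation of u(0).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
u0 = np.array([1.0, 0.0])
u = exp_At(A, np.pi / 2) @ u0      # quarter-turn of u0
print(u)
```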
Hankel matrix H.
Constant along each antidiagonal; hij depends on i + j.
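For instance (a small hand-built helper as an illustration; SciPy users would reach for scipy.linalg.hankel):

```python
import numpy as np

def hankel(first_col, last_row):
    """Build a Hankel matrix: h_ij depends only on i + j,
    so each antidiagonal is constant."""
    vals = list(first_col) + list(last_row[1:])
    n, m = len(first_col), len(last_row)
    return np.array([[vals[i + j] for j in range(m)] for i in range(n)])

H = hankel([1, 2, 3], [3, 4, 5])
print(H)
# [[1 2 3]
#  [2 3 4]
#  [3 4 5]]
```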
Hermitian matrix A^H = Ā^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.
Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^{j-1}b. Numerical methods approximate A^{-1}b by x_j with residual b - Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
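A sketch of building that basis, one multiplication by A per step, then orthonormalizing it with QR to get a well-conditioned basis (the matrix is a hypothetical example):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Return an orthonormal basis for K_j(A, b) = span{b, Ab, ..., A^{j-1} b}."""
    cols = [b]
    for _ in range(j - 1):
        cols.append(A @ cols[-1])      # one multiplication by A per step
    K = np.column_stack(cols)
    Q, _ = np.linalg.qr(K)             # orthonormalize the raw basis
    return Q

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 0.0, 0.0])
Q = krylov_basis(A, b, 3)
print(Q.shape)   # (3, 3)
```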
Determinant |A| = det(A).
|A^{-1}| = 1/|A| and |A^T| = |A|. The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n - 1, volume of box = |det(A)|.
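These identities, and the n!-term big formula, can be checked numerically on a small example (the matrix is arbitrary):

```python
import numpy as np
from itertools import permutations

A = np.array([[2.0, 1.0], [1.0, 3.0]])
detA = np.linalg.det(A)

# |A^{-1}| = 1/|A| and |A^T| = |A|
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / detA))   # True
print(np.isclose(np.linalg.det(A.T), detA))                    # True

# Big formula: signed sum over all n! permutations.
n = A.shape[0]
big = 0.0
for p in permutations(range(n)):
    inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
    big += (-1) ** inversions * np.prod([A[i, p[i]] for i in range(n)])
print(np.isclose(big, detA))                                   # True
```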
Least squares solution x̂.
The vector x̂ that minimizes the error ‖e‖^2 solves A^T A x̂ = A^T b. Then e = b - A x̂ is orthogonal to all columns of A.
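A short check with made-up data, fitting a line y = c + d·t through three points that no single line passes through:

```python
import numpy as np

# Points (0, 6), (1, 0), (2, 0): Ax = b has no exact solution.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Normal equations A^T A x-hat = A^T b give the least squares solution.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
e = b - A @ x_hat
print(x_hat)     # best line: y = 5 - 3t
print(A.T @ e)   # ~0: the error e is orthogonal to every column of A
```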
Nullspace matrix N.
The columns of N are the n - r special solutions to As = 0.
Particular solution x_p.
Any solution to Ax = b; often x_p has free variables = 0.
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
Pseudoinverse A+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+A and AA^+ are the projection matrices onto the row space and column space. Rank(A^+) = rank(A).
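These properties can be verified with NumPy's pinv on a singular example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # rank 1, so A has no ordinary inverse
A_plus = np.linalg.pinv(A)

P_row = A_plus @ A                   # projection onto the row space
P_col = A @ A_plus                   # projection onto the column space
print(np.allclose(P_row @ P_row, P_row))   # True: P^2 = P
print(np.allclose(P_col @ P_col, P_col))   # True
print(np.linalg.matrix_rank(A_plus) == np.linalg.matrix_rank(A))  # True
```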
Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal distribution for randn.
Right inverse A+.
If A has full row rank m, then A^+ = A^T(AA^T)^{-1} has AA^+ = I_m.
Rotation matrix R.
R = [c -s; s c] rotates the plane by θ and R^{-1} = R^T rotates back by -θ. Eigenvalues are e^{iθ} and e^{-iθ}, eigenvectors are (1, ±i). c, s = cos θ, sin θ.
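Numerically, with an arbitrary angle:

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])

print(np.allclose(R.T @ R, np.eye(2)))         # True: R^{-1} = R^T
lam = np.linalg.eigvals(R)
# Eigenvalues e^{±i theta} = c ± i s: imaginary parts are ±sin(theta).
print(np.allclose(np.sort(lam.imag), [-s, s])) # True
```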
Stiffness matrix K.
If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A where C has spring constants from Hooke's Law and Ax = stretching.
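A tiny made-up example: two springs in a line with the top end fixed, where A maps node movements to spring stretchings and C holds the spring constants:

```python
import numpy as np

A = np.array([[ 1.0, 0.0],      # stretch of spring 1 = x1 (top end fixed)
              [-1.0, 1.0]])     # stretch of spring 2 = x2 - x1
C = np.diag([100.0, 50.0])      # Hooke's Law spring constants c1, c2

K = A.T @ C @ A                 # stiffness matrix K = A^T C A
x = np.array([0.1, 0.2])        # node movements
f = K @ x                       # internal forces
print(K)                        # symmetric (and positive definite here)
print(f)
```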
Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
Transpose matrix AT.
Entries (A^T)_ij = A_ji. A^T is n by m, A^T A is square, symmetric, positive semidefinite. The transposes of AB and A^{-1} are B^T A^T and (A^T)^{-1}.