
# Solutions for Chapter 1.6: Two-Variable Data

## Full solutions for Discovering Algebra: An Investigative Approach | 2nd Edition

ISBN: 9781559537636


Chapter 1.6: Two-Variable Data includes 12 full step-by-step solutions. This textbook survival guide covers the book's chapters and their solutions, and was created for Discovering Algebra: An Investigative Approach, 2nd edition, associated with ISBN 9781559537636. Since all 12 problems in Chapter 1.6: Two-Variable Data have been answered, more than 9,138 students have viewed full step-by-step solutions from this chapter.

## Key math terms and definitions covered in this textbook
• Adjacency matrix of a graph.

Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
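As a small illustration (not from the textbook), the adjacency matrix of a made-up undirected graph can be built and checked for symmetry:

```python
import numpy as np

# Hypothetical 3-node undirected graph with edges 0-1 and 1-2
edges = [(0, 1), (1, 2)]
A = np.zeros((3, 3), dtype=int)
for i, j in edges:
    A[i, j] = 1   # edge from node i to node j
    A[j, i] = 1   # undirected: edges go both ways, so A = A^T
```

Because every edge is entered in both directions, the resulting matrix equals its own transpose.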

• Back substitution.

Upper triangular systems are solved in reverse order, x_n to x_1.
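A minimal back-substitution sketch in plain Python (the function name and example system are invented for illustration): solve Ux = c from the last unknown back to the first.

```python
def back_substitute(U, c):
    """Solve Ux = c for upper triangular U, in reverse order x_n to x_1."""
    n = len(c)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                        # last unknown first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))  # already-known terms
        x[i] = (c[i] - s) / U[i][i]                       # pivot U[i][i] must be nonzero
    return x

U = [[1.0, 1.0, 1.0],
     [0.0, 2.0, 1.0],
     [0.0, 0.0, 3.0]]
c = [6.0, 5.0, 3.0]
x = back_substitute(U, c)   # x = [3.0, 2.0, 1.0]
```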

• Cayley-Hamilton Theorem.

p(λ) = det(A − λI) has p(A) = zero matrix.

• Conjugate gradient method.

A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T A x − x^T b over growing Krylov subspaces.
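A bare-bones conjugate gradient iteration in NumPy (a sketch, assuming a symmetric positive definite A; the example system is invented, and this is not the textbook's code):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=50):
    """Minimize (1/2) x^T A x - x^T b for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)       # exact step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # next A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

For an n-by-n system, exact arithmetic would finish in at most n steps; in floating point one simply iterates until the residual is small.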

• Exponential e^{At} = I + At + (At)^2/2! + ...

has derivative Ae^{At}; e^{At}u(0) solves u' = Au.
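The series can be summed numerically; a NumPy sketch (the truncation count and test matrix are arbitrary choices for illustration):

```python
import numpy as np

def expm_series(M, terms=30):
    """Approximate e^M by the truncated series I + M + M^2/2! + ..."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k      # term is now M^k / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])      # for this A, e^{At} is a rotation by angle t
t = np.pi / 2
E = expm_series(A * t)
```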

• Hankel matrix H.

Constant along each antidiagonal; h_ij depends on i + j.

• Hermitian matrix A^H = Ā^T = A.

Complex analog ā_ji = a_ij of a symmetric matrix.

• Krylov subspace K_j(A, b).

The subspace spanned by b, Ab, ..., A^{j-1}b. Numerical methods approximate A^{-1}b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
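Building the Krylov basis needs only one multiplication by A per new column; a NumPy sketch (matrix and vector are made up for illustration):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Columns b, Ab, ..., A^{j-1} b; one multiply by A per new column."""
    cols = [b]
    for _ in range(j - 1):
        cols.append(A @ cols[-1])   # next power of A applied to b
    return np.column_stack(cols)

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([0.0, 1.0])
K = krylov_basis(A, b, 2)   # columns b and Ab
```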

• |A^{-1}| = 1/|A| and |A^T| = |A|.

The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n − 1, volume of box = |det(A)|.

• Least squares solution x̂.

The vector x̂ that minimizes the error ‖e‖^2 solves A^T A x̂ = A^T b. Then e = b − Ax̂ is orthogonal to all columns of A.
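A quick NumPy check (the example data here is invented): solving the normal equations A^T A x̂ = A^T b leaves an error e orthogonal to every column of A.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])          # tall matrix: more equations than unknowns
b = np.array([1.0, 2.0, 2.0])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations
e = b - A @ x_hat                           # least squares error
```

The orthogonality condition A^T e = 0 is exactly the normal equations rearranged.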

• Nullspace matrix N.

The columns of N are the n − r special solutions to As = 0.

• Particular solution x_p.

Any solution to Ax = b; often x_p has free variables = 0.

• Pivot columns of A.

Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

• Pseudoinverse A^+ (Moore-Penrose inverse).

The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+A and AA^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).
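These projection properties can be verified numerically; a sketch using NumPy's built-in pinv on a made-up rank-1 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1
A_plus = np.linalg.pinv(A)

P_row = A_plus @ A   # projection onto the row space
P_col = A @ A_plus   # projection onto the column space
```

Both products are projections (P^2 = P), and the pseudoinverse has the same rank as A.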

• Random matrix rand(n) or randn(n).

MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal distribution for randn.

• Right inverse A^+.

If A has full row rank m, then A^+ = A^T(AA^T)^{-1} has AA^+ = I_m.
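A NumPy check with a made-up full-row-rank A (m = 2, n = 3):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])         # full row rank, m = 2
A_plus = A.T @ np.linalg.inv(A @ A.T)   # right inverse A^T (A A^T)^{-1}
```

Full row rank guarantees that the m-by-m matrix AA^T is invertible, so the formula is well defined.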

• Rotation matrix

R = [c −s; s c] rotates the plane by θ and R^{-1} = R^T rotates back by −θ. Eigenvalues are e^{iθ} and e^{−iθ}, eigenvectors are (1, ±i). c, s = cos θ, sin θ.
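Numerically, with an arbitrary angle, one can confirm R^{-1} = R^T and that the eigenvalues sit on the unit circle:

```python
import numpy as np

theta = 0.3                          # arbitrary angle for illustration
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])

eigvals = np.linalg.eigvals(R)       # e^{i theta} and e^{-i theta}
```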

• Stiffness matrix

If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A where C has spring constants from Hooke's Law and Ax = stretching.

• Toeplitz matrix.

Constant down each diagonal = time-invariant (shift-invariant) filter.
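A Toeplitz matrix is determined by one value per diagonal, since the (i, j) entry depends only on i − j; a small hand-rolled sketch (the diagonal values are arbitrary):

```python
import numpy as np

n = 3
diag_vals = {-2: 5.0, -1: 4.0, 0: 1.0, 1: 2.0, 2: 3.0}  # one value per diagonal i - j
T = np.array([[diag_vals[i - j] for j in range(n)] for i in range(n)])
```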

• Transpose matrix A^T.

Entries (A^T)_ij = A_ji. A^T is n by m, A^T A is square, symmetric, positive semidefinite. The transposes of AB and A^{-1} are B^T A^T and (A^T)^{-1}.
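These transpose rules are easy to confirm numerically (the random matrices here are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # A is m by n, so A^T is n by m
B = rng.standard_normal((2, 3))

G = A.T @ A                       # square, symmetric, positive semidefinite
```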
