- 6.6.1: Refer to Example 8. (a) Determine the matrix M in homogeneous form ...
- 6.6.2: Let the columns of 2 4 3 4 be the homogeneous form of the coordinat...
- 6.6.3: Let the columns of 2 4 3 4 ~] be the homogeneous form of the coordi...
- 6.6.4: Let the columns of 2 4 3 4 ~] be the homogeneous form of the coordi...
- 6.6.5: Let A be the 3 x 3 matrix in homogeneous form that reflects a plane...
- 6.6.6: Let A be the 3 x 3 matrix in homogeneous form that translates a pla...
- 6.6.7: Determine the matrix in homogeneous form that produced the image of...
- 6.6.8: Determine the matrix in homogeneous form that produced the image of...
- 6.6.9: Determine the matrix in homogeneous form that produced the image of...
- 6.6.10: Determine the matrix in homogeneous form that produced the image of...
- 6.6.11: Determine the matrix in homogeneous fonn that produced the image of...
- 6.6.12: Determine the matrix in homogeneous form that produced the image ...
- 6.6.13: The semicircle depicted as the original figure in Exercise 12 is to...
- 6.6.14: In calculus a surface of revolution is generated by rotating a curv...
Solutions for Chapter 6.6: Introduction to Homogeneous Coordinates (Optional)
Full solutions for Elementary Linear Algebra with Applications | 9th Edition
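The common device in Exercises 6.6.1–6.6.14 is writing a plane transformation as a 3 x 3 matrix in homogeneous form, acting on points written as (x, y, 1). A minimal sketch (the `translate` and `rotate` helper names and the sample numbers are illustrative, not taken from the exercises):

```python
import numpy as np

# Translation by (tx, ty) in homogeneous form: a 3x3 matrix acting on (x, y, 1).
def translate(tx, ty):
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

# Rotation by theta about the origin, embedded in homogeneous form.
def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Composite transformation: rotate 90 degrees, then translate by (2, 3).
M = translate(2, 3) @ rotate(np.pi / 2)

p = np.array([1.0, 0.0, 1.0])   # the point (1, 0) in homogeneous form
image = M @ p                   # the point (2, 4) in homogeneous form
```

The payoff of homogeneous form is visible here: translation, which is not a linear map on (x, y), becomes ordinary matrix multiplication on (x, y, 1), so composite transformations are just matrix products.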
Adjacency matrix of a graph.
Square matrix with aij = 1 when there is an edge from node i to node j; otherwise aij = 0. A = AT when edges go both ways (undirected).
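The definition can be checked directly in code; the small 4-node cycle below is an illustrative example, not from the text:

```python
import numpy as np

# Undirected graph on nodes 0..3 with edges (0,1), (1,2), (2,3), (3,0).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

A = np.zeros((4, 4), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1          # edges go both ways, so A ends up symmetric

symmetric = np.array_equal(A, A.T)   # True for an undirected graph
```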
Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.
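A quick numerical illustration (the particular A and B are assumptions for the example): if B is a polynomial in a symmetric matrix A, then AB = BA, and A's eigenvectors diagonalize B as well:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = A @ A - 3.0 * np.eye(2)    # B = A^2 - 3I commutes with A

commute = np.allclose(A @ B, B @ A)

# A's eigenvectors (columns of V) also diagonalize B.
_, V = np.linalg.eigh(A)
B_diag = V.T @ B @ V                       # should be (numerically) diagonal
off_diagonal = B_diag - np.diag(np.diag(B_diag))
```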
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j .
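Building the edge-node incidence matrix follows the definition row by row; the 3-node directed graph here is an illustrative assumption:

```python
import numpy as np

# Directed edges (from node i to node j) for a graph on nodes 0..2.
edges = [(0, 1), (1, 2), (0, 2)]

# m-by-n edge-node incidence matrix: -1 in column i, +1 in column j per edge.
m, n = len(edges), 3
E = np.zeros((m, n))
for row, (i, j) in enumerate(edges):
    E[row, i] = -1.0
    E[row, j] = 1.0

# Each row holds exactly one -1 and one +1, so every row sums to zero.
row_sums = E.sum(axis=1)
```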
Iterative method.
A sequence of steps intended to approach the desired solution.
Jordan form J = M-1AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J1, ..., Js). The block Jk is λkIk + Nk where Nk has 1's on diagonal 1. Each block has one eigenvalue λk and one eigenvector.
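A numerical sketch of the similarity J = M-1AM (the particular block and the matrix M are assumptions for the example): start from a 2 x 2 Jordan block, build a similar matrix A, and recover the block.

```python
import numpy as np

# A 2x2 Jordan block: eigenvalue 5, one eigenvector, a single 1 on diagonal 1.
J = np.array([[5.0, 1.0],
              [0.0, 5.0]])

# Build a matrix A similar to J via an invertible M, then recover
# the Jordan form as J = M^{-1} A M.
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])
A = M @ J @ np.linalg.inv(M)

J_recovered = np.linalg.inv(M) @ A @ M
```

A here is defective (a repeated eigenvalue 5 with only one eigenvector), which is exactly the case where the Jordan form, rather than plain diagonalization, is needed.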
Multiplier ℓij.
The pivot row j is multiplied by ℓij and subtracted from row i to eliminate the i, j entry: ℓij = (entry to eliminate) / (jth pivot).
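One elimination step, coded directly from the formula (the 2 x 2 matrix is an illustrative assumption):

```python
import numpy as np

# Clear the (1, 0) entry of A using pivot row 0.
A = np.array([[2.0, 1.0],
              [6.0, 8.0]])

i, j = 1, 0
l_ij = A[i, j] / A[j, j]     # multiplier = (entry to eliminate) / (jth pivot)
A[i, :] -= l_ij * A[j, :]    # subtract l_ij times pivot row j from row i
```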
Normal equation ATAx̂ = ATb.
Gives the least squares solution x̂ to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b - Ax̂) = 0.
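A small least-squares fit via the normal equation (the data points are an illustrative assumption); note how the residual comes out orthogonal to the columns of A:

```python
import numpy as np

# Overdetermined system: fit a line y = c0 + c1*t to three points.
t = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 5.0])
A = np.column_stack([np.ones_like(t), t])   # full column rank

# Least squares via the normal equation: (A^T A) xhat = A^T b.
xhat = np.linalg.solve(A.T @ A, A.T @ b)

# The residual b - A xhat is orthogonal to every column of A.
residual_dot = A.T @ (b - A @ xhat)
```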
Orthogonal subspaces.
Every v in V is orthogonal to every w in W.
Orthonormal vectors q1, ..., qn.
Dot products are qiTqj = 0 if i ≠ j and qiTqi = 1. The matrix Q with these orthonormal columns has QTQ = I. If m = n then QT = Q-1 and q1, ..., qn is an orthonormal basis for Rn: every v = Σ(vTqj)qj.
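Both properties can be checked numerically; here QR factorization supplies the orthonormal columns (the starting matrix X is an illustrative assumption):

```python
import numpy as np

# Orthonormalize the columns of a 3x3 matrix with QR; the columns of Q
# satisfy qi.T @ qj = 0 for i != j and qi.T @ qi = 1, so Q.T @ Q = I.
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q, _ = np.linalg.qr(X)

QtQ = Q.T @ Q                  # identity, up to rounding

# Since m = n, the columns are an orthonormal basis: v = sum (v.T qj) qj.
v = np.array([3.0, -1.0, 2.0])
expansion = sum((v @ Q[:, j]) * Q[:, j] for j in range(3))
```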
Row space C (AT) = all combinations of rows of A.
Column vectors by convention.
Saddle point of f(x1, ..., xn).
A point where the first derivatives of f are zero and the second derivative matrix (∂2f/∂xi∂xj = Hessian matrix) is indefinite.
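The standard example is f(x, y) = x² − y², whose saddle at the origin we can verify from the Hessian's eigenvalues (the function choice is an illustrative assumption):

```python
import numpy as np

# f(x, y) = x^2 - y^2 has gradient zero at the origin and a constant
# Hessian with one positive and one negative eigenvalue: indefinite.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

eigs = np.linalg.eigvalsh(H)
indefinite = eigs.min() < 0 < eigs.max()
```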
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
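The "minimum cost at a corner" fact can be illustrated by brute force: enumerate every basic feasible solution of a tiny problem and take the cheapest. This is a stand-in for the simplex path, not the simplex method itself, and the particular c, A, b are assumptions for the example:

```python
import numpy as np
from itertools import combinations

# Minimize c.T x subject to Ax = b, x >= 0, by checking every corner.
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])   # one equality constraint: x1 + x2 + x3 = 1
b = np.array([1.0])

best_cost, best_x = np.inf, None
for cols in combinations(range(3), 1):      # choose a basis of size m = 1
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue
    x = np.zeros(3)
    x[list(cols)] = np.linalg.solve(B, b)
    if (x >= 0).all():                      # feasible corner
        cost = c @ x
        if cost < best_cost:
            best_cost, best_x = cost, x
```

Simplex is efficient precisely because it never enumerates all corners as this sketch does; it walks edge by edge toward lower cost.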
Solvable system Ax = b.
The right side b is in the column space of A.
Stiffness matrix K.
If x gives the movements of the nodes, Kx gives the internal forces. K = ATCA where C has spring constants from Hooke's Law and Ax = stretching.
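A two-spring sketch of K = ATCA (the spring constants and displacements are assumptions for the example):

```python
import numpy as np

# Two springs in a line, fixed at the top: A maps node movements x
# to spring stretching Ax; C holds the Hooke's-law spring constants.
A = np.array([[ 1.0, 0.0],    # spring 1 stretches by x1
              [-1.0, 1.0]])   # spring 2 stretches by x2 - x1
C = np.diag([100.0, 50.0])

K = A.T @ C @ A               # stiffness matrix: internal forces = K x

x = np.array([0.01, 0.03])    # node movements
forces = K @ x
```

Note that K inherits symmetry from the sandwich ATCA, a structure the text's framework relies on.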
Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
Transpose matrix AT.
Entries (AT)ij = Aji. AT is n by m, ATA is square, symmetric, positive semidefinite. The transposes of AB and A-1 are BTAT and (AT)-1.
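These transpose rules are easy to spot-check on random matrices (the sizes and seed are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))

# (AB)^T = B^T A^T
rule_ab = np.allclose((A @ B).T, B.T @ A.T)

# A^T A is square, symmetric, and positive semidefinite.
G = A.T @ A
symmetric = np.allclose(G, G.T)
psd = np.linalg.eigvalsh(G).min() >= -1e-12
```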
Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms ||A + B|| ≤ ||A|| + ||B||.
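A numerical check of both forms, using the Euclidean vector norm and the matrix 2-norm (the sample vectors and matrices are assumptions for the example):

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([-1.0, 2.0])
# ||u + v|| <= ||u|| + ||v|| for vectors...
lhs_v = np.linalg.norm(u + v)
rhs_v = np.linalg.norm(u) + np.linalg.norm(v)

# ...and ||A + B|| <= ||A|| + ||B|| for matrix norms (here the 2-norm,
# i.e. the largest singular value).
A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
lhs_m = np.linalg.norm(A + B, 2)
rhs_m = np.linalg.norm(A, 2) + np.linalg.norm(B, 2)
```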
Vector addition.
v + w = (v1 + w1, ..., vn + wn) = diagonal of parallelogram.
Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.
Vector v in Rn.
Sequence of n real numbers v = (v1, ..., vn) = point in Rn.