 4.9.1: Use matrix multiplication to find the reflection of (1, 2) about th...
 4.9.2: Use matrix multiplication to find the reflection of (a, b) about the...
 4.9.3: Use matrix multiplication to find the reflection of (2, 5, 3) about...
 4.9.4: Use matrix multiplication to find the reflection of (a, b, c) about...
 4.9.5: Use matrix multiplication to find the orthogonal projection of (2, ...
 4.9.6: Use matrix multiplication to find the orthogonal projection of (a, ...
 4.9.7: Use matrix multiplication to find the orthogonal projection of (2, ...
 4.9.8: Use matrix multiplication to find the orthogonal projection of (a, ...
 4.9.9: Use matrix multiplication to find the image of the vector (3, 4) wh...
 4.9.10: Use matrix multiplication to find the image of the nonzero vector v...
 4.9.11: Use matrix multiplication to find the image of the vector (2, 1, 2)...
 4.9.12: Use matrix multiplication to find the image of the vector (2, 1, 2)...
 4.9.13: (a) Use matrix multiplication to find the contraction of (1, 2) wit...
 4.9.14: (a) Use matrix multiplication to find the contraction of (a, b) wit...
 4.9.15: (a) Use matrix multiplication to find the contraction of (2, 1, 3) ...
 4.9.16: (a) Use matrix multiplication to find the contraction of (a, b, c) ...
 4.9.17: (a) Use matrix multiplication to find the compression of (1, 2) in ...
 4.9.18: (a) Use matrix multiplication to find the expansion of (1, 2) in the...
 4.9.19: (a) Use matrix multiplication to find the compression of (a, b) in t...
 4.9.20: Based on Table 9, make a conjecture about the standard matrices for...
 4.9.21: Exercises 21–22: Using Example 2 as a model, describe the matrix oper...
 4.9.22: Exercises 21–22: Using Example 2 as a model, describe the matrix oper...
 4.9.23: In each part of Exercises 23–24, the effect of some matrix operator ...
 4.9.24: In each part of Exercises 23–24, the effect of some matrix operator ...
 4.9.25: In Exercises 25–26, find the standard matrix for the orthogonal proj...
 4.9.26: In Exercises 25–26, find the standard matrix for the orthogonal proj...
 4.9.27: In Exercises 27–28, find the standard matrix for the reflection of R...
 4.9.28: In Exercises 27–28, find the standard matrix for the reflection of R...
 4.9.29: For each reflection operator in Table 2 use the standard matrix to ...
 4.9.30: For each orthogonal projection operator in Table 4 use the standard...
 4.9.31: Find the standard matrix for the operator T : R3 → R3 that (a) rotate...
 4.9.32: In each part of the accompanying figure, find the standard matrix f...
 4.9.33: Use Formula (3) to find the standard matrix for a rotation of 180° a...
 4.9.34: Use Formula (3) to find the standard matrix for a rotation of π/2 ra...
 4.9.35: Use Formula (3) to derive the standard matrices for the rotations a...
 4.9.36: Show that the standard matrices listed in Tables 1 and 3 are specia...
 4.9.37: In a sentence, describe the geometric effect of multiplying a vecto...
 4.9.38: If multiplication by A rotates a vector x in the xy-plane through a
 4.9.39: Let x0 be a nonzero column vector in R2, and suppose that T : R2 → R2
 4.9.40: In R3 the orthogonal projections onto the x-axis, y-axis, and z-axis
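Every exercise above reduces to the same computation: apply a standard matrix to a column vector. As a minimal sketch (the specific lines, planes, and angles in the truncated statements above are not reproduced here; the vectors and the angle below are illustrative assumptions), here is how the standard matrices for reflection about the x-axis, orthogonal projection onto the x-axis, and rotation through an angle θ act in R2:

```python
import math

def mat_vec(A, x):
    """Multiply a 2x2 matrix A by a vector x in R^2."""
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

# Standard matrices of the basic operators on R^2:
reflect_x = [[1, 0], [0, -1]]   # reflection about the x-axis
project_x = [[1, 0], [0, 0]]    # orthogonal projection onto the x-axis
theta = math.pi / 2             # illustrative angle
rotate = [[math.cos(theta), -math.sin(theta)],
          [math.sin(theta),  math.cos(theta)]]  # rotation through theta

print(mat_vec(reflect_x, [1, 2]))   # -> [1, -2]
print(mat_vec(project_x, [2, 5]))   # -> [2, 0]
print(mat_vec(rotate, [3, 4]))      # approximately (-4, 3)
```

The rotation result carries tiny floating-point error, since cos(π/2) is not exactly zero in binary arithmetic.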
Solutions for Chapter 4.9: Basic Matrix Transformations in R2 and R3
Full solutions for Elementary Linear Algebra, Binder Ready Version: Applications Version, 11th Edition
ISBN: 9781118474228
Chapter 4.9: Basic Matrix Transformations in R2 and R3 includes 40 full step-by-step solutions.

Adjacency matrix of a graph.
Square matrix with aij = 1 when there is an edge from node i to node j; otherwise aij = 0. A = Aᵀ when edges go both ways (undirected).
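A minimal sketch of this definition, using a hypothetical undirected graph on three nodes with edges {0,1} and {1,2}:

```python
# Hypothetical undirected graph: nodes 0, 1, 2; edges {0,1} and {1,2}.
edges = [(0, 1), (1, 2)]
n = 3
A = [[0] * n for _ in range(n)]   # a_ij = 0 unless an edge exists
for i, j in edges:
    A[i][j] = 1
    A[j][i] = 1   # undirected: edges go both ways, so A = A^T

print(A)  # -> [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
# Symmetry check A = A^T:
assert all(A[i][j] == A[j][i] for i in range(n) for j in range(n))
```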

Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1v1 + ... + cdvd. A vector space has many bases; each basis gives unique c's.

Cayley-Hamilton Theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.
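For a 2x2 matrix with trace t and determinant d, the characteristic polynomial is p(λ) = λ² − tλ + d, so the theorem says A² − tA + dI is the zero matrix. A quick numerical check on a hypothetical example matrix:

```python
A = [[2, 1], [3, 4]]   # hypothetical example matrix
t = A[0][0] + A[1][1]                     # trace = 6
d = A[0][0]*A[1][1] - A[0][1]*A[1][0]     # determinant = 5

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)
I = [[1, 0], [0, 1]]
# p(A) = A^2 - t*A + d*I should be the zero matrix:
pA = [[A2[i][j] - t*A[i][j] + d*I[i][j] for j in range(2)] for i in range(2)]
print(pA)  # -> [[0, 0], [0, 0]]
```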

Column space C(A) =
space of all combinations of the columns of A.

Cross product u × v in R3:
Vector perpendicular to u and v, length ‖u‖‖v‖|sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
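A sketch of the cofactor expansion of that symbolic determinant, checked for perpendicularity on the standard basis vectors:

```python
def cross(u, v):
    """u x v via cofactor expansion of det([i j k; u1 u2 u3; v1 v2 v3])."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

u, v = [1, 0, 0], [0, 1, 0]
w = cross(u, v)
print(w)  # -> [0, 0, 1]
# w is perpendicular to both u and v (zero dot products):
assert sum(a*b for a, b in zip(w, u)) == 0
assert sum(a*b for a, b in zip(w, v)) == 0
```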

Factorization
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
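A hand-rolled sketch of the idea on a hypothetical 2x2 matrix that needs no row exchanges: eliminate below the pivot, record the multiplier in L, and verify that LU reproduces A.

```python
# Hypothetical matrix; one elimination step with multiplier l21.
A = [[2, 3],
     [4, 11]]
l21 = A[1][0] / A[0][0]          # multiplier used in elimination: 2.0
U = [[2, 3],
     [A[1][0] - l21*A[0][0],     # row2 - l21*row1 -> [0, 5]
      A[1][1] - l21*A[0][1]]]
L = [[1,   0],
     [l21, 1]]                   # lower triangular, ones on the diagonal
# L times U brings U back to A:
LU = [[sum(L[i][k]*U[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(LU == A)  # -> True
```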

Free columns of A.
Columns without pivots; these are combinations of earlier columns.

Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^(j-1)b. Numerical methods approximate A⁻¹b by xj with residual b − Axj in this subspace. A good basis for Kj requires only multiplication by A at each step.
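A minimal sketch of building the spanning vectors b, Ab, A²b for K3(A, b), using only matrix-vector products (A and b below are hypothetical):

```python
def matvec(A, x):
    """Matrix-vector product, the only operation a Krylov method needs."""
    return [sum(a*xi for a, xi in zip(row, x)) for row in A]

A = [[2, 1], [1, 3]]   # hypothetical matrix
b = [1, 0]
# Spanning vectors of K_3(A, b): b, Ab, A^2 b
basis = [b]
for _ in range(2):
    basis.append(matvec(A, basis[-1]))
print(basis)  # -> [[1, 0], [2, 1], [5, 5]]
```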

Left inverse A+.
If A has full column rank n, then A⁺ = (AᵀA)⁻¹Aᵀ has A⁺A = In.
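A sketch of this formula on a hypothetical 3x2 matrix of full column rank, inverting the 2x2 matrix AᵀA directly and checking A⁺A = I2:

```python
# Hypothetical 3x2 matrix with full column rank (n = 2):
A = [[1, 0],
     [0, 1],
     [1, 1]]
At = [[A[j][i] for j in range(3)] for i in range(2)]            # A^T (2x3)
AtA = [[sum(At[i][k]*A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]                                       # A^T A (2x2)
a, b, c, d = AtA[0][0], AtA[0][1], AtA[1][0], AtA[1][1]
det = a*d - b*c
inv = [[d/det, -b/det], [-c/det, a/det]]                        # (A^T A)^-1
Aplus = [[sum(inv[i][k]*At[k][j] for k in range(2)) for j in range(3)]
         for i in range(2)]                                     # A^+ = (A^T A)^-1 A^T
# Left-inverse property A^+ A = I_2 (up to floating-point error):
AplusA = [[sum(Aplus[i][k]*A[k][j] for k in range(3)) for j in range(2)]
          for i in range(2)]
print(AplusA)
```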

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; m(λ) always divides p(λ).

Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.

Plane (or hyperplane) in Rn.
Vectors x with aᵀx = 0. The plane is perpendicular to a ≠ 0.

Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
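A sketch of Gauss-Jordan elimination producing R, using exact rational arithmetic so the pivots come out to exactly 1 (the matrix A below is a hypothetical example):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form by Gauss-Jordan elimination."""
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(R), len(R[0])
    pivot_row = 0
    for col in range(cols):
        # Find a nonzero entry at or below pivot_row in this column.
        pr = next((r for r in range(pivot_row, rows) if R[r][col] != 0), None)
        if pr is None:
            continue
        R[pivot_row], R[pr] = R[pr], R[pivot_row]
        piv = R[pivot_row][col]
        R[pivot_row] = [x / piv for x in R[pivot_row]]   # make the pivot 1
        for r in range(rows):
            if r != pivot_row and R[r][col] != 0:        # zeros above and below
                factor = R[r][col]
                R[r] = [x - factor*y for x, y in zip(R[r], R[pivot_row])]
        pivot_row += 1
    return R

A = [[1, 2, 3],
     [2, 4, 7]]
print(rref(A) == [[1, 2, 0], [0, 0, 1]])  # -> True (Fractions compare equal to ints)
```

The two nonzero rows of the result are a basis for the row space of A, as the definition states.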

Right inverse A+.
If A has full row rank m, then A⁺ = Aᵀ(AAᵀ)⁻¹ has AA⁺ = Im.

Row picture of Ax = b.
Each equation gives a plane in Rn; the planes intersect at x.

Row space C(Aᵀ) = all combinations of rows of A.
Column vectors by convention.

Symmetric matrix A.
The transpose is Aᵀ = A, and aij = aji. A⁻¹ is also symmetric.

Transpose matrix AT.
Entries (Aᵀ)ij = Aji. Aᵀ is n by m, AᵀA is square, symmetric, positive semidefinite. The transposes of AB and A⁻¹ are BᵀAᵀ and (A⁻¹)ᵀ.

Unitary matrix Uᴴ = Ūᵀ = U⁻¹.
Orthonormal columns (complex analog of Q).

Vector addition.
v + w = (v1 + w1, ..., vn + wn) = diagonal of parallelogram.