 4.11.1: Use the method of Example 1 to find an equation for the image of th...
 4.11.2: Use the method of Example 1 to find an equation for the image of th...
 4.11.3: In Exercises 3–4, find an equation for the image of the line y = 2x ...
 4.11.4: In Exercises 3–4, find an equation for the image of the line y = 2x ...
 4.11.5: In Exercises 5–6, sketch the image of the unit square under multipli...
 4.11.6: In Exercises 5–6, sketch the image of the unit square under multipli...
 4.11.7: In each part of Exercises 7–8, find the standard matrix for a single...
 4.11.8: (a) Reflects about the y-axis, then expands by a factor of 5 in the...
 4.11.9: (a) A reflection about the x-axis and a compression in the x-direct...
 4.11.10: (a) A shear in the y-direction by a factor 1/4 and a shear in the y...
 4.11.11: In Exercises 11–14, express the matrix as a product of elementary ma...
 4.11.12: In Exercises 11–14, express the matrix as a product of elementary ma...
 4.11.13: In Exercises 11–14, express the matrix as a product of elementary ma...
 4.11.14: In Exercises 11–14, express the matrix as a product of elementary ma...
 4.11.15: In each part of Exercises 15–16, describe, in words, the effect on t...
 4.11.16: In each part of Exercises 15–16, describe, in words, the effect on t...
 4.11.17: (a) Show that multiplication by A = [3 1; 6 2] maps each point in the ...
 4.11.18: Find the matrix for a shear in the x-direction that transforms the ...
 4.11.19: In accordance with part (c) of Theorem 4.11.1, show that multiplica...
 4.11.20: Draw a figure that shows the image of the triangle with vertices (0...
 4.11.21: (a) Draw a figure that shows the image of the triangle with vertice...
 4.11.22: Find the endpoints of the line segment that results when the line s...
 4.11.23: Draw a figure showing the italicized letter T that results when the...
 4.11.24: Can an invertible matrix operator on R2 map a square region into a ...
 4.11.25: Find the image of the triangle with vertices (0, 0), (1, 1), (2, 0) ...
 4.11.26: In R3 the shear in the xy-direction by a factor k is the matrix tra...
 4.11.27: Prove part (a) of Theorem 4.11.1. [Hint: A line in the plane has an...
 4.11.28: Use the hint in Exercise 27 to prove parts (b) and (c) of Theorem 4...
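
The shears, reflections, and expansions in these exercises are all matrix operators on R2. A minimal NumPy sketch of one such operator (a shear in the x-direction with an illustrative factor k = 2; the vertices are my own example, not taken from any exercise):

```python
import numpy as np

# Shear in the x-direction by factor k: (x, y) -> (x + k*y, y).
k = 2.0
S = np.array([[1.0, k],
              [0.0, 1.0]])

# Columns are the vertices of the unit square: (0,0), (1,0), (1,1), (0,1).
square = np.array([[0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])
image = S @ square

# Points on the x-axis are fixed; the vertex (1, 1) slides to (1 + k, 1).
assert np.allclose(image[:, :2], square[:, :2])
assert np.allclose(image[:, 2], [1.0 + k, 1.0])
```

The same pattern — build the 2×2 standard matrix, multiply the vertex columns — applies to the reflections, compressions, and expansions above.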
Solutions for Chapter 4.11: Geometry of Matrix Operators on R2
Full solutions for Elementary Linear Algebra, Binder Ready Version: Applications Version, 11th Edition
ISBN: 9781118474228

Characteristic equation det(A - λI) = 0.
The n roots are the eigenvalues of A.
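
A quick NumPy check of this definition (the 2×2 matrix is an illustrative example):

```python
import numpy as np

# The eigenvalues of A are the roots of det(A - lambda*I) = 0.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
coeffs = np.poly(A)                   # coefficients of the characteristic polynomial
roots = np.sort(np.roots(coeffs))     # its n roots ...
eigs = np.sort(np.linalg.eigvals(A))  # ... are exactly the eigenvalues
assert np.allclose(roots, eigs)
```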

Cholesky factorization
A = C^T C = (L√D)(L√D)^T for positive definite A.
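
NumPy computes the lower-triangular factor directly; with C = L^T this matches A = C^T C (the example matrix is mine):

```python
import numpy as np

# Cholesky factorization of a positive definite matrix.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(A)           # lower triangular, A = L @ L.T
C = L.T                             # so A = C.T @ C with C upper triangular
assert np.allclose(A, C.T @ C)
assert np.allclose(L, np.tril(L))   # confirm L really is lower triangular
```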

Column space C(A).
The space of all combinations of the columns of A.

Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det(B_j)/det(A).
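
A direct sketch of the rule (an illustrative 2×2 system; np.linalg.solve is the practical method for real problems):

```python
import numpy as np

def cramer(A, b):
    # x_j = det(B_j) / det(A), where B_j is A with column j replaced by b
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Bj = A.copy()
        Bj[:, j] = b
        x[j] = np.linalg.det(Bj) / detA
    return x

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
b = np.array([4.0, 11.0])
assert np.allclose(cramer(A, b), np.linalg.solve(A, b))
```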

Cyclic shift S.
Permutation with S_21 = 1, S_32 = 1, ..., finally S_1n = 1. Its eigenvalues are the nth roots e^(2πik/n) of 1; eigenvectors are columns of the Fourier matrix F.
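
The n = 4 case, built and checked in NumPy (indices in the code are 0-based):

```python
import numpy as np

n = 4
S = np.roll(np.eye(n), 1, axis=0)   # S[1,0] = S[2,1] = S[3,2] = 1 and S[0,3] = 1
# Multiplying by S cyclically shifts a vector's entries.
assert np.allclose(S @ np.arange(4.0), [3.0, 0.0, 1.0, 2.0])

# Eigenvalues are the nth roots of unity e^{2*pi*i*k/n}.
eigs = np.linalg.eigvals(S)
for k in range(n):
    w = np.exp(2j * np.pi * k / n)
    assert np.min(np.abs(eigs - w)) < 1e-8
```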

Distributive Law
A(B + C) = AB + AC. Add then multiply, or multiply then add.

Ellipse (or ellipsoid) x^T Ax = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^-1 y||^2 = y^T (AA^T)^-1 y = 1 displayed by eigshow; axis lengths σ_i.)
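
A numerical check of the axis lengths (the positive definite matrix is an example of mine):

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])
lam, Q = np.linalg.eigh(A)   # eigenvalues 1 and 9 with orthonormal eigenvectors
for l, q in zip(lam, Q.T):
    x = q / np.sqrt(l)       # point at distance 1/sqrt(lambda) along eigenvector q
    assert np.isclose(x @ A @ x, 1.0)   # it lies on the ellipse x^T A x = 1
```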

Fast Fourier Transform (FFT).
A factorization of the Fourier matrix F_n into ℓ = log_2 n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^-1 c can be computed with nℓ/2 multiplications. Revolutionary.
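
The fast transform and the dense matrix product agree (note that NumPy's fft uses the conjugate sign convention e^(-2πijk/n)):

```python
import numpy as np

n = 8
x = np.random.default_rng(0).standard_normal(n)
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n)   # dense Fourier matrix: O(n^2) product
assert np.allclose(F @ x, np.fft.fft(x))       # FFT gives the same result in O(n log n)
```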

Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.

Hermitian matrix A^H = conj(A)^T = A.
Complex analog a_ji = conj(a_ij) of a symmetric matrix.

Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j - 1) = ∫_0^1 x^(i-1) x^(j-1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
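
Both claims are easy to see numerically (n = 6 is an illustrative size):

```python
import numpy as np

def hilbert(n):
    # H_ij = 1/(i + j - 1) with 1-based i, j
    i = np.arange(1, n + 1)
    return 1.0 / (i[:, None] + i[None, :] - 1)

H = hilbert(6)
assert np.all(np.linalg.eigvalsh(H) > 0)   # positive definite
assert np.linalg.cond(H) > 1e6             # already ill-conditioned at n = 6
```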

Independent vectors v_1, ..., v_k.
No combination c_1 v_1 + ... + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
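
In NumPy terms, independent columns means full column rank (the example vectors are mine):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
assert np.linalg.matrix_rank(A) == A.shape[1]   # independent: Ax = 0 only for x = 0

B = np.column_stack([A[:, 0], 2.0 * A[:, 0]])   # second column = 2 * first column
assert np.linalg.matrix_rank(B) < B.shape[1]    # dependent columns
```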

|A^-1| = 1/|A| and |A^T| = |A|.
The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n - 1, volume of box = |det(A)|.
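
Checking the two identities on a small example matrix of my own:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
d = np.linalg.det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / d)   # |A^-1| = 1/|A|
assert np.isclose(np.linalg.det(A.T), d)                      # |A^T| = |A|
```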

Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b - A x̂ is orthogonal to all columns of A.
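
The normal equations and the orthogonality of the residual, on a small illustrative fit:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])
xhat = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations A^T A xhat = A^T b
e = b - A @ xhat                           # residual
assert np.allclose(A.T @ e, 0.0)           # e is orthogonal to every column of A
assert np.allclose(xhat, np.linalg.lstsq(A, b, rcond=None)[0])
```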

Lucas numbers
L_n = 2, 1, 3, 4, ... satisfy L_n = L_{n-1} + L_{n-2} = λ_1^n + λ_2^n, with λ_1, λ_2 = (1 ± √5)/2 from the Fibonacci matrix [1 1; 1 0]. Compare L_0 = 2 with F_0 = 0.
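
The recurrence and the eigenvalue formula agree term by term:

```python
import numpy as np

lam1 = (1 + np.sqrt(5)) / 2   # eigenvalues of the Fibonacci matrix [[1, 1], [1, 0]]
lam2 = (1 - np.sqrt(5)) / 2
L = [2, 1]                    # L_0 = 2, L_1 = 1
for n in range(2, 12):
    L.append(L[-1] + L[-2])   # L_n = L_{n-1} + L_{n-2}
for n, Ln in enumerate(L):
    assert np.isclose(lam1**n + lam2**n, Ln)   # closed form matches
```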

Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^-1 and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
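
QR factorization produces such a Q; both properties check out (random data, seeded, is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # Q has orthonormal columns
assert np.allclose(Q.T @ Q, np.eye(4))             # Q^T Q = I

v = rng.standard_normal(4)
expansion = sum((v @ q) * q for q in Q.T)          # v = sum of (v^T q_j) q_j
assert np.allclose(expansion, v)
```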

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Schur complement S = D - CA^-1B.
Appears in block elimination on [A B; C D].
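
One step of block elimination, with illustrative 2×2 and 1×1 blocks:

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[3.0]])

S = D - C @ np.linalg.inv(A) @ B   # Schur complement
M = np.block([[A, B], [C, D]])
E = np.block([[np.eye(2), np.zeros((2, 1))],
              [-C @ np.linalg.inv(A), np.eye(1)]])   # elimination matrix
assert np.allclose((E @ M)[2:, :2], 0.0)   # the C block is eliminated
assert np.allclose((E @ M)[2:, 2:], S)     # S is left in the (2,2) block
```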

Singular Value Decomposition
(SVD) A = UΣV^T = (orthogonal)(diagonal)(orthogonal). First r columns of U and V are orthonormal bases of C(A) and C(A^T), with Av_i = σ_i u_i and singular value σ_i > 0. Last columns are orthonormal bases of the nullspaces.
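
The defining relations, checked on a small example matrix of my own:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 0.0]])
U, s, Vt = np.linalg.svd(A)
for i in range(len(s)):
    assert np.allclose(A @ Vt[i], s[i] * U[:, i])   # A v_i = sigma_i u_i
assert np.all(s > 0)                                # rank 2: both singular values positive
assert np.allclose(U[:, :2] @ np.diag(s) @ Vt, A)   # A = U Sigma V^T
```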

Sum V + W of subspaces.
Space of all sums (v in V) + (w in W). Direct sum: V ∩ W = {0}.