8.1.1: In Exercises 1–2, suppose that T is a mapping whose domain is the ve...
8.1.2: In Exercises 1–2, suppose that T is a mapping whose domain is the ve...
8.1.3: In Exercises 3–9, determine whether the mapping T is a linear transf...
8.1.4: In Exercises 3–9, determine whether the mapping T is a linear transf...
8.1.5: In Exercises 3–9, determine whether the mapping T is a linear transf...
8.1.6: In Exercises 3–9, determine whether the mapping T is a linear transf...
8.1.7: In Exercises 3–9, determine whether the mapping T is a linear transf...
8.1.8: In Exercises 3–9, determine whether the mapping T is a linear transf...
8.1.9: In Exercises 3–9, determine whether the mapping T is a linear transf...
8.1.10: Let T : P2 → P3 be the linear transformation defined by T(p(x)) = xp(...
8.1.11: Let T : P2 → P3 be the linear transformation in Exercise 10. Which of...
8.1.12: Let V be any vector space, and let T : V → V be defined by T(v) = 3v....
8.1.13: In each part, use the given information to find the nullity of the ...
8.1.14: In each part, use the given information to find the rank of the lin...
8.1.15: Let T : M22 → M22 be the dilation operator with factor k = 3. (a) Fin...
8.1.16: Let T : P2 → P2 be the contraction operator with factor k = 1/4. (a) ...
8.1.17: Let T : P2 → R3 be the evaluation transformation at the sequence of p...
8.1.18: Let V be the subspace of C[0, 2π] spanned by the vectors 1, sin x, a...
8.1.19: Consider the basis S = {v1, v2} for R2, where v1 = (1, 1) and v2 = ...
8.1.20: Consider the basis S = {v1, v2} for R2, where v1 = (2, 1) and v2 = (...
8.1.21: Consider the basis S = {v1, v2, v3} for R3, where v1 = (1, 1, 1), v...
8.1.22: Consider the basis S = {v1, v2, v3} for R3, where v1 = (1, 2, 1), v...
8.1.23: Let T : P3 → P2 be the mapping defined by T(a0 + a1x + a2x2 + a3x3 ...
8.1.24: Let T : P2 → P2 be the mapping defined by T(a0 + a1x + a2x2) = 3a0 +...
8.1.25: (a) (Calculus required) Let D : P3 → P2 be the differentiation transfo...
8.1.26: (Calculus required) Let V = C[a, b] be the vector space of continuo...
8.1.27: (Calculus required) Let V be the vector space of real-valued functi...
8.1.28: For a positive integer n > 1, let T : Mnn → R be the linear transform...
8.1.29: (a) Let T : V → R3 be a linear transformation from a vector space V t...
8.1.30: In each part, determine whether the mapping T : Pn → Pn is linear. (a...
8.1.31: Let v1, v2, and v3 be vectors in a vector space V, and let T : V → R3...
8.1.32: Let {v1, v2,..., vn} be a basis for a vector space V, and let T : V...
8.1.33: Let {v1, v2,..., vn} be a basis for a vector space V, and let T : V...
8.1.34: Prove: If {v1, v2,..., vn} is a basis for a vector space V and w1, ...
8.1.TF: TF. In parts (a)–(i) determine whether the statement is true or fals...
Solutions for Chapter 8.1: General Linear Transformations
Full solutions for Elementary Linear Algebra, Binder Ready Version: Applications Version, 11th Edition
ISBN: 9781118474228

Cross product u × v in R3:
Vector perpendicular to u and v, length ||u|| ||v|| |sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
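As a quick numerical sketch of this entry (the plain-Python helpers `dot` and `cross` below are our own, not a library API):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    # "determinant" of [i j k; u1 u2 u3; v1 v2 v3], expanded along the top row
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

u, v = [1, 0, 0], [0, 1, 0]
w = cross(u, v)
print(w, dot(u, w), dot(v, w))  # [0, 0, 1] 0 0 -- w is perpendicular to u and v
```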

Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^(-1) A S = Λ = eigenvalue matrix.
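A hand-worked 2-by-2 instance of S^(-1) A S = Λ (the matrix A, its eigenvectors, and the `matmul` helper are chosen by us for illustration):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1], [1, 2]]                # eigenvalues 3 and 1
S = [[1, 1], [1, -1]]               # independent eigenvectors in the columns
S_inv = [[0.5, 0.5], [0.5, -0.5]]   # inverse of S, computed by hand

Lam = matmul(S_inv, matmul(A, S))
print(Lam)  # diag(3, 1), the eigenvalue matrix
```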

Ellipse (or ellipsoid) x^T A x = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^(-1) y||^2 = y^T (A A^T)^(-1) y = 1 displayed by eigshow; axis lengths σ_i.)

Fibonacci numbers
0, 1, 1, 2, 3, 5, ... satisfy F_n = F_(n-1) + F_(n-2) = (λ1^n − λ2^n)/(λ1 − λ2). Growth rate λ1 = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [1 1; 1 0].
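The closed form can be checked against the recurrence directly (a small numerical sketch):

```python
from math import sqrt

lam1 = (1 + sqrt(5)) / 2  # growth rate: largest eigenvalue of [[1, 1], [1, 0]]
lam2 = (1 - sqrt(5)) / 2

def fib(n):
    # iterate the recurrence F_n = F_(n-1) + F_(n-2), starting from 0, 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(10):
    binet = (lam1**n - lam2**n) / (lam1 - lam2)
    assert round(binet) == fib(n)
print(fib(9))  # 34
```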

Gauss–Jordan method.
Invert A by row operations on [A I] to reach [I A^(-1)].
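A minimal sketch of the method in plain Python (our own `invert` helper; it assumes the pivots encountered are nonzero, so no row exchanges are performed):

```python
def invert(A):
    n = len(A)
    # augment: M = [A I]
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for i in range(n):
        p = M[i][i]                              # pivot (assumed nonzero)
        M[i] = [x / p for x in M[i]]             # scale pivot row to 1
        for r in range(n):
            if r != i:
                f = M[r][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [row[n:] for row in M]                # right half is now A^(-1)

print(invert([[2.0, 1.0], [1.0, 1.0]]))  # [[1.0, -1.0], [-1.0, 2.0]]
```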

Left inverse A^+.
If A has full column rank n, then A^+ = (A^T A)^(-1) A^T has A^+ A = I_n.
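A worked 3-by-2 example (matrix chosen by us; `matmul` and `transpose` are local helpers, and the 2-by-2 inverse of A^T A is written out by hand):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 0], [0, 1], [1, 1]]          # full column rank n = 2
AtA = matmul(transpose(A), A)         # [[2, 1], [1, 2]], det = 3
AtA_inv = [[2/3, -1/3], [-1/3, 2/3]]  # inverse of A^T A, by hand
A_plus = matmul(AtA_inv, transpose(A))

print(matmul(A_plus, A))  # the 2-by-2 identity, up to rounding
```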

Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).

Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = c T(v) + d T(w). Examples: matrix multiplication Av, differentiation and integration in function space.

Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
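A quick check on a 3-by-3 triangular example with zero diagonal (`matmul` is a local helper):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

N = [[0, 1, 2],
     [0, 0, 3],
     [0, 0, 0]]        # triangular with zero diagonal

N3 = matmul(matmul(N, N), N)
print(N3)  # the 3-by-3 zero matrix: N^3 = 0
```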

Norm ||A||.
The "ℓ2 norm" of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x|| and ||AB|| ≤ ||A|| ||B|| and ||A + B|| ≤ ||A|| + ||B||. Frobenius norm: ||A||_F^2 = ΣΣ a_ij^2. The ℓ1 and ℓ∞ norms are the largest column and row sums of |a_ij|.
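The ℓ1 and ℓ∞ norms are easy to compute directly (a sketch with our own helper names; the ℓ2 norm would need σ_max and is omitted):

```python
def norm_l1(A):    # largest absolute column sum
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def norm_linf(A):  # largest absolute row sum
    return max(sum(abs(x) for x in row) for row in A)

A = [[1, -2], [3, 4]]
print(norm_l1(A), norm_linf(A))  # 6 7
```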

Orthonormal vectors q1, ..., qn.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^(-1) and q1, ..., qn is an orthonormal basis for Rn: every v = Σ (v^T q_j) q_j.
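The expansion v = Σ (v^T q_j) q_j can be checked with the columns of a 2-by-2 rotation (example of our own choosing; `dot` is a local helper):

```python
from math import cos, sin

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

t = 0.7
q1 = [cos(t), sin(t)]     # orthonormal columns of a rotation matrix
q2 = [-sin(t), cos(t)]

v = [2.0, -1.0]
recon = [dot(v, q1) * q1[i] + dot(v, q2) * q2[i] for i in range(2)]
print(recon)  # recovers v
```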

Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = 1 or −1) based on the number of row exchanges needed to reach I.
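A small sketch: build P from an ordering of the rows of I and check that PA reorders A the same way (`perm_matrix` and `matmul` are our own helpers):

```python
def perm_matrix(order):
    n = len(order)
    # row i of P is row order[i] of the identity matrix
    return [[1 if j == order[i] else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = perm_matrix([2, 0, 1])          # rows of I in the order 2, 0, 1
A = [[10, 11], [20, 21], [30, 31]]
print(matmul(P, A))  # rows of A in that same order
```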

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Rotation matrix
R = [c −s; s c] rotates the plane by θ, and R^(-1) = R^T rotates back by −θ. Eigenvalues are e^(iθ) and e^(−iθ); eigenvectors are (1, ±i); c, s = cos θ, sin θ.
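A numerical sketch that R^T undoes R (angle and `apply` helper chosen by us):

```python
from math import cos, sin

t = 0.5
c, s = cos(t), sin(t)
R = [[c, -s], [s, c]]      # rotation by t
Rt = [[c, s], [-s, c]]     # transpose = inverse: rotation by -t

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

v = [1.0, 0.0]
back = apply(Rt, apply(R, v))
print(back)  # close to the original v
```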

Semidefinite matrix A.
(Positive) semidefinite: all x^T A x ≥ 0, all λ ≥ 0; A = any R^T R.

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Singular matrix A.
A square matrix that has no inverse: det(A) = 0.

Skew-symmetric matrix K.
The transpose is −K, since K_ij = −K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.

Standard basis for Rn.
Columns of the n by n identity matrix (written i, j, k in R3).

Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t − k).
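With the Haar wavelet as the standard concrete choice for w_00 (our assumption; the entry itself doesn't fix a mother wavelet):

```python
def w00(t):
    # Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere
    if 0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1:
        return -1.0
    return 0.0

def w(j, k, t):
    # stretch the time axis by 2^j and shift by k
    return w00(2**j * t - k)

print(w(1, 1, 0.6), w(1, 1, 0.8))  # 1.0 -1.0
```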