 6.3.1: Let L : R2 → R2 be defined by ... Let S be the natural basis for R2 and...
 6.3.2: Let L : R2 → R3 be defined by ... Let S and T be the natural bases for...
 6.3.3: Let L : R2 → R3 be defined by ... Let S and T ...
 6.3.4: Let L : R2 → R2 be the linear transformation rotating R2 counterclockwise...
 6.3.5: Let L : R3 → R3 be defined by ... (a) Find the representation of L ...
 6.3.6: Let L : R3 → R3 be defined as in Exercise 5. Let T = {L(e1), L(e2), ...
 6.3.7: Let L : R3 → R3 be the linear transformation represented by the ...
 6.3.8: Let L : M22 → M22 be defined by ... for A in M22. Consider the ordered...
 6.3.9: Let V be the vector space with basis S = {1, t, e^t, t e^t} and let L : V → V be...
 6.3.10: Let L : P1 → P2 be defined by L(p(t)) = t p(t) + p(0). C...
 6.3.11: Let A = ...
 6.3.12: Let L : V → V be a linear operator. A nonempty subspace U of V...
 6.3.13: Let L : R2 → R2 be defined by a reflection about the x-axis. Consi...
 6.3.14: If L : R3 → R2 is the linear transformation whose representation ...
 6.3.15: If O : V → W is the zero linear transformation, show that the matri...
 6.3.16: If I : V → V is the identity linear operator on V defined by ...
 6.3.17: Let I : R2 → R2 be the identity linear operator on R2. Let S ...
 6.3.18: Let V be the vector space of real-valued continuous functions with b...
 6.3.19: Let V be the vector space of real-valued continuous functions wi...
 6.3.20: Let V be the vector space of real-valued continuous functions with o...
 6.3.21: Let L : V → V be a linear operator defined by L(v) = cv, where c ...
 6.3.22: Let the representation of L : R2 → R2 with respect to the ordered...
 6.3.23: Let I : V → V be the identity operator on an n-dimensional vect...
 6.3.31: Prove Corollary 6.2.
Solutions for Chapter 6.3: Matrix of a Linear Transformation
Full solutions for Elementary Linear Algebra with Applications, 9th Edition
ISBN: 9780471669593
Chapter 6.3: Matrix of a Linear Transformation includes 24 full step-by-step solutions.

Adjacency matrix of a graph.
Square matrix with aij = 1 when there is an edge from node i to node j; otherwise aij = 0. A = A^T when edges go both ways (undirected).

Affine transformation
Tv = Av + v0 = linear transformation plus shift.
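As a quick sketch in code, an affine map is a matrix multiply followed by a shift; the matrix A, shift v0, and input v below are illustrative:

```python
import numpy as np

def affine(A, v0, v):
    """Affine transformation T(v) = A v + v0: linear part plus shift."""
    return A @ v + v0

A = np.array([[2.0, 0.0], [0.0, 3.0]])
v0 = np.array([1.0, -1.0])
v = np.array([1.0, 1.0])
Tv = affine(A, v0, v)   # A v = [2, 3], plus shift gives [3, 2]
```

Note that T is linear only when v0 = 0: in general T(0) = v0, not 0.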

Condition number
cond(A) = c(A) = ||A|| ||A^(-1)|| = sigma_max/sigma_min. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to changes in the input.
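A small numpy check of this bound (the nearly singular matrix and the perturbation are illustrative; numpy.linalg.cond returns the 2-norm condition number sigma_max/sigma_min by default):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0001]])   # nearly singular, badly conditioned
c = np.linalg.cond(A)

b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)

db = np.array([0.0, 1e-5])                  # small perturbation of b
dx = np.linalg.solve(A, b + db) - x

# Relative error in x is bounded by cond(A) times the relative error in b.
lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = c * np.linalg.norm(db) / np.linalg.norm(b)
```

Here lhs <= rhs holds, and because c is large (~4e4), a tiny change in b produces a much larger relative change in x.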

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2) x^T A x − x^T b over growing Krylov subspaces.
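A minimal textbook-style sketch of the iteration (not production code; the 2×2 positive definite test matrix is illustrative):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve positive definite A x = b by minimizing (1/2) x^T A x - x^T b."""
    n = len(b)
    if max_iter is None:
        max_iter = n            # exact convergence in at most n steps (in exact arithmetic)
    x = np.zeros(n)
    r = b - A @ x               # residual = negative gradient of the quadratic
    p = r.copy()                # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p    # next direction, A-conjugate to the previous ones
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```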

Determinant |A| = det(A).
Defined by det I = 1, sign reversal for a row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A| |B| and |A^T| = |A|.

Diagonal matrix D.
dij = 0 if i ≠ j. Block-diagonal: zero outside square blocks Dii.

Dimension of vector space
dim(V) = number of vectors in any basis for V.

Distributive Law
A(B + C) = AB + AC. Add then multiply, or multiply then add.

Dot product = Inner product x^T y = x1 y1 + ... + xn yn.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)ij = (row i of A) · (column j of B).
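Both facts are easy to check numerically (the vectors and random matrices below are illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -2.0, 0.0])
dot = x @ y                      # x^T y = 1*4 + 2*(-2) + 3*0 = 0, so x is perpendicular to y

A = np.random.default_rng(0).random((3, 3))
B = np.random.default_rng(1).random((3, 3))
# Each entry of AB is a dot product: (AB)[i, j] = (row i of A) · (column j of B)
entry = A[1, :] @ B[:, 2]        # agrees with (A @ B)[1, 2]
```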

Fast Fourier Transform (FFT).
A factorization of the Fourier matrix Fn into ℓ = log2 n matrices Si times a permutation. Each Si needs only n/2 multiplications, so Fn x and Fn^(-1) c can be computed with nℓ/2 multiplications. Revolutionary.
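The speedup is in how the product is computed, not in what it computes: numpy's FFT gives the same result as multiplying by the Fourier matrix directly. The matrix F below is built with numpy.fft's sign convention, entries w^(jk) with w = e^(-2πi/n), so the two agree:

```python
import numpy as np

n = 8
x = np.random.default_rng(42).random(n)

# Direct DFT: multiply by the n x n Fourier matrix -- O(n^2) multiplications.
w = np.exp(-2j * np.pi / n)
F = w ** np.outer(np.arange(n), np.arange(n))
direct = F @ x

fast = np.fft.fft(x)            # FFT: O(n log n) multiplications, same answer
```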

Free variable Xi.
Column i has no pivot in elimination. We can give the n − r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n − 1)/2 edges between nodes. A tree has only n − 1 edges and no closed loops.

Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
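Both properties can be verified directly (the overdetermined 3×2 system below is illustrative):

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns, no exact solution.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Least squares solution from the normal equations A^T A x = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # x_hat = [5, -3]

e = b - A @ x_hat          # error vector
proj = A.T @ e             # orthogonality: A^T e should be (numerically) zero
```

np.linalg.lstsq(A, b, rcond=None) returns the same x_hat without forming A^T A explicitly, which is better conditioned for larger problems.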

Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = c T(v) + d T(w). Examples: matrix multiplication Av, differentiation and integration in function space.
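The chapter's central construction, the matrix of L with respect to ordered bases S and T, can be sketched numerically: column j of the matrix is the T-coordinate vector of L applied to the j-th S-basis vector. The helper name matrix_of is hypothetical; the reflection L (as in Exercise 6.3.13) and the bases are illustrative:

```python
import numpy as np

def matrix_of(L, S, T):
    """Matrix representing L with respect to ordered bases S (domain) and T (codomain).

    Column j = coordinate vector of L(S[j]) relative to T.
    """
    T_mat = np.column_stack(T)                        # T-basis vectors as columns
    cols = [np.linalg.solve(T_mat, L(v)) for v in S]  # solve for T-coordinates
    return np.column_stack(cols)

# Illustrative L : R^2 -> R^2, reflection about the x-axis
L = lambda v: np.array([v[0], -v[1]])

S = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]      # natural basis
M = matrix_of(L, S, S)                                # [[1, 0], [0, -1]]
```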

Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
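A strictly upper triangular 3×3 example (illustrative entries) shows both facts: the third power vanishes and every eigenvalue is zero.

```python
import numpy as np

N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])   # zero diagonal => nilpotent

N2 = N @ N
N3 = N2 @ N                       # N^3 = 0 for this 3x3 matrix

eigenvalues = np.linalg.eigvals(N)   # all (numerically) zero
```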

Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = +1 or −1) based on the number of row exchanges to reach I.
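For n = 3 all six permutation matrices can be generated and checked (the matrix A and the chosen row order are illustrative):

```python
import numpy as np
from itertools import permutations

n = 3
I = np.eye(n)
# All n! permutation matrices: the rows of I in every possible order
Ps = [I[list(order)] for order in permutations(range(n))]

dets = [round(np.linalg.det(P)) for P in Ps]   # each is +1 (even) or -1 (odd)

# P A puts the rows of A in the same order as the rows of I in P
A = np.arange(9.0).reshape(3, 3)
P = I[[2, 0, 1]]                  # rows of I in order 2, 0, 1
```

Here P @ A equals A with its rows reordered to 2, 0, 1.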

Row picture of Ax = b.
Each equation gives a plane in Rn; the planes intersect at x.

Schwarz inequality
|v · w| ≤ ||v|| ||w||. Then |v^T A w|^2 ≤ (v^T A v)(w^T A w) for positive definite A.

Stiffness matrix
If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A where C has spring constants from Hooke's Law and Ax = stretching.

Vector addition.
v + w = (v1 + w1, ..., vn + wn) = diagonal of parallelogram.