6.3.1: Consider T : R2 → R4 defined by T (x) = Ax, where A = 1 2 2 4 4 8 8 1...
6.3.2: Consider T : R3 → R2 defined by T (x) = Ax, where A = 1 1 2 1 2 3 . F...
6.3.3: For 3–7, find Ker(T ) and Rng(T ), and give a geometrical descriptio...
6.3.4: For 3–7, find Ker(T ) and Rng(T ), and give a geometrical descriptio...
6.3.5: For 3–7, find Ker(T ) and Rng(T ), and give a geometrical descriptio...
6.3.6: For 3–7, find Ker(T ) and Rng(T ), and give a geometrical descriptio...
6.3.7: For 3–7, find Ker(T ) and Rng(T ), and give a geometrical descriptio...
6.3.8: For 8–11, compute Ker(T ) and Rng(T ). The linear transformation T de...
6.3.9: For 8–11, compute Ker(T ) and Rng(T ). The linear transformation T de...
6.3.10: For 8–11, compute Ker(T ) and Rng(T ).
6.3.11: For 8–11, compute Ker(T ) and Rng(T ). The linear transformation T de...
6.3.12: Consider the linear transformation T : R3 → R defined by T (v) = u, v...
6.3.13: Consider the linear transformation S : Mn(R) → Mn(R) defined by S(A) ...
6.3.14: Consider the linear transformation T : Mn(R) → Mn(R) defined by T (A)...
6.3.15: Consider the linear transformation T : P2(R) → P2(R) defined by T (ax...
6.3.16: Consider the linear transformation T : P2(R) → P1(R) defined by T (ax...
6.3.17: Consider the linear transformation T : P1(R) → P2(R) defined by T (ax...
6.3.18: Consider the linear transformation T : M2(R) → P2(R) defined by T a b...
6.3.19: Consider the linear transformation T : R2 → M2×3(R) defined by T (x, y...
6.3.20: Consider the linear transformation T : M2×4(R) → M4×2(R) defined by T (...
6.3.21: Consider the linear transformation T : M2×4(R) → M4×2(R) defined by T (...
6.3.22: (a) Let T : V → W be a linear transformation, and suppose that dim[V]...
6.3.23: Let T : V → W and S : V → W be linear transformations, and assume that ...
6.3.24: Let V be a vector space with basis {v1, v2, ..., vk }, and suppose T :...
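The Ker(T ) and Rng(T ) computations asked for above can be sketched numerically. A minimal illustration using NumPy's SVD (the matrix A below is a made-up stand-in, not data from any of the exercises):

```python
import numpy as np

# A stand-in matrix for T(x) = Ax with T : R^3 -> R^2 (illustrative only)
A = np.array([[1.0, 1.0, 2.0],
              [1.0, 2.0, 3.0]])

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))

rng_basis = U[:, :rank]        # orthonormal basis for Rng(T) = column space
ker_basis = Vt[rank:].T        # orthonormal basis for Ker(T) = null space

assert np.allclose(A @ ker_basis, 0)            # kernel vectors map to zero
assert ker_basis.shape[1] + rank == A.shape[1]  # rank-nullity theorem
```

The singular vectors belonging to zero singular values span the kernel, while the left singular vectors of the nonzero ones span the range.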
Solutions for Chapter 6.3: The Kernel and Range of a Linear Transformation
Full solutions for Differential Equations, 4th Edition
ISBN: 9780321964670
Chapter 6.3: The Kernel and Range of a Linear Transformation includes 24 full step-by-step solutions. Since all 24 problems in this chapter have been answered, more than 20098 students have viewed full step-by-step solutions from it. This guide was created for the textbook Differential Equations, edition 4 (ISBN 9780321964670).

Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1v1 + ... + cdvd. A vector space has many bases; each basis gives unique c's.

Cholesky factorization
A = C^T C = (L√D)(L√D)^T for positive definite A.
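As a quick numerical check of this factorization, numpy.linalg.cholesky returns the lower-triangular factor L with A = L L^T, so C = L^T recovers the C^T C form above (the matrix here is an illustrative choice):

```python
import numpy as np

# An illustrative positive definite matrix
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)   # lower triangular, A = L @ L.T
C = L.T                     # then A = C.T @ C, matching the glossary's notation

assert np.allclose(L @ L.T, A)
assert np.allclose(C.T @ C, A)
```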

Column space C (A) =
space of all combinations of the columns of A.

Covariance matrix Σ.
When random variables xi have mean = average value = 0, their covariances Σij are the averages of xixj. With means x̄i, the matrix Σ = mean of (x − x̄)(x − x̄)^T is positive (semi)definite; Σ is diagonal if the xi are independent.
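A small sketch of this definition with NumPy, where np.cov estimates Σ from samples (the random data is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 1000))   # 3 random variables, 1000 samples each
Sigma = np.cov(X)                    # 3x3 sample covariance matrix

# A covariance matrix is symmetric and positive (semi)definite
assert np.allclose(Sigma, Sigma.T)
assert np.all(np.linalg.eigvalsh(Sigma) >= -1e-12)
```

For independent variables the off-diagonal entries approach zero as the sample size grows.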

Determinant IAI = det(A).
Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A||B|, |A^T| = |A|, and |A^(−1)| = 1/|A|.
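These defining properties are easy to verify numerically; a brief sketch (the random matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

assert np.isclose(np.linalg.det(np.eye(4)), 1.0)        # det I = 1
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))  # |AB| = |A||B|

# Exchanging two rows reverses the sign of the determinant
A_swapped = A[[1, 0, 2, 3], :]
assert np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A))
```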

Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^(−1)AS = Λ = eigenvalue matrix.
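A minimal numerical sketch of diagonalization (the symmetric matrix A is an illustrative choice, guaranteed to be diagonalizable):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric, hence diagonalizable

eigvals, S = np.linalg.eig(A)        # eigenvectors go in the columns of S
Lambda = np.linalg.inv(S) @ A @ S    # S^{-1} A S = Lambda

assert np.allclose(Lambda, np.diag(eigvals))
```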

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
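The elimination that produces an echelon matrix U can be sketched as follows (a simplified illustrative routine with partial pivoting, not a library implementation):

```python
import numpy as np

def row_echelon(A):
    """Reduce A to an echelon matrix U by Gaussian elimination
    with partial pivoting (illustrative sketch)."""
    U = A.astype(float).copy()
    m, n = U.shape
    row = 0
    for col in range(n):
        if row >= m:
            break
        pivot = row + np.argmax(np.abs(U[row:, col]))  # partial pivoting
        if np.isclose(U[pivot, col], 0.0):
            continue                                   # no pivot in this column
        U[[row, pivot]] = U[[pivot, row]]              # row exchange
        # Eliminate entries below the pivot
        U[row + 1:] -= np.outer(U[row + 1:, col] / U[row, col], U[row])
        row += 1
    return U

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 2.0, 3.0]])
U = row_echelon(A)   # each pivot lies in a later column than the one above it
```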

Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A − λI) = 0.

Free columns of A.
Columns without pivots; these are combinations of earlier columns.

Hankel matrix H.
Constant along each antidiagonal; hij depends on i + j.

Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.

Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.

Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and −).

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; always m(λ) divides p(λ).

Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^(−1). Preserves length and angles: ||Qx|| = ||x|| and (Qx)^T(Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
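These properties can be checked on a rotation matrix, one of the examples named above (a small sketch; the angle is arbitrary):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],   # rotation matrix: orthogonal
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(Q.T @ Q, np.eye(2))           # Q^T = Q^{-1}

x = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))  # ||Qx|| = ||x||
assert np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0)        # all |lambda| = 1
```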

Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.

Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T Ax > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.

Pseudoinverse A+ (Moore–Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A+) = N(A^T). A+A and AA+ are the projection matrices onto the row space and column space. Rank(A+) = rank(A).
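A brief sketch of these properties using numpy.linalg.pinv (the rank-deficient matrix A is illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])   # rank 1: second column = 2 * first column

Ap = np.linalg.pinv(A)       # n-by-m Moore-Penrose pseudoinverse

P_col = A @ Ap               # projection onto the column space
P_row = Ap @ A               # projection onto the row space

assert Ap.shape == (2, 3)
assert np.allclose(P_col @ P_col, P_col)   # projections are idempotent
assert np.allclose(P_row @ P_row, P_row)
assert np.linalg.matrix_rank(Ap) == np.linalg.matrix_rank(A)  # rank(A+) = rank(A)
```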

Schur complement S = D − CA^(−1)B.
Appears in block elimination on the block matrix [A B; C D].

Skewsymmetric matrix K.
The transpose is −K, since Kij = −Kji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^(Kt) is an orthogonal matrix.
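The three stated properties can be verified numerically; scipy.linalg.expm computes the matrix exponential e^(Kt) (the matrix K is illustrative):

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[0.0, -2.0],
              [2.0,  0.0]])   # skew-symmetric: K^T = -K

assert np.allclose(K.T, -K)

# Eigenvalues are pure imaginary (zero real part)
assert np.allclose(np.linalg.eigvals(K).real, 0.0)

# e^{Kt} is an orthogonal matrix
Q = expm(K * 0.5)
assert np.allclose(Q.T @ Q, np.eye(2))
```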