 7.1.1: Label the following statements as true or false. (a) Eigenvectors o...
 7.1.2: For each matrix A, find a basis for each generalized eigenspace of ...
 7.1.3: For each linear operator T, find a basis for each generalized eigen...
 7.1.4: For each linear operator T, find a basis for each generalized eigen...
 7.1.5: Let γ1, γ2, ..., γp be cycles of generalized eigenvectors of a linear op...
 7.1.6: Let T: V → W be a linear transformation. Prove the following results....
 7.1.7: Let U be a linear operator on a finite-dimensional vector space V. ...
 7.1.8: Use Theorem 7.4 to prove that the vectors v1, v2, ..., vk in the sta...
 7.1.9: Let T be a linear operator on a finite-dimensional vector space V w...
 7.1.10: Let T be a linear operator on a finite-dimensional vector space who...
 7.1.11: Prove Corollary 2 to Theorem 7.7.
 7.1.12: Prove Theorem 7.8.
 7.1.13: Let T be a linear operator on a finite-dimensional vector space V s...
Solutions for Chapter 7.1: The Jordan Canonical Form I
Full solutions for Linear Algebra  4th Edition
ISBN: 9780130084514

Complete solution x = xp + xn to Ax = b.
(Particular xp) + (xn in nullspace).
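A minimal sketch of this split, using a single equation I chose for illustration (x + 2y = 4): the particular solution sets the free variable to 0, and adding any nullspace multiple still solves the system.

```python
# Complete solution x = xp + c*xn for the example equation x + 2y = 4.
xp = (4, 0)            # particular solution: free variable y = 0
xn = (-2, 1)           # nullspace direction: solves x + 2y = 0

for c in (-1, 0, 2):
    x = (xp[0] + c*xn[0], xp[1] + c*xn[1])
    assert x[0] + 2*x[1] == 4      # every xp + c*xn still solves Ax = b
```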

Cross product u x v in R^3:
Vector perpendicular to u and v, length ||u|| ||v|| |sin θ| = area of parallelogram, u x v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
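The "determinant" recipe above can be sketched in plain Python (helper names `cross` and `dot` are mine, not from the text); the dot products confirm perpendicularity.

```python
def cross(u, v):
    """u x v via the 'determinant' of [i j k; u1 u2 u3; v1 v2 v3]."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

w = cross([1, 0, 0], [0, 1, 0])
print(w)                               # [0, 0, 1]
print(dot(w, [1, 0, 0]), dot(w, [0, 1, 0]))   # perpendicular to both inputs
```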

Cyclic shift S.
Permutation with S21 = 1, S32 = 1, ..., finally S1n = 1. Its eigenvalues are the nth roots e^{2πik/n} of 1; eigenvectors are columns of the Fourier matrix F.
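A small numeric sketch of this claim (n = 4 is my choice): applying the shift to each Fourier column rescales it by an nth root of 1.

```python
import cmath

n = 4

def shift(x):
    """The cyclic shift: (Sx)_1 = x_n, (Sx)_2 = x_1, ... (S21 = 1, ..., S1n = 1)."""
    return [x[-1]] + x[:-1]

w = cmath.exp(2j * cmath.pi / n)       # primitive nth root of 1
for k in range(n):
    f = [w**(j*k) for j in range(n)]   # kth column of the Fourier matrix
    Sf = shift(f)
    lam = Sf[0] / f[0]                 # the eigenvalue, an nth root of 1
    assert all(abs(Sf[j] - lam*f[j]) < 1e-9 for j in range(n))
    assert abs(lam**n - 1) < 1e-9
```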

Diagonal matrix D.
dij = 0 if i ≠ j. Block-diagonal: zero outside square blocks Dii.

Diagonalization Λ = S^{-1} A S.
Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^{-1}.
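A hand-checked 2x2 sketch of A = S Λ S^{-1} in plain Python (the matrix and its eigen-data are my example, worked out by hand rather than by a library):

```python
def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 1],
     [1, 2]]                 # eigenvalues 3 and 1
S = [[1,  1],
     [1, -1]]                # eigenvectors (1,1) and (1,-1) as columns
Lam = [[3, 0],
       [0, 1]]

# Invert the 2x2 S by the cofactor formula.
det = S[0][0]*S[1][1] - S[0][1]*S[1][0]
S_inv = [[ S[1][1]/det, -S[0][1]/det],
         [-S[1][0]/det,  S[0][0]/det]]

back = matmul(matmul(S, Lam), S_inv)   # S Λ S^{-1} rebuilds A
print(back)
```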

Hankel matrix H.
Constant along each antidiagonal; hij depends on i + j.
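Since h_ij depends only on i + j, a Hankel matrix can be generated from a single sequence (0-based indices and the sequence here are my example):

```python
def hankel(seq, n):
    """n x n Hankel matrix: entry (i, j) is seq[i + j]."""
    return [[seq[i + j] for j in range(n)] for i in range(n)]

H = hankel([1, 2, 3, 4, 5], 3)
print(H)   # each antidiagonal is constant
```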

Independent vectors v1, ..., vk.
No combination c1 v1 + ... + ck vk = zero vector unless all ci = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.

Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b - A x̂ is orthogonal to all columns of A.
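A hand-rolled sketch of the normal equations for a straight-line fit (two unknowns, so A^T A x̂ = A^T b is just a 2x2 system; the data points are made up for illustration):

```python
pts = [(0, 1), (1, 2), (2, 4)]          # (t, b): fit b ≈ c + d*t
A = [[1, t] for t, _ in pts]
b = [y for _, y in pts]
m = len(A)

# Form A^T A and A^T b directly.
AtA = [[sum(A[i][r]*A[i][s] for i in range(m)) for s in range(2)]
       for r in range(2)]
Atb = [sum(A[i][r]*b[i] for i in range(m)) for r in range(2)]

# Solve the 2x2 system by Cramer's rule.
det = AtA[0][0]*AtA[1][1] - AtA[0][1]*AtA[1][0]
c = (Atb[0]*AtA[1][1] - AtA[0][1]*Atb[1]) / det
d = (AtA[0][0]*Atb[1] - Atb[0]*AtA[1][0]) / det

# The error e = b - Ax̂ is orthogonal to each column of A.
e = [b[i] - (c + d*A[i][1]) for i in range(m)]
print(c, d)
print([round(sum(e[i]*A[i][j] for i in range(m)), 10) for j in range(2)])
```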

Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A - λI) if no eigenvalues are repeated; m(λ) always divides p(λ).
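For a 2x2 matrix with distinct eigenvalues, the minimal polynomial equals the characteristic polynomial p(λ) = λ^2 - (tr A)λ + det A, and p(A) is the zero matrix. A numeric check on an example I chose:

```python
A = [[2, 1],
     [0, 3]]                        # eigenvalues 2 and 3 (distinct)
tr  = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)
# p(A) = A^2 - (tr A) A + (det A) I
pA = [[A2[i][j] - tr*A[i][j] + det*(1 if i == j else 0) for j in range(2)]
      for i in range(2)]
print(pA)   # the zero matrix
```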

Normal equation A^T A x̂ = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b - A x̂) = 0.

Nullspace N(A).
All solutions to Ax = 0. Dimension n - r = (# columns) - rank.

Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers have |lij| <= 1. See condition number.
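A minimal elimination sketch with partial pivoting (a toy 3x3 system I made up, not production code): each column swaps up the row with the largest available pivot, so every multiplier has |l| <= 1.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting, then back-substitution."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))   # largest pivot
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            l = A[i][k] / A[k][k]          # |l| <= 1 by the pivot choice
            for j in range(k, n):
                A[i][j] -= l * A[k][j]
            b[i] -= l * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j]*x[j] for j in range(i + 1, n))) / A[i][i]
    return x

print(solve([[1, 2, 0], [3, 1, 1], [0, 1, 4]], [3, 5, 5]))   # ~ [1, 1, 1]
```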

Particular solution xp.
Any solution to Ax = b; often xp has free variables = 0.

Rank one matrix A = uv^T ≠ 0.
Column and row spaces = lines cu and cv.
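The outer product form makes this visible (example vectors are mine): every column of uv^T is a multiple of u, and every row is a multiple of v^T.

```python
u = [1, 2, 3]
v = [4, 5]
A = [[ui * vj for vj in v] for ui in u]   # outer product u v^T (3x2)
print(A)
# Column j is v[j] times u; row i is u[i] times v.
```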

Right inverse A^+.
If A has full row rank m, then A^+ = A^T(AA^T)^{-1} has AA^+ = Im.
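A numeric sketch of the formula for a full-row-rank 2x3 example I chose; AA^+ should come out as the 2x2 identity.

```python
def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 0, 1],
     [0, 1, 1]]                                         # full row rank m = 2
AT = [[A[i][j] for i in range(2)] for j in range(3)]    # transpose, 3x2

AAT = matmul(A, AT)                                     # 2x2, invertible
det = AAT[0][0]*AAT[1][1] - AAT[0][1]*AAT[1][0]
AAT_inv = [[ AAT[1][1]/det, -AAT[0][1]/det],
           [-AAT[1][0]/det,  AAT[0][0]/det]]

A_plus = matmul(AT, AAT_inv)    # 3x2 right inverse A^T (A A^T)^{-1}
print(matmul(A, A_plus))        # ~ [[1, 0], [0, 1]]
```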

Row picture of Ax = b.
Each equation gives a plane in R^n; the planes intersect at x.

Similar matrices A and B.
Every B = M^{-1}AM has the same eigenvalues as A.
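For 2x2 matrices, matching trace and determinant is enough to match both eigenvalues, which gives a quick similarity check (A and M below are my examples; M^{-1} is verified by hand):

```python
def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1],
     [0, 3]]
M = [[1, 1],
     [0, 1]]
M_inv = [[1, -1],
         [0,  1]]       # inverse of M, checked by hand

B = matmul(matmul(M_inv, A), M)      # similar to A
tr  = lambda X: X[0][0] + X[1][1]
det = lambda X: X[0][0]*X[1][1] - X[0][1]*X[1][0]
print(tr(A) == tr(B), det(A) == det(B))   # same eigenvalues 2 and 3
```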

Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.