# Solutions for Chapter 2.2: MATRICES AND LINEAR TRANSFORMATIONS

## Full solutions for Elementary Linear Algebra: A Matrix Approach | 2nd Edition

ISBN: 9780131871410


This survival guide covers the textbook Elementary Linear Algebra: A Matrix Approach, edition 2 (ISBN 9780131871410), chapter by chapter. Chapter 2.2: MATRICES AND LINEAR TRANSFORMATIONS includes 26 full step-by-step solutions, and more than 106116 students have viewed solutions from this chapter.

## Key Math Terms and Definitions Covered in This Textbook
• Cayley-Hamilton Theorem.

p(λ) = det(A − λI) satisfies p(A) = zero matrix.

• Circulant matrix C.

Constant diagonals wrap around as in the cyclic shift S. Every circulant is c_0·I + c_1·S + ... + c_(n−1)·S^(n−1). Cx = convolution c * x. Eigenvectors are the columns of the Fourier matrix F.

• Condition number

cond(A) = c(A) = ||A|| ||A^(−1)|| = σ_max / σ_min. In Ax = b, the relative change ||δx|| / ||x|| is less than cond(A) times the relative change ||δb|| / ||b||. Condition numbers measure the sensitivity of the output to changes in the input.
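
The definition above can be checked numerically. A minimal NumPy sketch (NumPy and the example matrix are illustrative assumptions, not part of the textbook) comparing σ_max / σ_min against the built-in condition number:

```python
import numpy as np

# Condition number of a nearly singular 2x2 matrix:
# cond(A) = ||A|| * ||A^-1|| = sigma_max / sigma_min  (2-norm).
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

sigma = np.linalg.svd(A, compute_uv=False)   # singular values, descending
cond_from_svd = sigma[0] / sigma[-1]
cond_builtin = np.linalg.cond(A)             # 2-norm condition number

# A large condition number warns that Ax = b is sensitive to
# perturbations in b.
```

Both computations give the same (large) number, confirming the σ_max / σ_min formula.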

• Cramer's Rule for Ax = b.

B_j has b replacing column j of A; x_j = det(B_j) / det(A).
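
A short NumPy sketch of Cramer's rule on a 2x2 system (the matrix and right-hand side are made up for illustration):

```python
import numpy as np

# Cramer's rule for Ax = b: B_j replaces column j of A with b,
# and x_j = det(B_j) / det(A).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.empty(2)
for j in range(2):
    B = A.copy()
    B[:, j] = b                                 # replace column j with b
    x[j] = np.linalg.det(B) / np.linalg.det(A)
```

The result matches direct elimination (`np.linalg.solve`), though Cramer's rule is far more expensive for large n.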

• Fast Fourier Transform (FFT).

A factorization of the Fourier matrix F_n into ℓ = log2(n) matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n·x and F_n^(−1)·c can be computed with nℓ/2 multiplications. Revolutionary.

• Fourier matrix F.

Entries F_jk = e^(2πijk/n) give orthogonal columns, so the conjugate transpose satisfies F̄^T F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform: y_j = Σ_k c_k e^(2πijk/n).
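
A small NumPy sketch (n = 4 is an arbitrary choice) building F explicitly and checking the orthogonality identity:

```python
import numpy as np

# Fourier matrix F with entries F_jk = e^{2*pi*i*j*k/n}.
n = 4
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(2j * np.pi * j * k / n)

# Orthogonal columns: conj(F)^T @ F = n * I.
gram = F.conj().T @ F

# y = F c is (up to sign convention) the inverse DFT of c;
# applying conj(F)^T / n recovers c.
c = np.array([1.0, 2.0, 0.0, -1.0])
y = F @ c
```

Because F̄^T F = nI, the inverse transform is just F̄^T / n applied to y.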

• Hessenberg matrix H.

Triangular matrix with one extra nonzero adjacent diagonal.

• Independent vectors v_1, ..., v_k.

No combination c_1·v_1 + ... + c_k·v_k equals the zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
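
In practice, the column-independence test reduces to a rank check. A NumPy sketch with two made-up matrices, one with independent columns and one without:

```python
import numpy as np

# Columns of A are independent exactly when the only solution of
# Ax = 0 is x = 0, i.e. rank(A) equals the number of columns.
A_indep = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]])
A_dep = np.array([[1.0, 2.0],
                  [2.0, 4.0],
                  [3.0, 6.0]])    # second column = 2 * first column

indep = np.linalg.matrix_rank(A_indep) == A_indep.shape[1]
dep = np.linalg.matrix_rank(A_dep) == A_dep.shape[1]
```

Only the first matrix passes: its rank equals its column count, so Ax = 0 forces x = 0.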

• Iterative method.

A sequence of steps intended to approach the desired solution.

• Normal matrix.

If N·N^H = N^H·N, then N has orthonormal (complex) eigenvectors.

• Orthogonal subspaces.

Every v in V is orthogonal to every w in W.

• Orthonormal vectors q_1, ..., q_n.

Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^(−1) and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ_j (v^T q_j) q_j.
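
One standard way to obtain orthonormal columns is QR factorization. A NumPy sketch (the random matrix and test vector are arbitrary) verifying Q^T Q = I and the expansion v = Σ (v^T q_j) q_j:

```python
import numpy as np

# QR factorization produces orthonormal columns: Q^T Q = I.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Q, R = np.linalg.qr(A)

# For square Q, the q_j form an orthonormal basis of R^3:
# any v expands as the sum of (v^T q_j) q_j.
v = np.array([1.0, 2.0, 3.0])
coeffs = Q.T @ v            # components v^T q_j
v_rebuilt = Q @ coeffs      # sum of (v^T q_j) * q_j
```

The reconstruction is exact (to floating-point precision) because Q^T = Q^(−1) when Q is square.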

• Outer product uv^T.

Column times row = rank-one matrix.
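
A two-line NumPy check (vectors chosen arbitrarily) that the outer product is rank one:

```python
import numpy as np

# Outer product u v^T: column times row gives a rank-one matrix.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])

A = np.outer(u, v)                 # 3x2 matrix; every column is a multiple of u
rank = np.linalg.matrix_rank(A)
```

Every column of A is a scalar multiple of u (and every row a multiple of v^T), which is exactly what rank one means.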

• Particular solution x p.

Any solution to Ax = b; often x_p has free variables = 0.

• Plane (or hyperplane) in Rn.

Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.

• Positive definite matrix A.

Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
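
Positive definiteness can be checked either through the eigenvalues or through a Cholesky-type factorization. A NumPy sketch on a small symmetric matrix (chosen for illustration):

```python
import numpy as np

# A symmetric matrix is positive definite when all eigenvalues are
# positive; equivalently, a Cholesky factorization A = L L^T exists.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

eigvals = np.linalg.eigvalsh(A)     # eigenvalues of a symmetric matrix
is_pd = bool(np.all(eigvals > 0))

L = np.linalg.cholesky(A)           # succeeds only for positive definite A
```

The Cholesky factor L L^T is the symmetric analog of the A = LDL^T factorization in the definition (with the positive diagonal of D absorbed into L).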

• Rank-one matrix A = uv^T ≠ 0.

Column and row spaces = lines cu and cv.

• Special solutions to As = 0.

One free variable is s_i = 1; other free variables = 0.

• Unitary matrix U^H = Ū^T = U^(−1).

Orthonormal columns (complex analog of Q).

• Vandermonde matrix V.

Vc = b gives the coefficients of p(x) = c_0 + ... + c_(n−1)·x^(n−1) with p(x_i) = b_i. V_ij = (x_i)^(j−1) and det V = product of (x_k − x_i) for k > i.
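
A NumPy sketch of polynomial interpolation through a Vandermonde system (the sample points and values are made up; they happen to come from p(x) = 1 + 2x^2):

```python
import numpy as np

# Vandermonde matrix V with V_ij = x_i^(j-1); solving V c = b gives
# the coefficients of the interpolating polynomial p.
x = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 3.0, 9.0])        # target values p(x_i) = b_i

V = np.vander(x, increasing=True)    # columns are 1, x, x^2
c = np.linalg.solve(V, b)            # p(x) = c0 + c1*x + c2*x^2

p_vals = V @ c                       # evaluating p at the x_i recovers b
```

Since the x_i are distinct, det V = Π (x_k − x_i) ≠ 0, so the system has exactly one solution: here c = (1, 0, 2), i.e. p(x) = 1 + 2x^2.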