- 5.2.1: Label the following statements as true or false. (a) Any linear ope...
- 5.2.2: For each of the following matrices A ∈ Mn×n(R), test A for diagonal... (see the numerical sketch after this list)
- 5.2.3: For each of the following linear operators T on a vector space V, t...
- 5.2.4: Prove the matrix version of the corollary to Theorem 5.5: If A ∈ M...
- 5.2.5: State and prove the matrix version of Theorem 5.6.
- 5.2.6: (a) Justify the test for diagonalizability and the method for diag...
- 5.2.7: For the matrix A given in the text, find an expression for A^n, where n is an arbitrary positive int...
- 5.2.8: Suppose that A ∈ Mn×n(F) has two distinct eigenvalues, λ1 and λ2, ...
- 5.2.9: Let T be a linear operator on a finite-dimensional vector space V,...
- 5.2.10: Let T be a linear operator on a finite-dimensional vector space V w...
- 5.2.11: Let A be an n × n matrix that is similar to an upper triangular mat...
- 5.2.12: Let T be an invertible linear operator on a finite-dimensional vect...
- 5.2.13: Let A ∈ Mn×n(F). Recall from Exercise 14 of Section 5.1 that A and...
- 5.2.14: Find the general solution to each system of differential equations....
- 5.2.15: Let (a) x′ = x + y, y′ = 3x − y; (b) x′1 = 8x1 + 10x2, x′2 = 5x1 − 7x2; (c) x′1 = x1, x...
- 5.2.16: Let C ∈ Mm×n(R), and let Y be an n × p matrix of differentiable f...
- 5.2.17: Two linear operators T and U on a finite-dimensional vector space V...
- 5.2.18: Two linear operators T and U on a finite-dimensional vector space V...
- 5.2.19: Let T be a diagonalizable linear operator on a finite-dimensional v...
- 5.2.20: Let W1, W2, ..., Wk be subspaces of a finite-dimensional vector sp...
- 5.2.21: Let V be a finite-dimensional vector space with a basis β, and let ...
- 5.2.22: Let T be a linear operator on a finite-dimensional vector space V, ...
- 5.2.23: Let W1, W2, K1, K2, ..., Kp, M1, M2, ..., Mq be subspaces of a vect...
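Several of these exercises (the diagonalizability tests in 5.2.2 and the A^n formula in 5.2.7) come down to the same computation: factor A = PDP^{-1} and work with the diagonal D. A minimal numerical sketch with numpy, using an illustrative matrix rather than one from the text:

```python
import numpy as np

# Illustrative 2x2 matrix (not taken from the exercises).
A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

# Eigendecomposition: columns of P are eigenvectors, D holds eigenvalues.
eigvals, P = np.linalg.eig(A)

# A is diagonalizable (over R) exactly when the eigenvectors form a
# basis, i.e. when P is invertible.
assert np.linalg.matrix_rank(P) == A.shape[0]

# A^n = P D^n P^{-1}: powers act only on the diagonal entries.
n = 5
A_n = P @ np.diag(eigvals**n) @ np.linalg.inv(P)
assert np.allclose(A_n, np.linalg.matrix_power(A, n))
print(A_n)
```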
Solutions for Chapter 5.2: Diagonalizability
Full solutions for Linear Algebra | 4th Edition
Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Cayley-Hamilton Theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.
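A quick numerical sanity check of this identity (a sketch; np.poly returns the coefficients of the characteristic polynomial of a square matrix, and the test matrix is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Coefficients of det(lambda*I - A), highest degree first.
coeffs = np.poly(A)

# Evaluate p(A) = c0*A^n + c1*A^(n-1) + ... + cn*I.
n = A.shape[0]
pA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
assert np.allclose(pA, np.zeros_like(A))  # Cayley-Hamilton: p(A) = 0
```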
Circulant matrix C.
Constant diagonals wrap around as in cyclic shift S. Every circulant is c0 I + c1 S + ··· + c_{n−1} S^{n−1}. Cx = convolution c ∗ x. Eigenvectors in F.
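A sketch of both claims with numpy/scipy, on an illustrative vector c (scipy.linalg.circulant is used only as a cross-check):

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([1.0, 2.0, 3.0, 4.0])
n = len(c)

# Cyclic shift S: S @ x rotates the entries of x down by one.
S = np.roll(np.eye(n), 1, axis=0)

# Every circulant is c0*I + c1*S + ... + c_{n-1}*S^{n-1}.
C = sum(c[k] * np.linalg.matrix_power(S, k) for k in range(n))
assert np.allclose(C, circulant(c))

# C @ x is the circular convolution c * x (checked via the FFT identity).
x = np.array([1.0, 0.0, -1.0, 2.0])
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
assert np.allclose(C @ x, conv)
```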
Condition number cond(A) = c(A) = ‖A‖ ‖A^{-1}‖ = σ_max/σ_min.
In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to changes in the input.
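A numerical sketch of the bound, using the 2-norm where ‖A‖‖A^{-1}‖ = σ_max/σ_min (the nearly singular matrix is illustrative):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])   # nearly singular, so badly conditioned
b = np.array([2.0, 2.0001])

# cond(A) = sigma_max / sigma_min in the 2-norm.
sigma = np.linalg.svd(A, compute_uv=False)
cond = sigma[0] / sigma[-1]
assert np.isclose(cond, np.linalg.cond(A, 2))

x = np.linalg.solve(A, b)
db = np.array([0.0, 1e-6])                 # small perturbation of b
dx = np.linalg.solve(A, b + db) - x

# Relative error in x is bounded by cond(A) times relative error in b.
rel_x = np.linalg.norm(dx) / np.linalg.norm(x)
rel_b = np.linalg.norm(db) / np.linalg.norm(b)
assert rel_x <= cond * rel_b
print(f"cond(A) = {cond:.2e}")
```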
Exponential e^{At} = I + At + (At)^2/2! + ···
has derivative Ae^{At}; e^{At} u(0) solves u′ = Au.
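A sketch comparing the partial sums of the series with scipy.linalg.expm and checking the differential equation numerically (the generator A, time t, and step size are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])    # rotation generator
t = 0.5

# Partial sums of e^{At} = I + At + (At)^2/2! + ...
E = np.zeros_like(A)
term = np.eye(2)
for k in range(1, 25):
    E += term
    term = term @ (A * t) / k
assert np.allclose(E, expm(A * t))

# u(t) = e^{At} u(0) solves u' = Au: check the derivative by
# central differences.
u0 = np.array([1.0, 0.0])
h = 1e-6
du = (expm(A * (t + h)) @ u0 - expm(A * (t - h)) @ u0) / (2 * h)
assert np.allclose(du, A @ expm(A * t) @ u0, atol=1e-5)
```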
Fourier matrix F.
Entries F_jk = e^{2πijk/n} give orthogonal columns, so that F̄^T F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform y_j = Σ c_k e^{2πijk/n}.
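A sketch building F for n = 8 and checking both properties against numpy's FFT (note that np.fft.ifft carries an extra 1/n factor relative to y = Fc):

```python
import numpy as np

n = 8
J, K = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(2j * np.pi * J * K / n)     # F_jk = e^{2*pi*i*j*k/n}

# Orthogonal columns: conjugate(F)^T F = n I.
assert np.allclose(F.conj().T @ F, n * np.eye(n))

# y = F c is the inverse DFT (numpy's ifft includes the 1/n factor).
c = np.random.rand(n)
assert np.allclose(F @ c, n * np.fft.ifft(c))
```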
Free columns of A.
Columns without pivots; these are combinations of earlier columns.
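A sketch locating pivot and free columns via the reduced row echelon form, using sympy (the matrix is an illustrative example):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 4],
               [1, 2, 3, 5]])

# rref() returns the reduced echelon form and the pivot column indices.
R, pivots = A.rref()
free = [j for j in range(A.cols) if j not in pivots]
print("pivot columns:", pivots)   # (0, 2)
print("free columns:", free)      # [1, 3] -- combinations of earlier columns
```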
Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n − 1)/2 edges between nodes. A tree has only n − 1 edges and no closed loops.
Hermitian matrix A^H = Ā^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.
Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and −).
Left inverse A^+.
If A has full column rank n, then A^+ = (A^T A)^{-1} A^T has A^+ A = I_n.
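A sketch of the formula on an illustrative full-column-rank matrix, cross-checked against numpy's general pseudoinverse:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])   # full column rank, n = 2

# Left inverse A+ = (A^T A)^{-1} A^T satisfies A+ A = I_n.
A_plus = np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(A_plus @ A, np.eye(2))

# Agrees with numpy's pseudoinverse in the full-column-rank case.
assert np.allclose(A_plus, np.linalg.pinv(A))
```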
Length ‖x‖.
Square root of x^T x (Pythagoras in n dimensions).
Lucas numbers Ln = 2, 1, 3, 4, ... satisfy Ln = Ln−1 + Ln−2 = λ1^n + λ2^n, with λ1, λ2 = (1 ± √5)/2 from the Fibonacci matrix [1 1; 1 0]. Compare L0 = 2 with F0 = 0.
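A sketch recovering the Lucas numbers from the eigenvalues of the Fibonacci matrix:

```python
import numpy as np

fib = np.array([[1.0, 1.0],
                [1.0, 0.0]])           # Fibonacci matrix
lam1, lam2 = np.linalg.eigvals(fib)    # (1 ± sqrt(5)) / 2

# L_n = lam1^n + lam2^n reproduces the recurrence L_n = L_{n-1} + L_{n-2}.
L = [round(lam1**n + lam2**n) for n in range(8)]
print(L)  # [2, 1, 3, 4, 7, 11, 18, 29]
assert all(L[n] == L[n - 1] + L[n - 2] for n in range(2, 8))
```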
Matrix multiplication AB.
The i, j entry of AB is (row i of A)·(column j of B) = Σ a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k of A)(row k of B). All these equivalent definitions come from the rule that AB times x equals A times Bx.
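A sketch checking that the entry rule and the columns-times-rows rule agree with numpy's built-in product (the random matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

# Entry rule: (AB)_ij = sum_k a_ik b_kj.
entry = np.array([[A[i] @ B[:, j] for j in range(2)] for i in range(3)])

# Columns times rows: AB = sum of (column k of A)(row k of B).
outer = sum(np.outer(A[:, k], B[k]) for k in range(4))

assert np.allclose(entry, A @ B)
assert np.allclose(outer, A @ B)
```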
Orthogonal subspaces V and W.
Every v in V is orthogonal to every w in W.
Particular solution x_p.
Any solution to Ax = b; often x_p has free variables = 0.
Plane (or hyperplane) in R^n.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.
Rotation matrix R = [c −s; s c] rotates the plane by θ, and R^{-1} = R^T rotates back by −θ. Eigenvalues are e^{iθ} and e^{−iθ}; eigenvectors are (1, ±i). Here c, s = cos θ, sin θ.
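A sketch verifying the orthogonality and the complex eigenvalues for an illustrative angle:

```python
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s, c]])

# R is orthogonal: R^{-1} = R^T.
assert np.allclose(R.T @ R, np.eye(2))

# Eigenvalues are e^{i*theta} and e^{-i*theta}.
eigvals, eigvecs = np.linalg.eig(R)
assert np.allclose(sorted(eigvals, key=lambda z: z.imag),
                   [np.exp(-1j * theta), np.exp(1j * theta)])
```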
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
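The tableau arithmetic of the simplex method is not reproduced here; as a sketch, scipy.optimize.linprog solves the same standard-form problem with its HiGHS solver (the cost vector and constraint are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

# Minimize c^T x subject to Ax = b and x >= 0.
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print(res.x, res.fun)   # the optimum sits at a corner of the feasible set
```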
Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.
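A sketch with numpy's symmetric eigensolver, which returns the real eigenvalues and the orthonormal Q directly (the symmetric matrix is illustrative):

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # real symmetric

# eigh returns real eigenvalues and orthonormal eigenvectors Q.
lam, Q = np.linalg.eigh(S)
assert np.allclose(Q.T @ Q, np.eye(2))           # orthonormal columns
assert np.allclose(Q @ np.diag(lam) @ Q.T, S)    # A = Q Lambda Q^T
```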