 6.1.1: The example at the start of the chapter has powers of this matrix A...
 6.1.2: Find the eigenvalues and the eigenvectors of these two matrices: A ...
 6.1.3: Compute the eigenvalues and eigenvectors of A and A^-1. Check the tr...
 6.1.4: Compute the eigenvalues and eigenvectors of A and A^2: A = [1 3] ...
 6.1.5: Compute the eigenvalues and eigenvectors of A and A^2: A = [1 3] ...
 6.1.6: Find the eigenvalues of A and B and AB and BA: A = [~ ~] and B = [6...
 6.1.7: Elimination produces A = LU. The eigenvalues of U are on its diagon...
 6.1.8: (a) If you know that x is an eigenvector, the way to find λ is to _...
 6.1.9: What do you do to the equation Ax = λx, in order to prove (a), (b),...
 6.1.10: Find the eigenvalues and eigenvectors for both of these Markov matr...
 6.1.11: Here is a strange fact about 2 by 2 matrices with eigenvalues λ1 = 1...
 6.1.12: Find three eigenvectors for this matrix P (projection matrices have...
 6.1.13: From the unit vector u = (k, k, ~, ~) construct the rank one projec...
 6.1.14: Solve det(Q - λI) = 0 by the quadratic formula to reach λ = cos θ ...
 6.1.15: Every permutation matrix leaves x = (1, 1, ..., 1) unchanged. Then λ...
 6.1.16: The determinant of A equals the product λ1 λ2 ... λn. Start with th...
 6.1.17: The sum of the diagonal entries (the trace) equals the sum of the e...
 6.1.18: If A has λ1 = 4 and λ2 = 5 then det(A - λI) = (λ - 4)(λ - 5) = λ^2 ...
 6.1.19: A 3 by 3 matrix B is known to have eigenvalues 0, 1,2. This informa...
 6.1.20: Choose the last rows of A and C to give eigenvalues 4, 7 and 1, 2, ...
 6.1.21: The eigenvalues of A equal the eigenvalues of AT. This is because d...
 6.1.22: Construct any 3 by 3 Markov matrix M: positive entries down each co...
 6.1.23: Find three 2 by 2 matrices that have λ1 = λ2 = 0. The trace is zero...
 6.1.24: Suppose A and B have the same eigenvalues λ1, ..., λn with the sam...
 6.1.25: Suppose A and B have the same eigenvalues λ1, ..., λn with the sam...
 6.1.26: The block B has eigenvalues 1, 2 and C has eigenvalues 3,4 and D ha...
 6.1.27: Find the rank and the four eigenvalues of A and C: 1 1 1 1 1 0 A= 1...
 6.1.28: Subtract I from the previous A. Find the λ's and then the determina...
 6.1.29: Subtract I from the previous A. Find the λ's and then the determina...
 6.1.30: When a + b = c + d show that (1, 1) is an eigenvector and find both...
 6.1.31: When a + b = c + d show that (1, 1) is an eigenvector and find both...
 6.1.32: Suppose A has eigenvalues 0, 3, 5 with independent eigenvectors u, ...
 6.1.33: Suppose u, v are orthonormal vectors in R^2, and A = uv^T. Compute...
 6.1.34: Find the eigenvalues of this permutation matrix P from det(P - λI)...
 6.1.35: There are six 3 by 3 permutation matrices P. What numbers can be th...
 6.1.36: Is there a real 2 by 2 matrix (other than I) with A^3 = I? Its eige...
 6.1.37: (a) Find the eigenvalues and eigenvectors of A. They depend on c: (...
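Many of the exercises above rely on two facts from this section: the trace of A equals the sum of its eigenvalues, and the determinant equals their product. A minimal NumPy sketch of both checks (the 2 by 2 matrix is illustrative, not taken from a particular exercise):

```python
import numpy as np

# Illustrative 2 by 2 matrix (not from a specific exercise above)
A = np.array([[1.0, 3.0],
              [2.0, 2.0]])

eigenvalues = np.linalg.eigvals(A)   # roots of det(A - lambda*I) = 0

# Trace = sum of eigenvalues; determinant = product of eigenvalues
assert np.isclose(eigenvalues.sum(), np.trace(A))
assert np.isclose(eigenvalues.prod(), np.linalg.det(A))

print(np.sort(eigenvalues.real))
```

Here det(A - λI) = λ^2 - 3λ - 4 = (λ - 4)(λ + 1), so the eigenvalues are 4 and -1; their sum 3 is the trace and their product -4 is the determinant.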
Solutions for Chapter 6.1: Introduction to Eigenvalues
Full solutions for Introduction to Linear Algebra, 4th Edition
ISBN: 9780980232714

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

Exponential e^(At) = I + At + (At)^2/2! + ...
has derivative Ae^(At); e^(At) u(0) solves u' = Au.
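A quick numerical check of this definition using SciPy's expm (the matrix A below is an illustrative example, chosen so that e^(At) is a known rotation matrix):

```python
import numpy as np
from scipy.linalg import expm

# A generates rotations: exp(At) = [[cos t, sin t], [-sin t, cos t]]
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
t = 0.5

E = expm(A * t)
R = np.array([[np.cos(t), np.sin(t)],
              [-np.sin(t), np.cos(t)]])
assert np.allclose(E, R)

# u(t) = exp(At) u(0) solves u' = Au: check the derivative numerically
u0 = np.array([1.0, 0.0])
h = 1e-6
du = (expm(A * (t + h)) @ u0 - expm(A * (t - h)) @ u0) / (2 * h)
assert np.allclose(du, A @ (E @ u0), atol=1e-5)
```

The central-difference derivative of e^(At) u(0) agrees with A e^(At) u(0), which is exactly the statement that it solves u' = Au.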

Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use A^H in place of A^T for complex A.

Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicular, from Ax = 0, with dimensions r and n - r). Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.

Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n - 1)/2 edges between nodes. A tree has only n - 1 edges and no closed loops.

Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and -).

Jordan form J = M^(-1) A M.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k where N_k has 1's on diagonal 1 (the superdiagonal). Each block has one eigenvalue λ_k and one eigenvector.

Matrix multiplication AB.
The i, j entry of AB is (row i of A)·(column j of B) = Σ_k a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k of A)(row k of B). All these equivalent definitions come from the rule that AB times x equals A times Bx.
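The four equivalent definitions can be checked directly in NumPy (the random matrices here are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

AB = A @ B

# Entry form: (AB)_ij = sum over k of a_ik * b_kj
entry = sum(A[1, k] * B[k, 0] for k in range(4))
assert np.isclose(entry, AB[1, 0])

# By columns: column j of AB = A times column j of B
assert np.allclose(A @ B[:, 1], AB[:, 1])

# By rows: row i of AB = (row i of A) times B
assert np.allclose(A[2, :] @ B, AB[2, :])

# Columns times rows: AB = sum over k of (column k of A)(row k of B)
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(4))
assert np.allclose(outer_sum, AB)
```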

Minimal polynomial of A.
The lowest-degree polynomial m with m(A) = zero matrix. This is p(λ) = det(A - λI) if no eigenvalues are repeated; m(λ) always divides p(λ).

Multiplication Ax
= x_1 (column 1) + ... + x_n (column n) = combination of columns.

Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate) / (jth pivot).
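A one-step sketch of this elimination rule in NumPy (the 2 by 2 matrix is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])

# Multiplier: (entry to eliminate) / (pivot) = 6 / 2 = 3
l21 = A[1, 0] / A[0, 0]

# Subtract l21 times the pivot row from row 2 to zero out the (2,1) entry
U = A.copy()
U[1, :] -= l21 * U[0, :]
assert U[1, 0] == 0.0

# L stores the multiplier below the diagonal, and A = LU
L = np.array([[1.0, 0.0],
              [l21, 1.0]])
assert np.allclose(L @ U, A)
```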

Normal matrix.
If NN^T = N^T N, then N has orthonormal (complex) eigenvectors.

Particular solution x_p.
Any solution to Ax = b; often x_p has free variables = 0.

Schwarz inequality
|v·w| <= ||v|| ||w||. Then |v^T A w|^2 <= (v^T A v)(w^T A w) for positive definite A.
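Both inequalities are easy to verify numerically (the random vectors, and a positive definite A built as M M^T + 3I, are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(3)
w = rng.standard_normal(3)

# Schwarz: |v.w| <= ||v|| ||w||
assert abs(v @ w) <= np.linalg.norm(v) * np.linalg.norm(w)

# Generalized form for positive definite A: |v^T A w|^2 <= (v^T A v)(w^T A w)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)   # positive definite by construction
assert (v @ A @ w) ** 2 <= (v @ A @ v) * (w @ A @ w)
```

The second line is the Schwarz inequality for the inner product (v, w) = v^T A w, which is a valid inner product exactly because A is positive definite.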

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x >= 0 are satisfied). Minimum cost at a corner!

Skew-symmetric matrix K.
The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^(Kt) is an orthogonal matrix.

Spectrum of A = the set of eigenvalues {λ_1, ..., λ_n}.
Spectral radius = max of |λ_i|.
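A short NumPy check (the matrix is an illustrative skew-symmetric example with eigenvalues +2i and -2i, so the spectral radius is 2):

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [-2.0, 0.0]])   # eigenvalues are +2i and -2i

spectrum = np.linalg.eigvals(A)
spectral_radius = max(abs(lam) for lam in spectrum)
assert np.isclose(spectral_radius, 2.0)
```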

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.

Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t - k).