Linear Algebra Done Right (Undergraduate Texts in Mathematics) 3rd Edition - Solutions by Chapter

- Chapter 1.A: Rn and Cn
- Chapter 1.B: Definition of Vector Space
- Chapter 1.C: Subspaces
- Chapter 2.A: Span and Linear Independence
- Chapter 2.B: Bases
- Chapter 2.C: Dimension
- Chapter 3.A: The Vector Space of Linear Maps
- Chapter 3.B: Null Spaces and Ranges
- Chapter 3.C: Matrices
- Chapter 3.D: Invertibility and Isomorphic Vector Spaces
- Chapter 3.E: Products and Quotients of Vector Spaces
- Chapter 3.F: Duality
- Chapter 4: Polynomials
- Chapter 5.A: Invariant Subspaces
- Chapter 5.B: Eigenvectors and Upper-Triangular Matrices
- Chapter 5.C: Eigenspaces and Diagonal Matrices
- Chapter 6.A: Inner Products and Norms
- Chapter 6.B: Orthonormal Bases
- Chapter 6.C: Orthogonal Complements and Minimization Problems
- Chapter 7.A: Self-Adjoint and Normal Operators
- Chapter 7.B: The Spectral Theorem
- Chapter 7.C: Positive Operators and Isometries
- Chapter 7.D: Polar Decomposition and Singular Value Decomposition
- Chapter 8.A: Generalized Eigenvectors and Nilpotent Operators
- Chapter 8.B: Decomposition of an Operator
- Chapter 8.C: Characteristic and Minimal Polynomials
- Chapter 8.D: Jordan Form
- Chapter 9.A: Complexification
- Chapter 9.B: Operators on Real Inner Product Spaces
- Chapter 10.A: Trace
- Chapter 10.B: Determinant
Key math terms and definitions covered in this textbook:
Complex conjugate z̄ = a - ib for any complex number z = a + ib. Then z z̄ = |z|^2.
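A quick check of this identity in plain Python (the number 3 + 4i is just an illustrative choice):

```python
# z.conjugate() is z-bar; multiplying z by its conjugate recovers |z|^2
z = 3 + 4j
print(z * z.conjugate())   # (25+0j), since 9 + 16 = 25
print(abs(z) ** 2)         # 25.0
```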
Cross product u × v in R^3:
Vector perpendicular to u and v, length ||u|| ||v|| |sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
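A minimal sketch of these properties, assuming NumPy as the tooling (the vectors are illustrative):

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])

w = np.cross(u, v)
print(w)                    # [ 6. -3.  1.]
print(w @ u, w @ v)         # 0.0 0.0 -> perpendicular to both u and v
print(np.linalg.norm(w))    # area of the parallelogram spanned by u and v
```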
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
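A short demonstration, assuming SciPy's `lu` routine. Note SciPy factors with partial pivoting and returns A = PLU; the matrix below is chosen so no row exchange occurs and P is the identity:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

P, L, U = lu(A)
print(L)                            # unit diagonal, multiplier l21 = 0.5
print(U)                            # the upper triangular result of elimination
print(np.allclose(A, P @ L @ U))    # True: L brings U back to A
```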
Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
The column space, nullspace, row space, and left nullspace of A. Use A^H (the conjugate transpose) in place of A^T for complex A.
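A sketch of computing orthonormal bases for all four subspaces, assuming SciPy's `orth` and `null_space` helpers (the rank-1 matrix is illustrative):

```python
import numpy as np
from scipy.linalg import orth, null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank r = 1

print(orth(A))            # orthonormal basis of the column space C(A)
print(null_space(A))      # basis of N(A), dimension n - r = 2
print(orth(A.T))          # basis of the row space C(A^T)
print(null_space(A.T))    # basis of the left nullspace N(A^T)
```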
Hermitian matrix A^H = Ā^T = A.
Complex analog aji = āij of a symmetric matrix.
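A minimal NumPy check on an illustrative Hermitian matrix:

```python
import numpy as np

A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])

print(np.allclose(A, A.conj().T))   # True: A equals its conjugate transpose
print(np.linalg.eigvalsh(A))        # [1. 4.] -> Hermitian eigenvalues are real
```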
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j.
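A small sketch that builds such an incidence matrix in NumPy (the 3-node graph is an assumption for illustration):

```python
import numpy as np

# Directed graph on nodes 0, 1, 2 with edges 0->1, 1->2, 0->2
edges = [(0, 1), (1, 2), (0, 2)]
m, n = len(edges), 3

A = np.zeros((m, n))
for row, (i, j) in enumerate(edges):
    A[row, i] = -1            # edge leaves node i
    A[row, j] = 1             # edge enters node j
print(A)
print(A @ np.ones(n))         # [0. 0. 0.] -> (1,1,1) is in the nullspace
```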
Inverse matrix A^-1.
Square matrix with A^-1 A = I and A A^-1 = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)ij = Cji / det A.
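A quick NumPy check of the inverse rules (the matrices are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])

Ainv = np.linalg.inv(A)
print(np.allclose(A @ Ainv, np.eye(2)))                    # True
# Inverse of a product reverses the order: (AB)^-1 = B^-1 A^-1
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))    # True
```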
Matrix multiplication AB.
The i, j entry of AB is (row i of A) · (column j of B) = Σk aik bkj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k of A)(row k of B). All these equivalent definitions come from the rule that (AB)x equals A(Bx).
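The equivalent views can be checked directly, here with NumPy on illustrative 2 by 2 matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

AB = A @ B
# Entry view: (AB)ij = (row i of A) . (column j of B)
print(np.isclose(AB[0, 1], A[0, :] @ B[:, 1]))       # True
# Column view: column j of AB = A times column j of B
print(np.allclose(AB[:, 1], A @ B[:, 1]))            # True
# Columns times rows: AB = sum of rank-one (column k)(row k) products
print(np.allclose(AB, sum(np.outer(A[:, k], B[k, :]) for k in range(2))))
```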
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A - λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
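A sketch with NumPy/SciPy on the classic defective example, where λ = 1 has AM = 2 but only GM = 1:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

print(np.linalg.eigvals(A))              # [1. 1.] -> lambda = 1 has AM = 2
E = null_space(A - 1.0 * np.eye(2))      # eigenspace for lambda = 1
print(E.shape[1])                        # 1 -> GM = 1 (a "defective" matrix)
```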
Norm ||A||.
The "ℓ^2 norm" of A is the maximum ratio ||Ax|| / ||x|| = σmax. Then ||Ax|| ≤ ||A|| ||x||, ||AB|| ≤ ||A|| ||B||, and ||A + B|| ≤ ||A|| + ||B||. Frobenius norm: ||A||F^2 = Σ Σ aij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |aij|.
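These four norms can be compared in NumPy, where the `ord` argument selects the norm (the matrix is illustrative):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0, 4.0]])

print(np.linalg.norm(A, 2))        # l2 norm = largest singular value
print(np.linalg.norm(A, 'fro'))    # Frobenius norm = sqrt of sum of squares
print(np.linalg.norm(A, 1))        # largest absolute column sum
print(np.linalg.norm(A, np.inf))   # largest absolute row sum
```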
Nullspace N(A).
All solutions to Ax = 0. Dimension n - r = (# columns) - rank.
Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers have |ℓij| ≤ 1. See condition number.
Particular solution xp.
Any solution to Ax = b; often xp has free variables = 0.
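One way to obtain a particular solution numerically, assuming NumPy's `lstsq` (note it returns the minimum-norm particular solution, not the free-variables-equal-zero choice):

```python
import numpy as np

# Underdetermined system: one equation, three unknowns (x1 + x2 + x3 = 6)
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([6.0])

xp, *_ = np.linalg.lstsq(A, b, rcond=None)
print(xp)                         # [2. 2. 2.] -> a particular solution
print(np.allclose(A @ xp, b))     # True
```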
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S^⊥. If the columns of A are a basis for S, then P = A (A^T A)^-1 A^T.
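A NumPy sketch of the formula and its properties (the basis matrix A is illustrative):

```python
import numpy as np

# Columns of A form a basis for the subspace S
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

P = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(P @ P, P))    # True: P^2 = P
print(np.allclose(P, P.T))      # True: P = P^T

b = np.array([1.0, 2.0, 3.0])
p = P @ b
print(A.T @ (b - p))            # ~[0. 0.] -> error is perpendicular to S
```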
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). The minimum cost is attained at a corner!
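A minimal sketch using SciPy's `linprog` (an assumed tool; its default "highs" solver includes a simplex method, and its default bounds already enforce x ≥ 0). The cost vector and constraint are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize c @ x subject to A_eq @ x = b_eq and x >= 0
c = np.array([1.0, 2.0, 0.0])
A_eq = np.array([[1.0, 1.0, 1.0]])
b_eq = np.array([4.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq)
print(res.x, res.fun)   # optimum at a corner: x = [0. 0. 4.], cost 0.0
```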
Singular matrix A.
A square matrix that has no inverse: det(A) = 0.
Singular Value Decomposition (SVD).
A = U Σ V^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with Avi = σi ui and singular values σi > 0. The last columns are orthonormal bases of the nullspaces N(A^T) and N(A).
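A NumPy sketch on an illustrative matrix; `svd` returns U, the singular values, and V^T:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, s, Vt = np.linalg.svd(A)
print(s)                                        # singular values, descending
print(np.allclose(A, U @ np.diag(s) @ Vt))      # True: A = U Sigma V^T
# Check A v1 = sigma1 u1 for the first singular pair
print(np.allclose(A @ Vt[0], s[0] * U[:, 0]))   # True
```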
Spanning set v1, ..., vm.
Combinations of v1, ..., vm fill the space. The columns of A span C(A)!
Special solutions to As = 0.
One free variable is si = 1, the other free variables = 0.
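SymPy's `nullspace` returns exactly one special solution per free variable (SymPy is an assumed tool; the matrix is already in reduced form for clarity):

```python
import sympy as sp

# Pivots in columns 1 and 2; column 3 is free
A = sp.Matrix([[1, 0, 2],
               [0, 1, 3]])

# One special solution: the free variable set to 1, pivot variables follow
for s in A.nullspace():
    print(s.T)   # Matrix([[-2, -3, 1]])
```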
Wavelets wjk(t).
Stretch and shift the time axis to create wjk(t) = w00(2^j t - k).
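A sketch that takes the Haar function as the mother wavelet w00 (an assumption; the glossary does not fix a particular w00, and the helper names here are illustrative):

```python
import numpy as np

def w00(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), else 0."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1), -1.0, 0.0))

def w(j, k, t):
    """Stretched and shifted copy: w_jk(t) = w00(2**j * t - k)."""
    return w00(2**j * t - k)

t = np.linspace(0, 1, 9)
print(w(1, 1, t))   # nonzero only on [1/2, 1): compressed by 2, shifted by k
```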