- Chapter 1.1: Introduction
- Chapter 1.2: Vector Spaces
- Chapter 1.3: Subspaces
- Chapter 1.4: Linear Combinations and Systems of Linear Equations
- Chapter 1.5: Linear Dependence and Linear Independence
- Chapter 1.6: Bases and Dimension
- Chapter 1.7: Maximal Linearly Independent Subsets
- Chapter 2.1: Linear Transformations, Null Spaces, and Ranges
- Chapter 2.2: The Matrix Representation of a Linear Transformation
- Chapter 2.3: Composition of Linear Transformations and Matrix Multiplication
- Chapter 2.4: Invertibility and Isomorphisms
- Chapter 2.5: The Change of Coordinate Matrix
- Chapter 2.6: Dual Spaces
- Chapter 2.7: Homogeneous Linear Differential Equations with Constant Coefficients
- Chapter 3.1: Elementary Matrix Operations and Elementary Matrices
- Chapter 3.2: The Rank of a Matrix and Matrix Inverses
- Chapter 3.3: Systems of Linear Equations - Theoretical Aspects
- Chapter 3.4: Systems of Linear Equations - Computational Aspects
- Chapter 4.1: Determinants of Order 2
- Chapter 4.2: Determinants of Order n
- Chapter 4.3: Properties of Determinants
- Chapter 4.4: Summary - Important Facts about Determinants
- Chapter 4.5: A Characterization of the Determinant
- Chapter 5.1: Eigenvalues and Eigenvectors
- Chapter 5.2: Diagonalizability
- Chapter 5.3: Matrix Limits and Markov Chains
- Chapter 5.4: Invariant Subspaces and the Cayley-Hamilton Theorem
- Chapter 6.1: Inner Products and Norms
- Chapter 6.2: Gram-Schmidt Orthogonalization Process
- Chapter 6.3: The Adjoint of a Linear Operator
- Chapter 6.4: Normal and Self-Adjoint Operators
- Chapter 6.5: Unitary and Orthogonal Operators and Their Matrices
- Chapter 6.6: Orthogonal Projections and the Spectral Theorem
- Chapter 6.7: The Singular Value Decomposition and the Pseudoinverse
- Chapter 6.8: Bilinear and Quadratic Forms
- Chapter 6.9: Einstein's Special Theory of Relativity
- Chapter 6.10: Conditioning and the Rayleigh Quotient
- Chapter 6.11: The Geometry of Orthogonal Operators
- Chapter 7.1: The Jordan Canonical Form I
- Chapter 7.2: The Jordan Canonical Form II
- Chapter 7.3: The Minimal Polynomial
- Chapter 7.4: The Rational Canonical Form
Adjacency matrix of a graph.
Square matrix with $a_{ij} = 1$ when there is an edge from node $i$ to node $j$; otherwise $a_{ij} = 0$. $A = A^T$ when edges go both ways (undirected).
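As a quick check of the definition, here is a minimal NumPy sketch; the 4-node edge list is made up for illustration:

```python
import numpy as np

# Hypothetical 4-node directed graph as an edge list of (from, to) pairs.
edges = [(0, 1), (1, 2), (2, 0), (3, 1)]

n = 4
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1  # a_ij = 1 when there is an edge from node i to node j

# A equals A.T exactly when every edge goes both ways (undirected graph).
print(A)
print("undirected:", np.array_equal(A, A.T))
```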
Affine transformation.
$Tv = Av + v_0$ = linear transformation plus shift.
Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Cholesky factorization.
$A = C^TC = (L\sqrt{D})(L\sqrt{D})^T$ for positive definite $A$.
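A short NumPy sketch of the same identity on an example 2x2 positive definite matrix; `numpy.linalg.cholesky` returns the lower triangular factor $L$ with $A = LL^T$, so $C = L^T$:

```python
import numpy as np

# Example positive definite matrix (symmetric, positive eigenvalues).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)       # lower triangular, A = L @ L.T
C = L.T                         # so A = C^T C with upper triangular C
print(np.allclose(A, C.T @ C))  # True
```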
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite $Ax = b$ by minimizing $\tfrac{1}{2}x^TAx - x^Tb$ over growing Krylov subspaces.
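The sketch below is one common textbook formulation of the CG iteration (not necessarily the exact steps from the book's Chapter 9), with made-up test data:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Minimize (1/2) x^T A x - x^T b for symmetric positive definite A."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x          # residual = negative gradient
    p = r.copy()           # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next direction, A-conjugate to earlier ones
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))    # close to np.linalg.solve(A, b)
```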
Elimination.
A sequence of row operations that reduces $A$ to an upper triangular $U$ or to the reduced form $R = \mathrm{rref}(A)$. Then $A = LU$ with multipliers $\ell_{ij}$ in $L$, or $PA = LU$ with row exchanges in $P$, or $EA = R$ with an invertible $E$.
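For illustration, SciPy's `scipy.linalg.lu` computes this factorization with the row exchanges collected in a permutation matrix (a sketch with example data; note SciPy's convention is $A = PLU$):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 1.0, 1.0]])

# SciPy returns A = P @ L @ U, i.e. P.T @ A = L @ U with row exchanges in P.
P, L, U = lu(A)
print(np.allclose(A, P @ L @ U))  # True
print(L)  # unit lower triangular: the multipliers l_ij sit below the diagonal
```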
Inverse matrix $A^{-1}$.
Square matrix with $A^{-1}A = I$ and $AA^{-1} = I$. No inverse if $\det A = 0$ and $\mathrm{rank}(A) < n$ and $Ax = 0$ for a nonzero vector $x$. The inverses of $AB$ and $A^T$ are $B^{-1}A^{-1}$ and $(A^{-1})^T$. Cofactor formula: $(A^{-1})_{ij} = C_{ji}/\det A$.
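A small NumPy check of these rules on made-up 2x2 matrices, including the cofactor formula:

```python
import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])
B = np.array([[1.0, 2.0], [3.0, 7.0]])

Ainv = np.linalg.inv(A)
Binv = np.linalg.inv(B)

# (AB)^{-1} = B^{-1} A^{-1} and (A^T)^{-1} = (A^{-1})^T
print(np.allclose(np.linalg.inv(A @ B), Binv @ Ainv))  # True
print(np.allclose(np.linalg.inv(A.T), Ainv.T))         # True

# Cofactor formula for 2x2: (A^{-1})_{ij} = C_{ji} / det A
detA = np.linalg.det(A)
C = np.array([[A[1, 1], -A[1, 0]],      # cofactors C_ij of A
              [-A[0, 1], A[0, 0]]])
print(np.allclose(Ainv, C.T / detA))    # True
```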
Multiplicities AM and GM.
The algebraic multiplicity AM of $\lambda$ is the number of times $\lambda$ appears as a root of $\det(A - \lambda I) = 0$. The geometric multiplicity GM is the number of independent eigenvectors for $\lambda$ (= dimension of the eigenspace).
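A sketch of both counts on a deliberately defective example (AM = 2 but GM = 1), using NumPy:

```python
import numpy as np

# Defective example: lambda = 5 is a double root with only one eigenvector.
A = np.array([[5.0, 1.0],
              [0.0, 5.0]])
lam = 5.0

# AM: multiplicity of lam as a root of det(A - lambda*I) = 0
eigenvalues = np.linalg.eigvals(A)
AM = np.sum(np.isclose(eigenvalues, lam))

# GM: dimension of the eigenspace = n - rank(A - lam*I)
n = A.shape[0]
GM = n - np.linalg.matrix_rank(A - lam * np.eye(n))

print(AM, GM)   # 2 1  (GM <= AM always)
```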
Network.
A directed graph that has constants $c_1, \ldots, c_m$ associated with the edges.
Normal matrix.
If $NN^T = N^TN$, then $N$ has orthonormal (complex) eigenvectors.
Orthogonal subspaces.
Every $v$ in $V$ is orthogonal to every $w$ in $W$.
Rayleigh quotient $q(x) = x^TAx / x^Tx$ for symmetric $A$: $\lambda_{\min} \le q(x) \le \lambda_{\max}$.
Those extremes are reached at the eigenvectors $x$ for $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$.
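A quick NumPy verification on an example symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric example
q = lambda x: (x @ A @ x) / (x @ x)      # Rayleigh quotient

lams, Q = np.linalg.eigh(A)              # eigenvalues in ascending order
print(q(Q[:, 0]), lams[0])               # q at eigenvector = lambda_min = 1
print(q(Q[:, -1]), lams[-1])             # q at eigenvector = lambda_max = 3
print(lams[0] <= q(np.array([1.0, 0.3])) <= lams[-1])   # True for any x != 0
```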
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
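NumPy has no built-in rref, so this sketch uses SymPy's `Matrix.rref` on example data:

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 0],
            [3, 6, 1]])

R, pivot_cols = A.rref()   # reduced row echelon form and pivot column indices
print(R)           # pivots are 1 with zeros above and below
print(pivot_cols)  # (0, 2) -> rank r = 2; the 2 nonzero rows of R span the row space
```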
Right inverse $A^+$.
If $A$ has full row rank $m$, then $A^+ = A^T(AA^T)^{-1}$ has $AA^+ = I_m$.
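A NumPy sketch with an example full-row-rank matrix; in this case the formula agrees with `numpy.linalg.pinv`:

```python
import numpy as np

# Full row rank: 2 x 3 with rank m = 2 (example values).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

A_plus = A.T @ np.linalg.inv(A @ A.T)  # right inverse A^+ = A^T (A A^T)^{-1}
print(np.allclose(A @ A_plus, np.eye(2)))      # A A^+ = I_m
print(np.allclose(A_plus, np.linalg.pinv(A)))  # matches the pseudoinverse here
```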
Schur complement $S = D - CA^{-1}B$.
Appears in block elimination on $\begin{bmatrix} A & B \\ C & D \end{bmatrix}$.
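A small NumPy check with made-up blocks; the determinant identity $\det M = \det A \cdot \det S$ (valid when $A$ is invertible) follows from block elimination:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[2.0, 1.0]])
D = np.array([[4.0]])

M = np.block([[A, B], [C, D]])
S = D - C @ np.linalg.inv(A) @ B   # Schur complement of A in M

# Block elimination gives det(M) = det(A) * det(S) when A is invertible.
print(np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(S)))  # True
```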
Simplex method for linear programming.
The minimum cost vector $x^*$ is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints $Ax = b$ and $x \ge 0$ are satisfied). Minimum cost at a corner!
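SciPy's `scipy.optimize.linprog` solves this standard form; its default HiGHS backend includes a simplex variant, though not the hand-worked textbook tableau. A sketch with made-up data:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize c^T x subject to Ax = b and x >= 0 (example data).
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])

# bounds defaults to (0, None), i.e. the constraint x >= 0.
res = linprog(c, A_eq=A, b_eq=b)
print(res.x, res.fun)   # optimum (0, 0, 4) sits at a corner of the feasible set
```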
Singular Value Decomposition (SVD).
$A = U\Sigma V^T$ = (orthogonal)(diagonal)(orthogonal). The first $r$ columns of $U$ and $V$ are orthonormal bases of $C(A)$ and $C(A^T)$, with $Av_i = \sigma_i u_i$ and singular values $\sigma_i > 0$. The last columns are orthonormal bases of the nullspaces.
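A NumPy sketch verifying both properties on example data (`numpy.linalg.svd` returns $U$, the singular values, and $V^T$):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, sigma, Vt = np.linalg.svd(A)   # A = U @ diag(sigma) @ Vt
print(np.allclose(A, U @ np.diag(sigma) @ Vt))     # True
print(np.allclose(A @ Vt[0], sigma[0] * U[:, 0]))  # A v_i = sigma_i u_i
```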
Solvable system Ax = b.
The right side b is in the column space of A.
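Equivalently, $b$ is in $C(A)$ exactly when appending $b$ as a column does not raise the rank. A NumPy sketch on a made-up rank-1 example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1: column space is the line through (1, 2)

def solvable(A, b):
    # b in the column space of A <=> rank([A | b]) == rank(A)
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(solvable(A, np.array([3.0, 6.0])))   # True:  (3, 6) is a multiple of (1, 2)
print(solvable(A, np.array([1.0, 0.0])))   # False: no x solves Ax = b
```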
Spectral Theorem $A = Q\Lambda Q^T$.
Real symmetric $A$ has real $\lambda$'s and orthonormal $q$'s.
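A NumPy check using `numpy.linalg.eigh`, which handles the symmetric case:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # real symmetric example

lams, Q = np.linalg.eigh(A)              # real eigenvalues, orthonormal eigenvectors
print(np.allclose(A, Q @ np.diag(lams) @ Q.T))  # A = Q Lambda Q^T
print(np.allclose(Q.T @ Q, np.eye(2)))          # the q's are orthonormal
```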
Symmetric factorizations $A = LDL^T$ and $A = Q\Lambda Q^T$.
The signs of the eigenvalues in $\Lambda$ match the signs of the pivots in $D$.
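A sketch using SciPy's `scipy.linalg.ldl` (a Bunch-Kaufman $LDL^T$; in general $D$ may contain 2x2 blocks, though it is diagonal for this example):

```python
import numpy as np
from scipy.linalg import ldl

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])   # symmetric (and positive definite here)

lu, d, perm = ldl(A)               # factorization with A = lu @ d @ lu.T
lams = np.linalg.eigvalsh(A)       # eigenvalues of A

# Law of inertia: the counts of positive entries agree in D and Lambda.
print(np.allclose(A, lu @ d @ lu.T))             # True
print(np.sum(lams > 0), np.sum(np.diag(d) > 0))  # 3 3
```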