- Chapter 1.1: Introduction
- Chapter 1.2: Vector Spaces
- Chapter 1.3: Subspaces
- Chapter 1.4: Linear Combinations and Systems of Linear Equations
- Chapter 1.5: Linear Dependence and Linear Independence
- Chapter 1.6: Bases and Dimension
- Chapter 1.7: Maximal Linearly Independent Subsets
- Chapter 2.1: Linear Transformations, Null Spaces, and Ranges
- Chapter 2.2: The Matrix Representation of a Linear Transformation
- Chapter 2.3: Composition of Linear Transformations and Matrix Multiplication
- Chapter 2.4: Invertibility and Isomorphisms
- Chapter 2.5: The Change of Coordinate Matrix
- Chapter 2.6: Dual Spaces
- Chapter 2.7: Homogeneous Linear Differential Equations with Constant Coefficients
- Chapter 3.1: Elementary Matrix Operations and Elementary Matrices
- Chapter 3.2: The Rank of a Matrix and Matrix Inverses
- Chapter 3.3: Systems of Linear Equations - Theoretical Aspects
- Chapter 3.4: Systems of Linear Equations - Computational Aspects
- Chapter 4.1: Determinants of Order 2
- Chapter 4.2: Determinants of Order n
- Chapter 4.3: Properties of Determinants
- Chapter 4.4: Summary - Important Facts about Determinants
- Chapter 4.5: A Characterization of the Determinant
- Chapter 5.1: Eigenvalues and Eigenvectors
- Chapter 5.2: Diagonalizability
- Chapter 5.3: Matrix Limits and Markov Chains
- Chapter 5.4: Invariant Subspaces and the Cayley-Hamilton Theorem
- Chapter 6.1: Inner Products and Norms
- Chapter 6.2: Gram-Schmidt Orthogonalization Process
- Chapter 6.3: The Adjoint of a Linear Operator
- Chapter 6.4: Normal and Self-Adjoint Operators
- Chapter 6.5: Unitary and Orthogonal Operators and Their Matrices
- Chapter 6.6: Orthogonal Projections and the Spectral Theorem
- Chapter 6.7: The Singular Value Decomposition and the Pseudoinverse
- Chapter 6.8: Bilinear and Quadratic Forms
- Chapter 6.9: Einstein's Special Theory of Relativity
- Chapter 6.10: Conditioning and the Rayleigh Quotient
- Chapter 6.11: The Geometry of Orthogonal Operators
- Chapter 7.1: The Jordan Canonical Form I
- Chapter 7.2: The Jordan Canonical Form II
- Chapter 7.3: The Minimal Polynomial
- Chapter 7.4: The Rational Canonical Form
Complete solution x = xp + xn to Ax = b.
(Particular solution xp) + (any xn in the nullspace).
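A minimal NumPy/SciPy sketch of this decomposition (the 2 x 3 system here is illustrative):

```python
import numpy as np
from scipy.linalg import null_space

# A 2x3 system with a one-dimensional nullspace.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 7.0]])
b = np.array([6.0, 13.0])

xp, *_ = np.linalg.lstsq(A, b, rcond=None)  # one particular solution xp
N = null_space(A)                           # basis for N(A), one column here

# Every x = xp + c * xn solves Ax = b.
for c in (0.0, 1.0, -2.5):
    x = xp + c * N[:, 0]
    assert np.allclose(A @ x, b)
```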
Cross product u × v in R^3.
Vector perpendicular to u and v, length ||u|| ||v|| |sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
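NumPy's cross product can confirm both properties numerically (vectors chosen arbitrarily):

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
w = np.cross(u, v)                                     # u x v = (6, -3, 1)

assert np.isclose(w @ u, 0) and np.isclose(w @ v, 0)   # perpendicular to u and v

# Length ||u x v|| equals ||u|| ||v|| |sin(theta)|, the parallelogram area.
cos_t = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
sin_t = np.sqrt(1 - cos_t**2)
assert np.isclose(np.linalg.norm(w),
                  np.linalg.norm(u) * np.linalg.norm(v) * sin_t)
```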
Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra -eij in the i, j entry (i ≠ j). Then EijA subtracts eij times row j of A from row i.
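A small sketch of one elimination step, with an illustrative 2 x 2 matrix (NumPy assumed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])

i, j = 1, 0
eij = A[i, j] / A[j, j]   # multiplier = (entry to eliminate) / (jth pivot) = 3

E = np.eye(2)
E[i, j] = -eij            # identity with an extra -eij in the i, j entry

print(E @ A)              # row i minus eij * row j: [[2., 1.], [0., 5.]]
```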
Fundamental theorem of linear algebra.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicular from Ax = 0, with dimensions r and n - r). Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.
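The orthogonality of these subspaces is easy to verify numerically; a sketch with an illustrative rank-2 matrix (SciPy's null_space assumed available):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])   # rank r = 2, so dim N(A) = n - r = 1

N = null_space(A)                 # basis for N(A)
assert np.allclose(A @ N, 0)      # every row of A is perpendicular to N(A)

M = null_space(A.T)               # basis for the left nullspace N(A^T)
assert np.allclose(A.T @ M, 0)    # every column of A is perpendicular to N(A^T)
```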
Determinant |A| = det(A).
|A^-1| = 1/|A| and |A^T| = |A|. The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n - 1, and the volume of a box = |det(A)|.
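These identities can be spot-checked with NumPy (matrix chosen arbitrarily):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [4.0, 2.0]])   # det(A) = 3*2 - 1*4 = 2

d = np.linalg.det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / d)   # |A^-1| = 1/|A|
assert np.isclose(np.linalg.det(A.T), d)                    # |A^T| = |A|
print(abs(d))   # 2.0 = area of the parallelogram spanned by the rows
```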
Left nullspace N(A^T).
Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.
Markov matrix M.
All mij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If mij > 0, the columns of M^k approach the steady state eigenvector s with Ms = s > 0.
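A sketch of the steady-state behavior with an illustrative 2 x 2 Markov matrix (NumPy assumed):

```python
import numpy as np

M = np.array([[0.8, 0.3],
              [0.2, 0.7]])   # all entries > 0, each column sums to 1

vals, vecs = np.linalg.eig(M)
assert np.isclose(max(abs(vals)), 1.0)       # largest eigenvalue is 1

s = vecs[:, np.argmax(vals.real)].real
s = s / s.sum()                              # steady state s = [0.6, 0.4], Ms = s

# Both columns of M^k approach s as k grows.
assert np.allclose(np.linalg.matrix_power(M, 50), np.column_stack([s, s]))
```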
Multiplier eij.
The pivot row j is multiplied by eij and subtracted from row i to eliminate the i, j entry: eij = (entry to eliminate) / (jth pivot).
Normal matrix N.
If N N^T = N^T N, then N has orthonormal (complex) eigenvectors.
Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^-1. Preserves lengths and angles: ||Qx|| = ||x|| and (Qx)^T(Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
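A quick numerical check with a rotation matrix (illustrative angle, NumPy assumed):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation = orthogonal

assert np.allclose(Q.T @ Q, np.eye(2))            # Q^T = Q^-1

x = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))   # length preserved
assert np.allclose(abs(np.linalg.eigvals(Q)), 1.0)            # all |lambda| = 1
```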
Outer product uv^T.
Column times row = a rank one matrix.
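In NumPy this is np.outer; a one-line rank check (vectors illustrative):

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0, 5.0])
A = np.outer(u, v)                  # column u times row v, a 2x3 matrix
print(np.linalg.matrix_rank(A))     # 1
```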
Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = 1 or -1) based on the number of row exchanges to reach I.
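A sketch of building P from the rows of I (order chosen arbitrarily, NumPy assumed):

```python
import numpy as np

order = [2, 0, 1]            # one of the n! = 6 orders of rows 0, 1, 2
P = np.eye(3)[order]         # rows of I in that order

A = np.arange(9.0).reshape(3, 3)
assert np.allclose(P @ A, A[order])   # PA puts the rows of A in the same order
print(np.linalg.det(P))               # +1 here: an even permutation
```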
Plane (or hyperplane) in R^n.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A = basis for S, then P = A(A^T A)^-1 A^T.
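A sketch verifying these properties with an illustrative 3 x 2 basis matrix (NumPy assumed):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                 # columns = basis for the subspace S

P = A @ np.linalg.inv(A.T @ A) @ A.T       # P = A (A^T A)^-1 A^T

assert np.allclose(P @ P, P) and np.allclose(P, P.T)   # P^2 = P = P^T

b = np.array([1.0, 3.0, 2.0])
p = P @ b                                  # closest point to b in S
e = b - p
assert np.allclose(A.T @ e, 0)             # error is perpendicular to S
```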
Rank r(A)
= number of pivots = dimension of column space = dimension of row space.
Rayleigh quotient q(x) = x^T Ax / x^T x for symmetric A: λmin ≤ q(x) ≤ λmax.
Those extremes are reached at the eigenvectors x for λmin(A) and λmax(A).
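A numerical spot-check of the bounds (symmetric matrix and random seed are illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, eigenvalues 1 and 3

q = lambda x: (x @ A @ x) / (x @ x)

vals, vecs = np.linalg.eigh(A)      # eigenvalues in ascending order
rng = np.random.default_rng(0)
for _ in range(100):                # lambda_min <= q(x) <= lambda_max
    x = rng.standard_normal(2)
    assert vals[0] - 1e-12 <= q(x) <= vals[-1] + 1e-12

# The extremes are reached at the eigenvectors.
assert np.isclose(q(vecs[:, 0]), vals[0])
assert np.isclose(q(vecs[:, -1]), vals[-1])
```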
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
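SymPy computes rref exactly; a sketch with an illustrative rank-2 matrix:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 1, 3],
               [2, 4, 0, 4],
               [3, 6, 1, 7]])

R, pivot_cols = A.rref()
print(R)            # pivots are 1, with zeros above and below them
print(pivot_cols)   # (0, 2): rank r = 2

# The r nonzero rows of R form a basis for the row space of A.
```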
Row picture of Ax = b.
Each equation gives a plane in R^n; the planes intersect at x.
Schwarz inequality.
|v·w| ≤ ||v|| ||w||. Then |v^T A w|^2 ≤ (v^T A v)(w^T A w) for positive definite A.
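Both inequalities are easy to spot-check numerically (vectors and the positive definite A are illustrative):

```python
import numpy as np

v = np.array([1.0, -2.0, 3.0])
w = np.array([4.0, 0.0, -1.0])
assert abs(v @ w) <= np.linalg.norm(v) * np.linalg.norm(w)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # positive definite
v2, w2 = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert (v2 @ A @ w2) ** 2 <= (v2 @ A @ v2) * (w2 @ A @ w2)
```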
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
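SciPy's linprog solves the same standard-form problem; note that its modern default solver (HiGHS) is not the classical textbook simplex, though the optimum still lands at a corner. The data below is illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize c^T x subject to Ax = b and x >= 0.
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)
print(res.x, res.fun)   # corner x* = [0, 0, 4], minimum cost 0
```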