- Chapter 1.1: Introduction
- Chapter 1.2: Vector Spaces
- Chapter 1.3: Subspaces
- Chapter 1.4: Linear Combinations and Systems of Linear Equations
- Chapter 1.5: Linear Dependence and Linear Independence
- Chapter 1.6: Bases and Dimension
- Chapter 1.7: Maximal Linearly Independent Subsets
- Chapter 2.1: Linear Transformations, Null Spaces, and Ranges
- Chapter 2.2: The Matrix Representation of a Linear Transformation
- Chapter 2.3: Composition of Linear Transformations and Matrix Multiplication
- Chapter 2.4: Invertibility and Isomorphisms
- Chapter 2.5: The Change of Coordinate Matrix
- Chapter 2.6: Dual Spaces
- Chapter 2.7: Homogeneous Linear Differential Equations with Constant Coefficients
- Chapter 3.1: Elementary Matrix Operations and Elementary Matrices
- Chapter 3.2: The Rank of a Matrix and Matrix Inverses
- Chapter 3.3: Systems of Linear Equations: Theoretical Aspects
- Chapter 3.4: Systems of Linear Equations: Computational Aspects
- Chapter 4.1: Determinants of Order 2
- Chapter 4.2: Determinants of Order n
- Chapter 4.3: Properties of Determinants
- Chapter 4.4: Summary: Important Facts about Determinants
- Chapter 4.5: A Characterization of the Determinant
- Chapter 5.1: Eigenvalues and Eigenvectors
- Chapter 5.2: Diagonalizability
- Chapter 5.3: Matrix Limits and Markov Chains
- Chapter 5.4: Invariant Subspaces and the Cayley-Hamilton Theorem
- Chapter 6.1: Inner Products and Norms
- Chapter 6.2: Gram-Schmidt Orthogonalization Process
- Chapter 6.3: The Adjoint of a Linear Operator
- Chapter 6.4: Normal and Self-Adjoint Operators
- Chapter 6.5: Unitary and Orthogonal Operators and Their Matrices
- Chapter 6.6: Orthogonal Projections and the Spectral Theorem
- Chapter 6.7: The Singular Value Decomposition and the Pseudoinverse
- Chapter 6.8: Bilinear and Quadratic Forms
- Chapter 6.9: Einstein's Special Theory of Relativity
- Chapter 6.10: Conditioning and the Rayleigh Quotient
- Chapter 6.11: The Geometry of Orthogonal Operators
- Chapter 7.1: The Jordan Canonical Form I
- Chapter 7.2: The Jordan Canonical Form II
- Chapter 7.3: The Minimal Polynomial
- Chapter 7.4: The Rational Canonical Form
Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
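A minimal NumPy sketch of block multiplication (illustrative; the partition and the matrices are arbitrary choices, not from the text):

```python
import numpy as np

# Partition two 4x4 matrices into 2x2 blocks and multiply blockwise.
A = np.arange(16.0).reshape(4, 4)
B = np.eye(4) + np.ones((4, 4))

A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B12 = B[:2, :2], B[:2, 2:]
B21, B22 = B[2:, :2], B[2:, 2:]

# Block formula: (AB)_11 = A11 B11 + A12 B21, and so on for each block.
blockwise = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
assert np.allclose(blockwise, A @ B)
```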
Dimension of vector space
dim(V) = number of vectors in any basis for V.
Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
Free columns of A.
Columns without pivots; these are combinations of earlier columns.
Free variable x_i.
Column i has no pivot in elimination. We can give the n - r free variables any values, then Ax = b determines the r pivot variables (if solvable!).
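A short SymPy sketch (the matrix is an arbitrary example, not from the text) showing how elimination separates pivot columns from free columns:

```python
from sympy import Matrix

# Columns 1 and 3 (0-indexed) carry no pivots, so x2 and x4 are free.
A = Matrix([[1, 2, 1, 1],
            [2, 4, 3, 3],
            [1, 2, 2, 2]])
R, pivots = A.rref()   # reduced row echelon form and pivot column indices
free = [j for j in range(A.cols) if j not in pivots]
print(pivots, free)    # (0, 2) [1, 3]: r = 2 pivot variables, n - r = 2 free variables
```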
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j.
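A small Python sketch (the graph is a made-up example) building such an incidence matrix:

```python
import numpy as np

# Directed graph with 4 nodes and edges (0->1), (1->2), (2->0), (2->3).
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
M = np.zeros((len(edges), n))
for row, (i, j) in enumerate(edges):
    M[row, i] = -1   # edge leaves node i
    M[row, j] = +1   # edge enters node j
# Each row sums to 0, so the all-ones vector lies in the nullspace of M.
assert np.allclose(M @ np.ones(n), 0)
```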
Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and -).
Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).
Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.
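A one-check NumPy sketch of linearity for the matrix-multiplication example (the matrix, vectors, and scalars are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
v, w = rng.standard_normal(3), rng.standard_normal(3)
c, d = 2.0, -3.0
# T(x) = Ax satisfies T(cv + dw) = cT(v) + dT(w).
assert np.allclose(A @ (c * v + d * w), c * (A @ v) + d * (A @ w))
```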
Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and drawn from the standard normal distribution for randn.
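Assuming NumPy as a stand-in for MATLAB, rough analogues (illustrative only, not from the text):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
U = rng.uniform(0.0, 1.0, size=(n, n))   # analogue of MATLAB's rand(n)
G = rng.standard_normal(size=(n, n))     # analogue of MATLAB's randn(n)
```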
Rayleigh quotient q(x) = (x^T A x) / (x^T x) for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
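A NumPy sketch verifying the bounds (the symmetric matrix is randomly generated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((5, 5))
A = (S + S.T) / 2                        # symmetric A
eigvals, eigvecs = np.linalg.eigh(A)     # eigenvalues in ascending order

def q(x):
    return (x @ A @ x) / (x @ x)

x = rng.standard_normal(5)
assert eigvals[0] <= q(x) <= eigvals[-1]          # λ_min <= q(x) <= λ_max
assert np.isclose(q(eigvecs[:, 0]), eigvals[0])   # minimum at eigenvector for λ_min
assert np.isclose(q(eigvecs[:, -1]), eigvals[-1]) # maximum at eigenvector for λ_max
```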
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
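A SymPy sketch (arbitrary example matrix) extracting the row-space basis from R:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])
R, pivots = A.rref()
print(R)   # Matrix([[1, 0, -1], [0, 1, 2], [0, 0, 0]])
# The r = 2 nonzero rows of R form a basis for the row space of A.
row_basis = [R.row(i) for i in range(len(pivots))]
```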
Schur complement S = D - C A^{-1} B.
Appears in block elimination on [A B; C D].
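A NumPy sketch of block elimination producing S (the blocks are random illustrations; this assumes the leading block A is invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # kept well-conditioned
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2))

M = np.block([[A, B], [C, D]])
S = D - C @ np.linalg.inv(A) @ B                  # Schur complement of A

# Eliminating the C block: [[I, 0], [-C A^{-1}, I]] @ M leaves S in the
# lower-right corner.
E = np.block([[np.eye(3), np.zeros((3, 2))],
              [-C @ np.linalg.inv(A), np.eye(2)]])
assert np.allclose((E @ M)[3:, 3:], S)
```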
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
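A sketch using SciPy's linprog on a toy LP (SciPy's default HiGHS solver uses a dual-simplex variant rather than the classic tableau method, so this illustrates the problem setup rather than the textbook algorithm):

```python
from scipy.optimize import linprog

# Minimize c^T x subject to A_eq x = b_eq and x >= 0.
c = [1, 2, 0]
A_eq = [[1, 1, 1]]
b_eq = [1]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x, res.fun)   # optimum x = (0, 0, 1), cost 0: a corner of the feasible set
```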
Singular Value Decomposition (SVD).
A = UΣV^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with Av_i = σ_i u_i and singular values σ_i > 0. The last columns are orthonormal bases of the nullspaces of A^T and A.
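A NumPy sketch checking Av_i = σ_i u_i (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
U, s, Vt = np.linalg.svd(A)        # A = U @ diag(s) @ Vt
for i, sigma in enumerate(s):
    v_i = Vt[i]                    # i-th right singular vector (row of Vt)
    u_i = U[:, i]                  # i-th left singular vector (column of U)
    assert np.allclose(A @ v_i, sigma * u_i)   # A v_i = σ_i u_i
```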
Skew-symmetric matrix K.
The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^{Kt} is an orthogonal matrix.
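A NumPy/SciPy sketch checking these properties on the simplest 2x2 example:

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
assert np.allclose(K.T, -K)              # skew-symmetric
print(np.linalg.eigvals(K))              # pure imaginary: +1j, -1j

Q = expm(K * 0.7)                        # e^{Kt} for t = 0.7 (a rotation)
assert np.allclose(Q.T @ Q, np.eye(2))   # orthogonal matrix
```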
Solvable system Ax = b.
The right side b is in the column space of A.
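A NumPy sketch (toy matrices) testing solvability via the equivalent rank criterion rank([A | b]) = rank(A):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # rank 1: the column space is a line
b_good = np.array([1.0, 2.0])        # in the column space (it is column 1 of A)
b_bad = np.array([1.0, 0.0])         # not in the column space

def solvable(A, b):
    # Ax = b is solvable iff appending b does not raise the rank.
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(solvable(A, b_good), solvable(A, b_bad))   # True False
```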
Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^{-1} is also symmetric.
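A quick NumPy check on a small example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert np.allclose(A, A.T)                                  # A is symmetric
assert np.allclose(np.linalg.inv(A), np.linalg.inv(A).T)    # so is A^{-1}
```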
Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms, ||A + B|| ≤ ||A|| + ||B||.
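A NumPy sketch checking both inequalities (random vectors and matrices; the spectral norm is used as one choice of matrix norm):

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.standard_normal(4), rng.standard_normal(4)
assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)

A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
# Spectral norm (largest singular value) as the matrix norm:
assert np.linalg.norm(A + B, 2) <= np.linalg.norm(A, 2) + np.linalg.norm(B, 2)
```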