Introduction to Linear Algebra 4th Edition - Solutions by Chapter

- Chapter 1.1: Vectors and Linear Combinations
- Chapter 1.2: Lengths and Dot Products
- Chapter 1.3: Matrices
- Chapter 2.1: Solving Linear Equations
- Chapter 2.2: The Idea of Elimination
- Chapter 2.3: Elimination Using Matrices
- Chapter 2.4: Rules for Matrix Operations
- Chapter 2.5: Inverse Matrices
- Chapter 2.6: Elimination = Factorization: A = LU
- Chapter 2.7: Transposes and Permutations
- Chapter 3.1: Spaces of Vectors
- Chapter 3.2: The Nullspace of A: Solving Ax = 0
- Chapter 3.3: The Rank and the Row Reduced Form
- Chapter 3.4: The Complete Solution to Ax = b
- Chapter 3.5: Independence, Basis and Dimension
- Chapter 3.6: Dimensions of the Four Subspaces
- Chapter 4.1: Orthogonality of the Four Subspaces
- Chapter 4.2: Projections
- Chapter 4.3: Least Squares Approximations
- Chapter 4.4: Orthogonal Bases and Gram-Schmidt
- Chapter 5.1: The Properties of Determinants
- Chapter 5.2: Permutations and Cofactors
- Chapter 5.3: Cramer's Rule, Inverses, and Volumes
- Chapter 6.1: Introduction to Eigenvalues
- Chapter 6.2: Diagonalizing a Matrix
- Chapter 6.3: Applications to Differential Equations
- Chapter 6.4: Symmetric Matrices
- Chapter 6.5: Positive Definite Matrices
- Chapter 6.6: Similar Matrices
- Chapter 6.7: Singular Value Decomposition (SVD)
- Chapter 7.1: The Idea of a Linear Transformation
- Chapter 7.2: The Matrix of a Linear Transformation
- Chapter 7.3: Diagonalization and the Pseudoinverse
- Chapter 8.1: Matrices in Engineering
- Chapter 8.2: Graphs and Networks
- Chapter 8.3: Markov Matrices, Population, and Economics
- Chapter 8.4: Linear Programming
- Chapter 8.5: Fourier Series: Linear Algebra for Functions
- Chapter 8.6: Linear Algebra for Statistics and Probability
- Chapter 8.7: Computer Graphics
- Chapter 9.1: Gaussian Elimination in Practice
- Chapter 9.2: Norms and Condition Numbers
- Chapter 9.3: Iterative Methods and Preconditioners
- Chapter 10.1: Complex Numbers
- Chapter 10.2: Hermitian and Unitary Matrices
- Chapter 10.3: The Fast Fourier Transform
Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
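A minimal NumPy sketch; the three-node graph and its edge list are invented for illustration:

```python
import numpy as np

# Directed graph on 3 nodes with edges 0->1, 1->2, 2->0 (0-based indexing).
edges = [(0, 1), (1, 2), (2, 0)]
n = 3
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1                         # a_ij = 1 when there is an edge from i to j

print(np.array_equal(A, A.T))           # False: edges only go one way

# Undirected version: include both directions, so A = A^T.
A_und = A | A.T
print(np.array_equal(A_und, A_und.T))   # True
```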
Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1 v1 + ... + cd vd; each basis gives unique c's. A vector space has many bases!
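For example, solving B c = v recovers the unique coefficients; the basis below is a made-up choice for R^2:

```python
import numpy as np

# Columns of B are a (hypothetical) basis v1, v2 of R^2.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([3.0, 2.0])

c = np.linalg.solve(B, v)   # unique coefficients with v = c1*v1 + c2*v2
print(c)                    # [1. 2.]
assert np.allclose(B @ c, v)
```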
Distributive Law.
A(B + C) = AB + AC. Add then multiply, or multiply then add.
Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra -ℓij in the i, j entry (i ≠ j). Then Eij A subtracts ℓij times row j of A from row i.
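A small NumPy check; the matrix and the multiplier ℓ21 = 3 are made up:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])

# E21 = identity with -l21 in entry (2,1), where l21 = a21/a11 = 3.
E21 = np.eye(2)
E21[1, 0] = -3.0

print(E21 @ A)   # [[2. 1.], [0. 5.]]: 3 times row 1 subtracted from row 2
```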
Fast Fourier Transform (FFT).
A factorization of the Fourier matrix Fn into ℓ = log2 n matrices Si times a permutation. Each Si needs only n/2 multiplications, so Fn x and Fn^-1 c can be computed with nℓ/2 multiplications. Revolutionary.
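A minimal recursive Cooley-Tukey sketch of the same divide-and-conquer idea (not the book's code), checked against NumPy's own FFT:

```python
import numpy as np

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return x
    even = fft(x[0::2])     # transform of even-indexed samples
    odd = fft(x[1::2])      # transform of odd-indexed samples
    # n/2 twiddle-factor multiplications per level, log2(n) levels in all.
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.rand(8)
assert np.allclose(fft(x.astype(complex)), np.fft.fft(x))
```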
Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n - 1)/2 edges between nodes. A tree has only n - 1 edges and no closed loops.
Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.
Inverse matrix A^-1.
Square matrix with A^-1 A = I and A A^-1 = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)_ij = C_ji / det A.
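The cofactor formula can be checked numerically; the 2 by 2 matrix below is arbitrary:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
n = A.shape[0]

# Cofactor C_ij = (-1)^(i+j) * det(A with row i and column j deleted).
C = np.empty_like(A)
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

# (A^-1)_ij = C_ji / det A, i.e. A^-1 = C.T / det A.
assert np.allclose(C.T / np.linalg.det(A), np.linalg.inv(A))
```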
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.
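A small sketch of the Current Law using the incidence matrix of a triangle graph; the edge orientations are an assumed choice:

```python
import numpy as np

# Incidence matrix: one row per edge (1->2, 2->3, 1->3),
# with -1 at the start node and +1 at the end node.
A = np.array([[-1,  1,  0],
              [ 0, -1,  1],
              [-1,  0,  1]])

# A current circulating around the loop 1->2->3->1 uses edge 3 backwards.
y = np.array([1, 1, -1])

# Current Law at every node: A^T y = 0 (in = out).
print(A.T @ y)   # [0 0 0]
```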
Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A - λI) if no eigenvalues are repeated; m(λ) always divides p(λ).
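A quick numeric illustration with two hypothetical 2 by 2 examples:

```python
import numpy as np

# For A = I, p(λ) = (λ - 1)^2 but m(λ) = λ - 1: already m(A) = A - I = 0.
A = np.eye(2)
print(A - np.eye(2))                       # zero matrix, degree 1 suffices

# For a Jordan block the repeated eigenvalue forces m = p:
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(J - np.eye(2))                       # (J - I) is NOT zero ...
print((J - np.eye(2)) @ (J - np.eye(2)))   # ... but (J - I)^2 = 0, so m(λ) = (λ - 1)^2
```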
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A - λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
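For instance, a 2 by 2 Jordan block has AM = 2 but GM = 1; a NumPy check, computing GM as n minus the rank of A - λI:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # λ = 1 is a double root, one eigenvector
lam = 1.0
n = A.shape[0]

am = np.sum(np.isclose(np.linalg.eigvals(A), lam))   # algebraic multiplicity
gm = n - np.linalg.matrix_rank(A - lam * np.eye(n))  # dim of nullspace of A - λI
print(am, gm)   # 2 1
```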
Outer product uv^T.
Column times row = rank one matrix.
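A short NumPy illustration with arbitrary u and v:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])

A = np.outer(u, v)               # 3 by 2: every column is a multiple of u,
                                 # every row is a multiple of v
print(np.linalg.matrix_rank(A))  # 1
```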
Particular solution xp.
Any solution to Ax = b; often xp has free variables = 0.
Pseudoinverse A^+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and A A^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).
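A NumPy sketch using numpy.linalg.pinv on an arbitrary rank-one matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])       # 3 by 2, rank 1

Aplus = np.linalg.pinv(A)        # n by m (here 2 by 3)

P_row = Aplus @ A                # projection onto the row space of A
P_col = A @ Aplus                # projection onto the column space of A
assert np.allclose(P_row @ P_row, P_row)   # projections are idempotent
assert np.allclose(P_col @ P_col, P_col)
print(np.linalg.matrix_rank(Aplus))        # rank(A^+) = rank(A) = 1
```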
Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal distribution for randn.
Rank one matrix A = uv^T ≠ 0.
Column and row spaces = lines cu and cv.
Schwarz inequality.
|v · w| ≤ ||v|| ||w||. Then |v^T A w|^2 ≤ (v^T A v)(w^T A w) for positive definite A.
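A numeric spot-check with random vectors; the shift making A positive definite is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
v, w = rng.standard_normal(3), rng.standard_normal(3)

# |v . w| <= ||v|| ||w||
assert abs(v @ w) <= np.linalg.norm(v) * np.linalg.norm(w)

# Weighted version for a positive definite A (R^T R plus a shift):
R = rng.standard_normal((3, 3))
A = R.T @ R + 3 * np.eye(3)
assert (v @ A @ w) ** 2 <= (v @ A @ v) * (w @ A @ w)
```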
Semidefinite matrix A.
(Positive) semidefinite: all x^T A x ≥ 0, all λ ≥ 0; A = any R^T R.
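A small check that any R^T R is positive semidefinite; R here is deliberately singular:

```python
import numpy as np

R = np.array([[1.0, 2.0],
              [0.0, 0.0]])   # singular on purpose
A = R.T @ R                   # any R^T R is positive semidefinite

print(np.linalg.eigvalsh(A))  # all eigenvalues >= 0 (one is exactly 0 here)
x = np.array([3.0, -1.0])
print(x @ A @ x)              # x^T A x = ||Rx||^2 >= 0 for every x
```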
Singular matrix A.
A square matrix that has no inverse: det(A) = 0.
Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.
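A hand-rolled 2 by 2 illustration; the matrix is arbitrary, and the single elimination step is written out rather than calling a library LDL routine:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, -2.0]])   # symmetric and indefinite

# One step of symmetric elimination gives A = L D L^T:
l21 = A[1, 0] / A[0, 0]
L = np.array([[1.0, 0.0],
              [l21, 1.0]])
D = np.diag([A[0, 0], A[1, 1] - l21 * A[0, 1]])   # pivots 4 and -2.25
assert np.allclose(L @ D @ L.T, A)

print(np.sort(np.sign(np.diag(D))))    # [-1.  1.]: one negative, one positive pivot
print(np.sign(np.linalg.eigvalsh(A)))  # [-1.  1.]: matching signs in Λ
```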