- Chapter 1.1: Vectors and Linear Combinations
- Chapter 1.2: Lengths and Dot Products
- Chapter 1.3: Matrices
- Chapter 2.1: Solving Linear Equations
- Chapter 2.2: The Idea of Elimination
- Chapter 2.3: Elimination Using Matrices
- Chapter 2.4: Rules for Matrix Operations
- Chapter 2.5: Inverse Matrices
- Chapter 2.6: Elimination = Factorization: A = L U
- Chapter 2.7: Transposes and Permutations
- Chapter 3.1: Spaces of Vectors
- Chapter 3.2: The Nullspace of A: Solving Ax = 0
- Chapter 3.3: The Rank and the Row Reduced Form
- Chapter 3.4: The Complete Solution to Ax = b
- Chapter 3.5: Independence, Basis and Dimension
- Chapter 3.6: Dimensions of the Four Subspaces
- Chapter 4.1: Orthogonality of the Four Subspaces
- Chapter 4.2: Projections
- Chapter 4.3: Least Squares Approximations
- Chapter 4.4: Orthogonal Bases and Gram-Schmidt
- Chapter 5.1: The Properties of Determinants
- Chapter 5.2: Permutations and Cofactors
- Chapter 5.3: Cramer's Rule, Inverses, and Volumes
- Chapter 6.1: Introduction to Eigenvalues
- Chapter 6.2: Diagonalizing a Matrix
- Chapter 6.3: Applications to Differential Equations
- Chapter 6.4: Symmetric Matrices
- Chapter 6.5: Positive Definite Matrices
- Chapter 6.6: Similar Matrices
- Chapter 6.7: Singular Value Decomposition (SVD)
- Chapter 7.1: The Idea of a Linear Transformation
- Chapter 7.2: The Matrix of a Linear Transformation
- Chapter 7.3: Diagonalization and the Pseudoinverse
- Chapter 8.1: Matrices in Engineering
- Chapter 8.2: Graphs and Networks
- Chapter 8.3: Markov Matrices, Population, and Economics
- Chapter 8.4: Linear Programming
- Chapter 8.5: Fourier Series: Linear Algebra for Functions
- Chapter 8.6: Linear Algebra for Statistics and Probability
- Chapter 8.7: Computer Graphics
- Chapter 9.1: Gaussian Elimination in Practice
- Chapter 9.2: Norms and Condition Numbers
- Chapter 9.3: Iterative Methods and Preconditioners
- Chapter 10.1: Complex Numbers
- Chapter 10.2: Hermitian and Unitary Matrices
- Chapter 10.3: The Fast Fourier Transform
Introduction to Linear Algebra 4th Edition - Solutions by Chapter
Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
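A quick NumPy sketch of this rank test; the rank-1 matrix and the two right sides are assumed examples, not from the text:

```python
# Ax = b is solvable exactly when rank([A b]) == rank(A),
# i.e. when b lies in the column space of A.
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # rank 1
b_in = A @ np.array([1.0, 1.0])        # in C(A): solvable
b_out = np.array([1.0, 0.0, 0.0])      # not in C(A): no solution

for b in (b_in, b_out):
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    print("solvable" if rank_Ab == rank_A else "not solvable")
```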
Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1 v1 + ... + cd vd. A vector space has many bases; each basis gives unique c's.
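A short sketch of the uniqueness of the c's, using an assumed 2x2 basis:

```python
# With the basis vectors as the columns of B, the coefficients c
# in v = c1 v1 + c2 v2 solve Bc = v, and the solution is unique.
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])             # columns = a basis of R^2
v = np.array([3.0, 2.0])
c = np.linalg.solve(B, v)              # unique coefficients
print(c, np.allclose(B @ c, v))        # [1. 2.] True
```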
Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c0 I + c1 S + ... + c_(n-1) S^(n-1). Cx = cyclic convolution c * x. Eigenvectors are the columns of the Fourier matrix F.
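A minimal check of Cx = c * x and of the Fourier diagonalization, assuming SciPy is available; the vectors c and x are made up:

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([1.0, 2.0, 0.0, -1.0])
x = np.array([4.0, 3.0, 2.0, 1.0])
C = circulant(c)                       # constant diagonals, wrapped around

# Cyclic convolution computed through the FFT
cyclic = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
print(np.allclose(C @ x, cyclic))      # True: Cx = c * x

# The eigenvalues of C are the DFT of c (eigenvectors = columns of F)
print(np.allclose(np.sort_complex(np.linalg.eigvals(C)),
                  np.sort_complex(np.fft.fft(c))))
```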
Cofactor Cij.
Remove row i and column j; multiply the determinant by (-1)^(i+j).
Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.
Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicularity comes from Ax = 0), with dimensions n - r and r. Applied to A^T: the column space C(A) is the orthogonal complement of N(A^T) in R^m.
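A numerical sketch of this orthogonality, assuming SciPy for the nullspace basis; the rank-1 matrix is an arbitrary example:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank r = 1, so dim N(A) = n - r = 2
N = null_space(A)                      # orthonormal basis of N(A), shape (3, 2)
print(np.allclose(A @ N, 0))           # True: every row of A is ⊥ to N(A)
```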
Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.
Hilbert matrix hilb(n).
Entries h_ij = 1/(i + j - 1) = ∫₀¹ x^(i-1) x^(j-1) dx. Positive definite but with extremely small λ_min and a large condition number: H is ill-conditioned.
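A short demonstration of the ill-conditioning, assuming SciPy's hilbert helper:

```python
import numpy as np
from scipy.linalg import hilbert

for n in (4, 8, 12):
    H = hilbert(n)                     # h_ij = 1/(i + j - 1)
    print(n, np.linalg.eigvalsh(H).min(), np.linalg.cond(H))
# lambda_min sinks toward 0 and the condition number explodes as n grows
```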
Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady-state eigenvector s with Ms = s > 0.
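A sketch of the steady state with an assumed 2x2 Markov matrix:

```python
import numpy as np

M = np.array([[0.8, 0.3],
              [0.2, 0.7]])             # columns sum to 1, all entries > 0
print(np.linalg.matrix_power(M, 50))   # both columns ≈ steady state s

w, V = np.linalg.eig(M)
s = V[:, np.argmax(w.real)]            # eigenvector for lambda = 1
print(s / s.sum())                     # normalized: Ms = s, entries sum to 1
```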
Multiplier ℓij.
The pivot row j is multiplied by ℓij and subtracted from row i to eliminate the i, j entry: ℓij = (entry to eliminate) / (j-th pivot).
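One elimination step with this multiplier, on an assumed 2x2 example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])
l21 = A[1, 0] / A[0, 0]        # entry to eliminate / first pivot = 3.0
A[1] -= l21 * A[0]             # subtract l21 times row 1 from row 2
print(l21, A)                  # A is now upper triangular U
```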
Network.
A directed graph that has constants c1, ..., cm associated with the edges.
Outer product uv^T.
Column times row = rank-one matrix.
Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
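A sketch of computing Q and H from the SVD (scipy.linalg.polar also does this directly); the matrix is an arbitrary example:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Q = U @ Vt                             # orthonormal columns
H = Vt.T @ np.diag(s) @ Vt             # symmetric positive semidefinite
print(np.allclose(A, Q @ H))           # True: A = QH
```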
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S, then P = A (A^T A)^(-1) A^T.
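A quick check of these properties, with an assumed basis for a plane S in R^3:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])             # columns = basis for S
P = A @ np.linalg.inv(A.T @ A) @ A.T
b = np.array([6.0, 0.0, 0.0])
e = b - P @ b                          # error vector
print(np.allclose(P @ P, P), np.allclose(P, P.T))   # True True
print(np.allclose(A.T @ e, 0))         # True: e is perpendicular to S
```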
Singular Value Decomposition (SVD).
A = U Σ V^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with A v_i = σ_i u_i and singular value σ_i > 0. The last columns are orthonormal bases of the nullspaces of A^T and A.
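A small check of A v_i = σ_i u_i and of the nullspace basis, on an assumed rank-1 example:

```python
import numpy as np

A = np.outer([1.0, 2.0], [3.0, 0.0, 4.0])   # rank r = 1, shape 2x3
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))                   # numerical rank
for i in range(r):
    print(np.allclose(A @ Vt[i], s[i] * U[:, i]))   # True: A v_i = sigma_i u_i
print(np.allclose(A @ Vt[r:].T, 0))          # True: remaining v's span N(A)
```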
Spanning set v1, ..., vm for V.
Combinations of v1, ..., vm fill the space. The columns of A span C(A)!
Vector addition v + w.
v + w = (v1 + w1, ..., vn + wn) = diagonal of parallelogram.
Vector v in R^n.
Sequence of n real numbers v = (v1, ..., vn) = point in R^n.
Volume of box.
The rows (or the columns) of A generate a box with volume |det(A)|.
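A one-line numerical illustration with an assumed 3x3 box:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, -1.0, 1.0]])       # orthogonal rows: edges of the box
print(abs(np.linalg.det(A)))           # 4.0 = 2 * sqrt(2) * sqrt(2)
```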