- Chapter 1: Systems of Linear Equations
- Chapter 1-3: Cumulative Test
- Chapter 1.1: Introduction to Systems of Linear Equations
- Chapter 1.2: Gaussian Elimination and Gauss-Jordan Elimination
- Chapter 1.3: Applications of Systems of Linear Equations
- Chapter 2: Matrices
- Chapter 2.1: Operations with Matrices
- Chapter 2.2: Properties of Matrix Operations
- Chapter 2.3: The Inverse of a Matrix
- Chapter 2.4: Elementary Matrices
- Chapter 2.5: Markov Chains
- Chapter 2.6: More Applications of Matrix Operations
- Chapter 3: Determinants
- Chapter 3.1: The Determinant of a Matrix
- Chapter 3.2: Determinants and Elementary Operations
- Chapter 3.3: Properties of Determinants
- Chapter 3.4: Applications of Determinants
- Chapter 4: Vector Spaces
- Chapter 4-5: Cumulative Test
- Chapter 4.1: Vectors in R^n
- Chapter 4.2: Vector Spaces
- Chapter 4.3: Subspaces of Vector Spaces
- Chapter 4.4: Spanning Sets and Linear Independence
- Chapter 4.5: Basis and Dimension
- Chapter 4.6: Rank of a Matrix and Systems of Linear Equations
- Chapter 4.7: Coordinates and Change of Basis
- Chapter 4.8: Applications of Vector Spaces
- Chapter 5: Inner Product Spaces
- Chapter 5.1: Length and Dot Product in R^n
- Chapter 5.2: Inner Product Spaces
- Chapter 5.3: Orthonormal Bases: Gram-Schmidt Process
- Chapter 5.4: Mathematical Models and Least Squares Analysis
- Chapter 5.5: Applications of Inner Product Spaces
- Chapter 6: Linear Transformations
- Chapter 6-7: Cumulative Test
- Chapter 6.1: Introduction to Linear Transformations
- Chapter 6.2: The Kernel and Range of a Linear Transformation
- Chapter 6.3: Matrices for Linear Transformations
- Chapter 6.4: Transition Matrices and Similarity
- Chapter 6.5: Applications of Linear Transformations
- Chapter 7: Eigenvalues and Eigenvectors
- Chapter 7.1: Eigenvalues and Eigenvectors
- Chapter 7.2: Diagonalization
- Chapter 7.3: Symmetric Matrices and Orthogonal Diagonalization
- Chapter 7.4: Applications of Eigenvalues and Eigenvectors
Elementary Linear Algebra 8th Edition - Solutions by Chapter
Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
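A minimal NumPy sketch of this definition; the 3-node graph and its edge list are invented for illustration:

```python
import numpy as np

# Hypothetical directed graph on 3 nodes: edges 0->1, 1->0, 1->2
A = np.zeros((3, 3), dtype=int)
for i, j in [(0, 1), (1, 0), (1, 2)]:
    A[i, j] = 1  # a_ij = 1 when there is an edge from node i to node j

# A = A^T only when every edge goes both ways; edge 1->2 has no 2->1 partner
undirected = np.array_equal(A, A.T)
```

Adding the reverse edge (2, 1) would make `A` symmetric and the graph undirected.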
Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
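The rank test for solvability can be checked numerically. The rank-deficient matrix and the two right-hand sides below are made-up examples:

```python
import numpy as np

A = np.array([[1., 2.], [2., 4.]])   # rank 1: second row is twice the first
b_good = np.array([3., 6.])          # in the column space of A
b_bad = np.array([3., 5.])           # not in the column space of A

def solvable(A, b):
    # Ax = b is solvable exactly when rank([A b]) == rank(A)
    aug = np.column_stack([A, b])
    return np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(A)
```

For `b_bad`, appending the column raises the rank, so the system is inconsistent.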
Characteristic equation det(A - λI) = 0.
The n roots are the eigenvalues of A.
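For a 2 by 2 matrix the characteristic polynomial is λ² - trace(A)λ + det(A), so the roots can be compared against a library eigenvalue routine. The matrix here is a toy example:

```python
import numpy as np

A = np.array([[2., 1.], [1., 2.]])

# Characteristic polynomial of a 2x2 matrix: λ^2 - trace(A)λ + det(A)
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
roots = sorted(np.roots(coeffs).real)       # roots of det(A - λI) = 0
eigs = sorted(np.linalg.eigvals(A).real)    # eigenvalues computed directly
```

Both routes give the same two eigenvalues, 1 and 3.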
Complex conjugate z̄ = a - ib.
The conjugate of any complex number z = a + ib. Then z z̄ = |z|².
Determinant |A| = det(A).
Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A||B| and |A^T| = |A|.
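The three defining rules and the product rule can be spot-checked on random matrices; the 4 by 4 size and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det I = 1
identity_rule = np.isclose(np.linalg.det(np.eye(4)), 1.0)

# Row exchange reverses the sign
A_swapped = A[[1, 0, 2, 3], :]
sign_rule = np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A))

# Product rule |AB| = |A||B|
prod_rule = np.isclose(np.linalg.det(A @ B),
                       np.linalg.det(A) * np.linalg.det(B))
```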
Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A - λI) = 0.
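A direct check that each computed pair satisfies Ax = λx; the 2 by 2 matrix is an invented example with eigenvalues 2 and 5:

```python
import numpy as np

A = np.array([[4., 1.], [2., 3.]])
lams, vecs = np.linalg.eig(A)       # eigenvalues and eigenvectors

# Verify Ax = λx for the first pair
lam, x = lams[0], vecs[:, 0]
residual = np.linalg.norm(A @ x - lam * x)
```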
Fast Fourier Transform (FFT).
A factorization of the Fourier matrix F_n into ℓ = log₂ n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^-1 c can be computed with nℓ/2 multiplications. Revolutionary.
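The speedup can be seen by comparing the FFT against direct multiplication by the Fourier matrix, which costs n² multiplications. The sketch below assumes NumPy's sign convention F_jk = e^(-2πi jk/n) and an arbitrary test vector:

```python
import numpy as np

n = 8
x = np.arange(n, dtype=float)

# Direct DFT: build the n x n Fourier matrix and multiply (n^2 work)
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n)
direct = F @ x

# Fast Fourier Transform: same result in O(n log n) work
fast = np.fft.fft(x)
agree = np.allclose(direct, fast)
```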
Free variable x_i.
Column i has no pivot in elimination. We can give the n - r free variables any values, then Ax = b determines the r pivot variables (if solvable!).
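A one-equation example of this freedom (the numbers are made up): with one pivot and one free column, any value of the free variable determines the pivot variable.

```python
import numpy as np

# One equation, two unknowns: x1 + 2*x2 = 3. Column 2 has no pivot.
A = np.array([[1., 2.]])
b = np.array([3.])

x2 = 5.0                 # give the free variable any value...
x1 = b[0] - 2.0 * x2     # ...then back-substitution fixes the pivot variable
x = np.array([x1, x2])
```

Every choice of `x2` gives another solution of Ax = b.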
Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
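A sketch using NumPy's QR routine; note NumPy does not enforce the diag(R) > 0 convention, so the signs are fixed by hand. The 3 by 2 matrix is an arbitrary full-rank example:

```python
import numpy as np

A = np.array([[1., 1.],
              [1., 0.],
              [0., 1.]])
Q, R = np.linalg.qr(A)          # reduced QR: Q is 3x2, R is 2x2

# Flip signs so that diag(R) > 0 (the convention in the text)
signs = np.sign(np.diag(R))
Q, R = Q * signs, signs[:, None] * R

factorization_ok = np.allclose(Q @ R, A)
orthonormal = np.allclose(Q.T @ Q, np.eye(2))
upper_triangular = np.allclose(R, np.triu(R))
```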
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j - 1) = ∫₀¹ x^(i-1) x^(j-1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
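`hilb(n)` is MATLAB notation; in NumPy the matrix can be built from the entry formula. The sketch below checks positive definiteness and the huge condition number for n = 8:

```python
import numpy as np

n = 8
i, j = np.indices((n, n))        # 0-based indices
H = 1.0 / (i + j + 1)            # H_ij = 1/(i + j - 1) with 1-based i, j

eigs = np.linalg.eigvalsh(H)     # H is symmetric, so use eigvalsh
positive_definite = eigs.min() > 0
cond = eigs.max() / eigs.min()   # enormous: H is ill-conditioned
```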
Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
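A quick check on one such triangular example (the entries above the diagonal are arbitrary):

```python
import numpy as np

# Strictly upper triangular, so nilpotent: N^3 = 0 for this 3x3 example
N = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])

N_cubed = np.linalg.matrix_power(N, 3)
eigs = np.linalg.eigvals(N)      # all eigenvalues are λ = 0
```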
Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^-1. Preserves lengths and angles: ||Qx|| = ||x|| and (Qx)^T(Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
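All three properties can be verified for a rotation matrix; the angle and test vector are arbitrary:

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta

x = np.array([3., 4.])
inverse_is_transpose = np.allclose(Q.T @ Q, np.eye(2))        # Q^T = Q^-1
length_preserved = np.isclose(np.linalg.norm(Q @ x),
                              np.linalg.norm(x))              # ||Qx|| = ||x||
unit_eigenvalues = np.allclose(np.abs(np.linalg.eigvals(Q)),
                               1.0)                           # all |λ| = 1
```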
Particular solution x_p.
Any solution to Ax = b; often x_p has free variables = 0.
Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
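One standard way to compute these factors is via the SVD: if A = UΣV^T, then Q = UV^T and H = VΣV^T. The 2 by 2 matrix below is an invented example:

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 1.]])

U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                        # orthogonal factor
H = Vt.T @ np.diag(s) @ Vt        # positive (semi)definite factor

factorization_ok = np.allclose(Q @ H, A)
Q_orthogonal = np.allclose(Q.T @ Q, np.eye(2))
H_psd = np.all(np.linalg.eigvalsh(H) >= -1e-12)
```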
Pseudoinverse A+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+A and AA^+ are the projection matrices onto the row space and column space. Rank(A^+) = rank(A).
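The projection and rank properties can be checked with NumPy's `pinv`; the rank-1 rectangular matrix is a toy example:

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 0.],
              [0., 0.]])           # 3x2, rank 1
Aplus = np.linalg.pinv(A)          # 2x3 Moore-Penrose pseudoinverse

P_row = Aplus @ A                  # projects onto the row space of A
P_col = A @ Aplus                  # projects onto the column space of A

# Projection matrices are idempotent: P^2 = P
projections_ok = (np.allclose(P_row @ P_row, P_row)
                  and np.allclose(P_col @ P_col, P_col))
same_rank = np.linalg.matrix_rank(Aplus) == np.linalg.matrix_rank(A)
```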
Row picture of Ax = b.
Each equation gives a plane in Rn; the planes intersect at x.
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
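The claim that the minimum occurs at a corner can be checked by brute force on a tiny invented LP: enumerate the basic feasible solutions (corners) and take the cheapest. This is not the simplex edge walk itself, only a verification of where the optimum sits:

```python
import itertools
import numpy as np

# Toy LP: minimize x1 + 2*x2 + 3*x3  subject to  x1 + x2 + x3 = 1, x >= 0
c = np.array([1., 2., 3.])
A = np.array([[1., 1., 1.]])
b = np.array([1.])

m, n = A.shape
corners = []
# A corner (basic feasible solution) sets n - m variables to zero
for cols in itertools.combinations(range(n), m):
    B = A[:, cols]
    if np.linalg.matrix_rank(B) < m:
        continue                      # columns dependent: no corner here
    xB = np.linalg.solve(B, b)
    if (xB >= 0).all():               # feasible only if x >= 0
        x = np.zeros(n)
        x[list(cols)] = xB
        corners.append(x)

best = min(corners, key=lambda x: c @ x)   # minimum cost at a corner
```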
Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^-1 is also symmetric.
Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
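A Toeplitz matrix is determined by its first column and first row, since the entry depends only on i - j. A small sketch with made-up filter coefficients (SciPy's `toeplitz` would do the same construction):

```python
import numpy as np

c = np.array([2., 1., 0., 0.])   # first column
r = np.array([2., 3., 0., 0.])   # first row; r[0] must equal c[0]

i, j = np.indices((4, 4))
# Entry T[i, j] depends only on i - j: below the diagonal from c, above from r
T = np.where(i >= j, c[i - j], r[j - i])

constant_diagonals = all(len(set(np.diag(T, k))) == 1 for k in range(-3, 4))
```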