- Chapter 1: Systems of Linear Equations
- Chapter 1-3: Cumulative Test
- Chapter 1.1: Introduction to Systems of Linear Equations
- Chapter 1.2: Gaussian Elimination and Gauss-Jordan Elimination
- Chapter 1.3: Applications of Systems of Linear Equations
- Chapter 2: Matrices
- Chapter 2.1: Operations with Matrices
- Chapter 2.2: Properties of Matrix Operations
- Chapter 2.3: The Inverse of a Matrix
- Chapter 2.4: Elementary Matrices
- Chapter 2.5: Markov Chains
- Chapter 2.6: More Applications of Matrix Operations
- Chapter 3: Determinants
- Chapter 3.1: The Determinant of a Matrix
- Chapter 3.2: Determinants and Elementary Operations
- Chapter 3.3: Properties of Determinants
- Chapter 3.4: Applications of Determinants
- Chapter 4: Vector Spaces
- Chapter 4-5: Cumulative Test
- Chapter 4.1: Vectors in R^n
- Chapter 4.2: Vector Spaces
- Chapter 4.3: Subspaces of Vector Spaces
- Chapter 4.4: Spanning Sets and Linear Independence
- Chapter 4.5: Basis and Dimension
- Chapter 4.6: Rank of a Matrix and Systems of Linear Equations
- Chapter 4.7: Coordinates and Change of Basis
- Chapter 4.8: Applications of Vector Spaces
- Chapter 5: Inner Product Spaces
- Chapter 5.1: Length and Dot Product in R^n
- Chapter 5.2: Inner Product Spaces
- Chapter 5.3: Orthonormal Bases: Gram-Schmidt Process
- Chapter 5.4: Mathematical Models and Least Squares Analysis
- Chapter 5.5: Applications of Inner Product Spaces
- Chapter 6: Linear Transformations
- Chapter 6-7: Cumulative Test
- Chapter 6.1: Introduction to Linear Transformations
- Chapter 6.2: The Kernel and Range of a Linear Transformation
- Chapter 6.3: Matrices for Linear Transformations
- Chapter 6.4: Transition Matrices and Similarity
- Chapter 6.5: Applications of Linear Transformations
- Chapter 7: Eigenvalues and Eigenvectors
- Chapter 7.1: Eigenvalues and Eigenvectors
- Chapter 7.2: Diagonalization
- Chapter 7.3: Symmetric Matrices and Orthogonal Diagonalization
- Chapter 7.4: Applications of Eigenvalues and Eigenvectors
Elementary Linear Algebra 8th Edition - Solutions by Chapter
Big formula for n by n determinants.
Det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A, rows in order 1, ..., n and column order given by a permutation P. Each of the n! P's has a + or - sign.
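A minimal Python/numpy sketch of this permutation sum (the function name is my own); it matches np.linalg.det on a small example:

```python
import itertools
import numpy as np

def det_big_formula(A):
    """Determinant via the big formula: one signed product per permutation."""
    n = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # Sign of the permutation: +1 if even, -1 if odd (count inversions).
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        # One entry from each row; columns chosen by the permutation.
        total += sign * np.prod([A[i, perm[i]] for i in range(n)])
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_big_formula(A), np.linalg.det(A))   # both ~8.0
```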
Cayley-Hamilton theorem.
p(λ) = det(A - λI) has p(A) = zero matrix.
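A quick numpy check of this statement for a 2 by 2 matrix, using the fact that p(λ) = λ^2 - trace(A)λ + det(A) in that case:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
# Characteristic polynomial of a 2x2 matrix: p(λ) = λ^2 - trace(A)·λ + det(A).
trace, det = np.trace(A), np.linalg.det(A)
pA = A @ A - trace * A + det * np.eye(2)
print(pA)   # zero matrix, as Cayley-Hamilton predicts
```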
Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.
Condition number cond(A) = c(A) = ||A|| ||A^-1|| = σ_max/σ_min.
In Ax = b, the relative change ||δx|| / ||x|| is less than cond(A) times the relative change ||δb|| / ||b||. Condition numbers measure the sensitivity of the output to changes in the input.
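An illustrative numpy example (the matrix and perturbation are my own choices): the amplification of a small change in b stays below cond(A):

```python
import numpy as np

# An ill-conditioned 2x2 system: nearly parallel rows.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)

db = np.array([0.0, 1e-4])             # small perturbation of b
dx = np.linalg.solve(A, b + db) - x    # resulting change in x

rel_out = np.linalg.norm(dx) / np.linalg.norm(x)
rel_in = np.linalg.norm(db) / np.linalg.norm(b)
print(np.linalg.cond(A))    # sigma_max / sigma_min, about 4e4
print(rel_out / rel_in)     # amplification, bounded by cond(A)
```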
Covariance matrix Σ.
When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x - x̄)(x - x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.
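A short numpy sketch under assumed sample data: estimating Σ as the average of (x - x̄)(x - x̄)^T over random samples:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))     # 1000 samples of 3 independent variables
Xc = X - X.mean(axis=0)            # subtract the sample means x_bar
Sigma = (Xc.T @ Xc) / len(X)       # average of (x - x_bar)(x - x_bar)^T
print(np.round(Sigma, 2))          # near-diagonal, since the columns are independent
print(np.linalg.eigvalsh(Sigma))   # all >= 0 (up to rounding): positive semidefinite
```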
Diagonalization Λ = S^-1 A S.
Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^-1.
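A numpy check of the factorization on a small matrix with distinct eigenvalues (chosen for illustration):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, S = np.linalg.eig(A)     # S holds the eigenvectors as columns
Lam = np.diag(eigvals)            # Λ = eigenvalue matrix
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))   # A = S Λ S^-1
print(np.allclose(np.linalg.matrix_power(A, 5),
                  S @ np.linalg.matrix_power(Lam, 5) @ np.linalg.inv(S)))  # A^5
```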
Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^-1].
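A bare-bones sketch of this row reduction (function name my own; partial pivoting added for numerical safety, which the definition itself doesn't require):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A | I] to [I | A^-1]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # Swap up the row with the largest pivot, then scale the pivot to 1.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]
        # Eliminate the pivot column from every other row.
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(gauss_jordan_inverse(A))    # [[3, -1], [-5, 2]]
print(np.linalg.inv(A))           # same
```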
Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n - 1)/2 edges between nodes. A tree has only n - 1 edges and no closed loops.
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and +1 in columns i and j.
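A small sketch building this matrix for an assumed 4-node, 5-edge directed graph:

```python
import numpy as np

# Directed graph on n = 4 nodes with m = 5 edges, given as (from, to) pairs.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
n, m = 4, len(edges)

A = np.zeros((m, n))          # m by n edge-node incidence matrix
for row, (i, j) in enumerate(edges):
    A[row, i] = -1            # edge leaves node i
    A[row, j] = +1            # edge enters node j
print(A)
print(A.sum(axis=1))          # each row sums to 0: one -1 and one +1
```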
Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^(j-1) b. Numerical methods approximate A^-1 b by x_j with residual b - A x_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
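An Arnoldi-style sketch (function name my own) that grows an orthonormal basis for K_j using one multiplication by A per step; it assumes the subspace reaches full dimension j (no breakdown):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Orthonormal basis for K_j(A, b) = span{b, Ab, ..., A^(j-1) b}."""
    Q = [b / np.linalg.norm(b)]
    for _ in range(j - 1):
        v = A @ Q[-1]                 # one matrix-vector product per step
        for q in Q:                   # Gram-Schmidt against earlier vectors
            v -= (q @ v) * q
        Q.append(v / np.linalg.norm(v))
    return np.column_stack(Q)

A = np.diag([1.0, 2.0, 3.0, 4.0]) + np.eye(4, k=1)
b = np.ones(4)
Q = krylov_basis(A, b, 3)
print(np.allclose(Q.T @ Q, np.eye(3)))    # orthonormal columns
```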
Normal equation A^T A x̂ = A^T b.
Gives the least squares solution x̂ to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b - A x̂) = 0.
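A worked numpy example fitting a line to four assumed data points; the residual b - Ax̂ comes out orthogonal to the columns of A:

```python
import numpy as np

# Overdetermined Ax = b: fit a line c + d*t through four points.
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 2.0, 4.0])
A = np.column_stack([np.ones_like(t), t])    # full rank n = 2

x_hat = np.linalg.solve(A.T @ A, A.T @ b)    # normal equation A^T A x̂ = A^T b
print(x_hat)                                 # [0.3, 1.3]
print(A.T @ (b - A @ x_hat))                 # ~0: columns of A ⟂ residual
print(np.linalg.lstsq(A, b, rcond=None)[0])  # same answer
```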
Projection p = a(a^T b / a^T a) onto the line through a.
P = a a^T / a^T a has rank 1.
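A small numpy illustration with vectors chosen for convenience:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 3.0])

p = a * (a @ b) / (a @ a)          # projection of b onto the line through a
P = np.outer(a, a) / (a @ a)       # projection matrix
print(p, P @ b)                    # same vector
print(np.linalg.matrix_rank(P))    # rank 1
print(np.allclose(P @ P, P))       # projecting twice changes nothing
```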
Rank r(A) = number of pivots = dimension of column space = dimension of row space.
Reflection matrix (Householder) Q = I - 2uu^T.
Unit vector u is reflected to Qu = -u. All x in the plane mirror u^T x = 0 have Qx = x. Notice Q^T = Q^-1 = Q.
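A numpy check of these three properties for an assumed unit vector u:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
u = u / np.linalg.norm(u)               # unit vector
Q = np.eye(3) - 2 * np.outer(u, u)      # Householder reflection

print(Q @ u)                            # -u: u is reflected through the mirror
x = np.array([2.0, 1.0, -2.0])          # u^T x = 0, so x lies in the mirror plane
print(Q @ x)                            # x is unchanged
print(np.allclose(Q.T @ Q, np.eye(3)), np.allclose(Q, Q.T))   # Q^T = Q^-1 = Q
```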
Schur complement S = D - CA^-1 B.
Appears in block elimination on [A B; C D].
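A numpy sketch with assumed blocks; the identity det(M) = det(A) det(S) is one way block elimination exposes S:

```python
import numpy as np

# Block matrix M = [A B; C D] with invertible A.
A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[3.0]])

S = D - C @ np.linalg.inv(A) @ B      # Schur complement of A: [[2.0]]
M = np.block([[A, B], [C, D]])
print(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(S))   # both 8.0
```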
Skew-symmetric matrix K.
The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^{Kt} is an orthogonal matrix.
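A numpy/scipy check on an assumed 3 by 3 example (scipy.linalg.expm computes the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[ 0.0,  2.0, -1.0],
              [-2.0,  0.0,  3.0],
              [ 1.0, -3.0,  0.0]])
print(np.allclose(K.T, -K))              # transpose is -K
print(np.linalg.eigvals(K))              # pure imaginary (up to rounding)

Q = expm(0.5 * K)                        # e^{Kt} at t = 0.5
print(np.allclose(Q.T @ Q, np.eye(3)))   # orthogonal matrix
```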
Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms, ||A + B|| ≤ ||A|| + ||B||.
Vector addition v + w = (v_1 + w_1, ..., v_n + w_n) = diagonal of parallelogram.
Vector v in R^n.
Sequence of n real numbers v = (v_1, ..., v_n) = point in R^n.
Volume of box.
The rows (or the columns) of A generate a box with volume |det(A)|.
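A one-line numpy check on an assumed 3 by 3 box:

```python
import numpy as np

# Edges of the box are the rows of A.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
print(abs(np.linalg.det(A)))   # volume 6; rows and columns give the same volume
```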