Introduction to Linear Algebra 4th Edition - Solutions by Chapter
- Chapter 1.1: Vectors and Linear Combinations
- Chapter 1.2: Lengths and Dot Products
- Chapter 1.3: Matrices
- Chapter 2.1: Solving Linear Equations
- Chapter 2.2: The Idea of Elimination
- Chapter 2.3: Elimination Using Matrices
- Chapter 2.4: Rules for Matrix Operations
- Chapter 2.5: Inverse Matrices
- Chapter 2.6: Elimination = Factorization: A = LU
- Chapter 2.7: Transposes and Permutations
- Chapter 3.1: Spaces of Vectors
- Chapter 3.2: The Nullspace of A: Solving Ax = 0
- Chapter 3.3: The Rank and the Row Reduced Form
- Chapter 3.4: The Complete Solution to Ax = b
- Chapter 3.5: Independence, Basis and Dimension
- Chapter 3.6: Dimensions of the Four Subspaces
- Chapter 4.1: Orthogonality of the Four Subspaces
- Chapter 4.2: Projections
- Chapter 4.3: Least Squares Approximations
- Chapter 4.4: Orthogonal Bases and Gram-Schmidt
- Chapter 5.1: The Properties of Determinants
- Chapter 5.2: Permutations and Cofactors
- Chapter 5.3: Cramer's Rule, Inverses, and Volumes
- Chapter 6.1: Introduction to Eigenvalues
- Chapter 6.2: Diagonalizing a Matrix
- Chapter 6.3: Applications to Differential Equations
- Chapter 6.4: Symmetric Matrices
- Chapter 6.5: Positive Definite Matrices
- Chapter 6.6: Similar Matrices
- Chapter 6.7: Singular Value Decomposition (SVD)
- Chapter 7.1: The Idea of a Linear Transformation
- Chapter 7.2: The Matrix of a Linear Transformation
- Chapter 7.3: Diagonalization and the Pseudoinverse
- Chapter 8.1: Matrices in Engineering
- Chapter 8.2: Graphs and Networks
- Chapter 8.3: Markov Matrices, Population, and Economics
- Chapter 8.4: Linear Programming
- Chapter 8.5: Fourier Series: Linear Algebra for Functions
- Chapter 8.6: Linear Algebra for Statistics and Probability
- Chapter 8.7: Computer Graphics
- Chapter 9.1: Gaussian Elimination in Practice
- Chapter 9.2: Norms and Condition Numbers
- Chapter 9.3: Iterative Methods and Preconditioners
- Chapter 10.1: Complex Numbers
- Chapter 10.2: Hermitian and Unitary Matrices
- Chapter 10.3: The Fast Fourier Transform
Change of basis matrix M.
The old basis vectors v_j are combinations Σ_i m_ij w_i of the new basis vectors. The coordinates of c_1 v_1 + ... + c_n v_n = d_1 w_1 + ... + d_n w_n are related by d = Mc. (For n = 2, set v_1 = m_11 w_1 + m_21 w_2, v_2 = m_12 w_1 + m_22 w_2.)
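As a quick numerical illustration, here is a minimal NumPy sketch of the relation d = Mc; the two 2-by-2 bases below are made-up examples, not from the book:

```python
import numpy as np

# New basis w1, w2 as columns of W; old basis v1, v2 as columns of V
# (both bases are made-up examples).
W = np.array([[1.0, 1.0],
              [0.0, 1.0]])
V = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Each old basis vector is a combination of the new ones, so V = W M:
# column j of M holds the coefficients m_ij in v_j = sum_i m_ij w_i.
M = np.linalg.solve(W, V)

c = np.array([1.0, 2.0])   # coordinates in the v-basis
d = M @ c                  # coordinates in the w-basis

# Check: c1 v1 + c2 v2 is the same vector as d1 w1 + d2 w2.
assert np.allclose(V @ c, W @ d)
```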
Cofactor C_ij.
Remove row i and column j; multiply the determinant by (-1)^(i+j).
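A small sketch of this recipe in NumPy; the 3-by-3 matrix is a made-up example and `cofactor` is a hypothetical helper name:

```python
import numpy as np

def cofactor(A, i, j):
    """C_ij: delete row i and column j, take (-1)^(i+j) times the minor."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])   # made-up example

# Cofactor expansion along row 0 recovers det(A).
expansion = sum(A[0, j] * cofactor(A, 0, j) for j in range(3))
assert np.isclose(expansion, np.linalg.det(A))
```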
Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).
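A minimal NumPy/SciPy sketch on a made-up solvable system: `lstsq` supplies one particular solution and `null_space` a basis for the nullspace, so every x_p + (nullspace vector) solves Ax = b:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 2.0],
              [2.0, 5.0, 7.0]])              # made-up, rank 2: one free variable
b = np.array([3.0, 10.0])

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
N = null_space(A)                            # columns span the nullspace of A

x = x_p + N @ np.array([2.5])                # x_p plus any nullspace combination
assert np.allclose(A @ x, b)
```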
Dimension of vector space
dim(V) = number of vectors in any basis for V.
Distributive law A(B + C) = AB + AC.
Add then multiply, or multiply then add.
Exponential e^(At) = I + At + (At)^2/2! + ...
has derivative Ae^(At); e^(At) u(0) solves u' = Au.
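A short sketch using SciPy's `expm`; the matrix A, initial value u(0), and time t are made-up examples:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])     # made-up example
u0 = np.array([1.0, 0.0])

t = 0.5
u_t = expm(A * t) @ u0          # u(t) = e^(At) u(0)

# Check u'(t) = A u(t) with a centered finite difference.
h = 1e-6
du = (expm(A * (t + h)) @ u0 - expm(A * (t - h)) @ u0) / (2 * h)
assert np.allclose(du, A @ u_t, atol=1e-5)
```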
Free variable x_i.
Column i has no pivot in elimination. We can give the n - r free variables any values, then Ax = b determines the r pivot variables (if solvable!).
Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
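A minimal classical Gram-Schmidt sketch, assuming independent columns; the test matrix is made up:

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt: A = QR, orthonormal Q, diag(R) > 0."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float)
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component along the earlier q_i
            v = v - R[i, j] * Q[:, i]     # subtract that projection
        R[j, j] = np.linalg.norm(v)       # positive diagonal by convention
        Q[:, j] = v / R[j, j]             # fails only if columns are dependent
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])   # made-up example
Q, R = gram_schmidt(A)
assert np.allclose(Q @ R, A) and np.allclose(Q.T @ Q, np.eye(2))
```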
Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and -).
Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.
Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.
Multiplication Ax = x_1 (column 1) + ... + x_n (column n) = combination of columns.
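A one-line numerical check of this column picture, on made-up data:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # made-up example
x = np.array([2.0, -1.0])
# Ax equals x_1 (column 1) + x_2 (column 2).
assert np.allclose(A @ x, x[0] * A[:, 0] + x[1] * A[:, 1])
```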
Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
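A quick numerical check on a made-up strictly triangular example:

```python
import numpy as np

N = np.triu(np.ones((3, 3)), k=1)   # made-up: upper triangular, zero diagonal
assert np.allclose(np.linalg.matrix_power(N, 3), 0)   # N^3 = 0
assert np.allclose(np.linalg.eigvals(N), 0)           # only eigenvalue is 0
```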
Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers have |l_ij| ≤ 1. See condition number.
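A sketch of LU elimination with partial pivoting, for illustration only; `lu_partial_pivot` is a hypothetical helper name and the test matrix is made up:

```python
import numpy as np

def lu_partial_pivot(A):
    """Elimination with row exchanges: returns P, L, U with P A = L U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P, L = np.eye(n), np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # largest available pivot
        if p != k:
            A[[k, p]] = A[[p, k]]             # swap rows of A
            P[[k, p]] = P[[p, k]]             # record the swap in P
            L[[k, p], :k] = L[[p, k], :k]     # carry earlier multipliers along
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]       # multiplier, |l_ik| <= 1
            A[i, k:] -= L[i, k] * A[k, k:]    # eliminate below the pivot
    return P, L, np.triu(A)

A = np.array([[1e-10, 1.0], [1.0, 1.0]])      # made-up: tiny first pivot
P, L, U = lu_partial_pivot(A)
assert np.allclose(P @ A, L @ U)
```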
Plane (or hyperplane) in R^n.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.
Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
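One standard way to compute it is from the SVD A = U Σ V^T, taking Q = U V^T and H = V Σ V^T; a minimal NumPy sketch on a made-up matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 2.0]])   # made-up example
U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                               # orthogonal factor
H = Vt.T @ np.diag(s) @ Vt               # positive semidefinite factor

assert np.allclose(Q @ H, A)
assert np.allclose(Q.T @ Q, np.eye(2))        # Q is orthogonal
assert np.all(np.linalg.eigvalsh(H) >= 0)     # H is positive semidefinite
```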
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
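The problem itself is easy to state in code; a sketch using SciPy's `linprog` on made-up data (its HiGHS backend is a modern simplex/interior-point solver rather than the hand simplex steps):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 1.0, 2.0])          # made-up cost vector
A_eq = np.array([[1.0, 1.0, 1.0]])     # constraint Ax = b
b_eq = np.array([4.0])

# Minimize c^T x subject to Ax = b and x >= 0.
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x, res.fun)   # optimum x = (0, 4, 0): a corner of the feasible set
```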
Skew-symmetric matrix K.
The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^(Kt) is an orthogonal matrix.
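A quick numerical check of these properties on a made-up 2-by-2 example:

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[0.0, 2.0], [-2.0, 0.0]])   # made-up example, K^T = -K
assert np.allclose(K.T, -K)

eigs = np.linalg.eigvals(K)
assert np.allclose(eigs.real, 0)          # eigenvalues are pure imaginary

Qt = expm(K * 0.7)                        # e^(Kt) at t = 0.7
assert np.allclose(Qt.T @ Qt, np.eye(2))  # e^(Kt) is orthogonal
```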
Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^(-1) is also symmetric.
Vector v in R^n.
Sequence of n real numbers v = (v_1, ..., v_n) = point in R^n.