- Chapter 1: Linear Equations and Matrices
- Chapter 1.1: Systems of Linear Equations
- Chapter 1.2: Matrices
- Chapter 1.3: Matrix Multiplication
- Chapter 1.4: Algebraic Properties of Matrix Operations
- Chapter 1.5: Special Types of Matrices and Partitioned Matrices
- Chapter 1.6: Matrix Transformations
- Chapter 1.7: Computer Graphics (Optional)
- Chapter 1.8: Correlation Coefficient (Optional)
- Chapter 2: Solving Linear Systems
- Chapter 2.1: Echelon Form of a Matrix
- Chapter 2.2: Solving Linear Systems
- Chapter 2.3: Elementary Matrices; Finding A^-1
- Chapter 2.4: Equivalent Matrices
- Chapter 2.5: LU-Factorization (Optional)
- Chapter 3: Determinants
- Chapter 3.1: Definition
- Chapter 3.2: Properties of Determinants
- Chapter 3.3: Cofactor Expansion
- Chapter 3.4: Inverse of a Matrix
- Chapter 3.5: Other Applications of Determinants
- Chapter 3.6: Determinants from a Computational Point of View
- Chapter 4: Real Vector Spaces
- Chapter 4.1: Vectors in the Plane and in 3-Space
- Chapter 4.2: Vector Spaces
- Chapter 4.3: Subspaces
- Chapter 4.4: Span
- Chapter 4.5: Linear Independence
- Chapter 4.6: Basis and Dimension
- Chapter 4.7: Homogeneous Systems
- Chapter 4.8: Coordinates and Isomorphisms
- Chapter 4.9: Rank of a Matrix
- Chapter 5: Inner Product Spaces
- Chapter 5.1: Length and Direction in R2 and R3
- Chapter 5.2: Cross Product in R3 (Optional)
- Chapter 5.3: Inner Product Spaces
- Chapter 5.4: Gram-Schmidt Process
- Chapter 5.5: Orthogonal Complements
- Chapter 5.6: Least Squares (Optional)
- Chapter 6: Linear Transformations and Matrices
- Chapter 6.1: Definition and Examples
- Chapter 6.2: Kernel and Range of a Linear Transformation
- Chapter 6.3: Matrix of a Linear Transformation
- Chapter 6.4: Vector Space of Matrices and Vector Space of Linear Transformations (Optional)
- Chapter 6.5: Similarity
- Chapter 6.6: Introduction to Homogeneous Coordinates (Optional)
- Chapter 7: Eigenvalues and Eigenvectors
- Chapter 7.1: Eigenvalues and Eigenvectors
- Chapter 7.2: Diagonalization and Similar Matrices
- Chapter 7.3: Diagonalization of Symmetric Matrices
- Chapter 8: Applications of Eigenvalues and Eigenvectors (Optional)
- Chapter 8.1: Stable Age Distribution in a Population; Markov Processes
- Chapter 8.2: Spectral Decomposition and Singular Value Decomposition
- Chapter 8.3: Dominant Eigenvalue and Principal Component Analysis
- Chapter 8.4: Differential Equations
- Chapter 8.6: Real Quadratic Forms
- Chapter 8.7: Conic Sections
- Chapter 8.8: Quadric Surfaces
Elementary Linear Algebra with Applications 9th Edition - Solutions by Chapter
Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when the edges go both ways (undirected).
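As a quick illustration (not from the text), here is a hypothetical 3-node undirected graph built with NumPy, checking the symmetry condition:

```python
import numpy as np

# Hypothetical undirected graph on 3 nodes with edges 0-1 and 1-2.
# A[i, j] = 1 when there is an edge from node i to node j, else 0.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# Edges go both ways in an undirected graph, so A equals its transpose.
print(np.array_equal(A, A.T))  # True
```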
Cholesky factorization.
A = C^T C = (L√D)(L√D)^T for positive definite A.
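A minimal NumPy sketch with an arbitrarily chosen positive definite matrix; np.linalg.cholesky returns the lower-triangular factor, so C = L^T gives the A = C^T C form above:

```python
import numpy as np

# A small positive definite matrix (arbitrary example).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# np.linalg.cholesky returns lower-triangular L with A = L @ L.T,
# so C = L.T recovers the A = C^T C factorization.
L = np.linalg.cholesky(A)
C = L.T
print(np.allclose(A, C.T @ C))  # True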
Condition number cond(A).
cond(A) = c(A) = ||A|| ||A^-1|| = σ_max/σ_min. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to changes in the input.
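A short check (with a nearly singular matrix chosen for illustration) that NumPy's cond agrees with the ratio of extreme singular values:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])  # nearly singular, so badly conditioned

# cond(A) in the 2-norm equals sigma_max / sigma_min.
sigma = np.linalg.svd(A, compute_uv=False)  # sorted largest to smallest
print(np.linalg.cond(A), sigma[0] / sigma[-1])  # the two values agree
```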
Cross product u × v in R3.
Vector perpendicular to u and v, length ||u|| ||v|| |sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
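A small worked example (vectors chosen arbitrarily) confirming the perpendicularity and area properties:

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])

w = np.cross(u, v)                 # perpendicular to both u and v
print(w)                           # [0. 0. 1.]
print(np.dot(w, u), np.dot(w, v))  # 0.0 0.0

# ||u x v|| equals the area of the parallelogram spanned by u and v.
print(np.linalg.norm(w))           # 1.0
```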
Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^-1 A S = Λ = eigenvalue matrix.
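A sketch of the diagonalization S^-1 A S = Λ for an arbitrary 2 by 2 example with two different eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigenvectors go into the columns of S; eigenvalues onto the diagonal of Lambda.
eigvals, S = np.linalg.eig(A)
Lambda = np.linalg.inv(S) @ A @ S
print(np.allclose(Lambda, np.diag(eigvals)))  # True: S^-1 A S = Lambda
```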
Dimension of vector space V.
dim(V) = number of vectors in any basis for V.
Fourier matrix F.
Entries F_jk = e^(2πijk/n) give orthogonal columns: F^H F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform y_j = Σ c_k e^(2πijk/n).
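A minimal sketch building F directly and checking both the orthogonality F^H F = nI and the match with NumPy's inverse FFT (which uses a 1/n scaling convention):

```python
import numpy as np

n = 4
idx = np.arange(n)
F = np.exp(2j * np.pi * np.outer(idx, idx) / n)    # F_jk = e^(2*pi*i*jk/n)

# Columns are orthogonal: F^H F = n I.
print(np.allclose(F.conj().T @ F, n * np.eye(n)))  # True

# y = F c matches NumPy's inverse FFT up to the 1/n scaling convention.
c = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(F @ c, n * np.fft.ifft(c)))      # True
```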
Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^-1].
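A compact sketch of this row-reduction idea (with partial pivoting added for numerical stability; the example matrix is arbitrary):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A I] to [I A^-1], with partial pivoting for stability."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])    # augmented matrix [A I]
    for col in range(n):
        p = col + np.argmax(np.abs(M[col:, col]))  # best remaining pivot row
        M[[col, p]] = M[[p, col]]                  # swap it into place
        M[col] /= M[col, col]                      # make the pivot equal 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]         # clear the rest of the column
    return M[:, n:]                                # right half is now A^-1

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(np.allclose(gauss_jordan_inverse(A), np.linalg.inv(A)))  # True
```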
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
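A quick numerical confirmation, building the matrix from its entry formula (0-based indices shift the −1):

```python
import numpy as np

n = 8
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)   # H_ij = 1/(i + j - 1) with 1-based indices

print(np.all(np.linalg.eigvalsh(H) > 0))  # True: positive definite
print(np.linalg.cond(H))                  # huge (~1e10): ill-conditioned
```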
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries −1 and 1 in columns i and j.
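For concreteness, a hypothetical 3-node directed graph and its incidence matrix built row by row:

```python
import numpy as np

# Hypothetical directed graph: edges 0->1, 1->2, 0->2 on 3 nodes.
edges = [(0, 1), (1, 2), (0, 2)]
m, n = len(edges), 3

A = np.zeros((m, n))
for row, (i, j) in enumerate(edges):
    A[row, i] = -1   # the edge leaves node i
    A[row, j] = 1    # the edge enters node j
print(A)
```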
Normal equation A^T A x = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A) · (b − Ax) = 0.
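A worked sketch on an arbitrary overdetermined system, checking both the orthogonality statement and agreement with NumPy's least squares routine:

```python
import numpy as np

# Overdetermined system: find the best least squares x for Ax = b.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Solve the normal equation A^T A x = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# The residual b - A x_hat is orthogonal to the columns of A,
# and the answer matches NumPy's least squares routine.
print(np.allclose(A.T @ (b - A @ x_hat), 0))                     # True
print(np.allclose(x_hat, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```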
Pascal matrix P_S = pascal(n).
The symmetric matrix with binomial entries C(i + j − 2, i − 1). P_S = P_L P_U; all three contain Pascal's triangle and have det = 1 (see Pascal in the index).
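A small sketch (0-based indices, so the entry becomes C(i + j, i)) that builds the symmetric and lower Pascal matrices and verifies P_S = P_L P_U with P_U = P_L^T:

```python
from math import comb

import numpy as np

n = 4
# Symmetric Pascal matrix: entry (i, j) is C(i + j - 2, i - 1) in 1-based terms.
PS = np.array([[comb(i + j, i) for j in range(n)] for i in range(n)], dtype=float)
PL = np.array([[comb(i, j) for j in range(n)] for i in range(n)], dtype=float)

print(PS)                          # rows of Pascal's triangle
print(np.allclose(PS, PL @ PL.T))  # True: P_S = P_L P_U with P_U = P_L^T
print(round(np.linalg.det(PS)))    # 1
```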
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
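SymPy's rref makes this easy to see; its second return value lists the pivot column indices (the example matrix is arbitrary):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 4],
               [1, 2, 3, 5],
               [2, 4, 5, 9]])

# rref() returns the reduced row echelon form and the pivot column indices.
R, pivot_cols = A.rref()
print(pivot_cols)                      # (0, 2): columns 0 and 2 hold pivots
basis = [A[:, c] for c in pivot_cols]  # those columns of A span the column space
```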
Plane (or hyperplane) in Rn.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.
Rayleigh quotient q(x) = x^T A x / x^T x for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
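A quick randomized check (random symmetric matrix and random test vector) that the quotient always lands between the extreme eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2               # symmetrize

eigvals = np.linalg.eigvalsh(A)  # sorted: eigvals[0] = min, eigvals[-1] = max

def q(x):
    return (x @ A @ x) / (x @ x)  # Rayleigh quotient

x = rng.standard_normal(4)
print(eigvals[0] <= q(x) <= eigvals[-1])  # True for every nonzero x
```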
Saddle point of f(x1, ..., xn).
A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂x_i ∂x_j = Hessian matrix) is indefinite.
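The classic example f(x, y) = x² − y² makes the indefiniteness concrete, since its Hessian is constant:

```python
import numpy as np

# f(x, y) = x^2 - y^2 has first derivatives zero at the origin.
# Its Hessian there is indefinite: one positive and one negative eigenvalue.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
eig = np.linalg.eigvalsh(H)
print(eig)                   # [-2.  2.]
print(eig[0] < 0 < eig[-1])  # True: (0, 0) is a saddle point
```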
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
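SciPy's linprog is not the textbook tableau method, but it solves the same minimize-c^T x problem (its HiGHS backend includes a simplex solver); a minimal sketch with an arbitrary cost vector and one equality constraint:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize c^T x subject to A_eq x = b_eq and x >= 0 (the default bounds).
c = np.array([1.0, 2.0, 0.0])
A_eq = np.array([[1.0, 1.0, 1.0]])
b_eq = np.array([4.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs")
print(res.x)  # [0. 0. 4.]: the optimum sits at a corner of the feasible set
```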
Trace of A.
Sum of the diagonal entries = sum of the eigenvalues of A. Tr AB = Tr BA.
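Both identities are easy to verify numerically on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Trace = sum of diagonal entries = sum of eigenvalues.
print(np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real))  # True
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))              # True
```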
Unitary matrix U^H = Ū^T = U^-1.
Orthonormal columns (complex analog of Q).
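A quick way to produce a unitary matrix for testing is the Q factor from the QR factorization of a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(Z)  # Q from the QR of a complex matrix is unitary

# U^H equals U^-1, i.e., the columns are orthonormal.
print(np.allclose(U.conj().T @ U, np.eye(3)))  # True
```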
Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.