- Chapter 1: Matrices and Systems of Equations
- Chapter 1.1: Systems of Linear Equations
- Chapter 1.2: Row Echelon Form
- Chapter 1.3: Matrix Arithmetic
- Chapter 1.4: Matrix Algebra
- Chapter 1.5: Elementary Matrices
- Chapter 1.6: Partitioned Matrices
- Chapter 2: Determinants
- Chapter 2.1: The Determinant of a Matrix
- Chapter 2.2: Properties of Determinants
- Chapter 2.3: Additional Topics and Applications
- Chapter 3: Vector Spaces
- Chapter 3.1: Definition and Examples
- Chapter 3.2: Subspaces
- Chapter 3.3: Linear Independence
- Chapter 3.4: Basis and Dimension
- Chapter 3.5: Change of Basis
- Chapter 3.6: Row Space and Column Space
- Chapter 4: Linear Transformations
- Chapter 4.1: Definition and Examples
- Chapter 4.2: Matrix Representations of Linear Transformations
- Chapter 4.3: Similarity
- Chapter 5: Orthogonality
- Chapter 5.1: The Scalar Product in R^n
- Chapter 5.2: Orthogonal Subspaces
- Chapter 5.3: Least Squares Problems
- Chapter 5.4: Inner Product Spaces
- Chapter 5.5: Orthonormal Sets
- Chapter 5.6: The Gram-Schmidt Orthogonalization Process
- Chapter 5.7: Orthogonal Polynomials
- Chapter 6: Eigenvalues
- Chapter 6.1: Eigenvalues and Eigenvectors
- Chapter 6.2: Systems of Linear Differential Equations
- Chapter 6.3: Diagonalization
- Chapter 6.4: Hermitian Matrices
- Chapter 6.5: The Singular Value Decomposition
- Chapter 6.6: Quadratic Forms
- Chapter 6.7: Positive Definite Matrices
- Chapter 6.8: Nonnegative Matrices
- Chapter 7: Numerical Linear Algebra
- Chapter 7.1: Floating-Point Numbers
- Chapter 7.2: Gaussian Elimination
- Chapter 7.3: Pivoting Strategies
- Chapter 7.4: Matrix Norms and Condition Numbers
- Chapter 7.5: Orthogonal Transformations
- Chapter 7.6: The Eigenvalue Problem
- Chapter 7.7: Least Squares Problems
Linear Algebra with Applications 9th Edition - Solutions by Chapter
Commuting matrices AB = BA.
If diagonalizable, they share a common set of n eigenvectors.
Exponential e^{At} = I + At + (At)^2/2! + ...
has derivative Ae^{At}; e^{At}u(0) solves u' = Au.
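The series definition can be checked numerically. This is a minimal sketch using NumPy; the truncation order and the example matrix (chosen nilpotent so the series terminates exactly) are illustrative choices, not from the source:

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Approximate e^{At} by summing I + At + (At)^2/2! + ..."""
    n = A.shape[0]
    X = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ (A * t) / k      # term is now (At)^k / k!
        X = X + term
    return X

# Example: A^2 = 0, so the series terminates after two terms
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
E = expm_series(A, 2.0)               # exactly I + 2A = [[1, 2], [0, 1]]
```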
Fourier matrix F.
Entries F_jk = e^{2πijk/n} give orthogonal columns, so conj(F)^T F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform, y_j = ∑_k c_k e^{2πijk/n}.
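Both claims can be verified numerically. A sketch using NumPy (the size n = 8 and the test vector are arbitrary; note NumPy's ifft includes an extra 1/n factor relative to y = Fc):

```python
import numpy as np

n = 8
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(2j * np.pi * jj * kk / n)     # F_jk = e^{2πijk/n}

# Orthogonal columns: conj(F)^T F = n I
ortho = np.allclose(F.conj().T @ F, n * np.eye(n))

# y = Fc is the inverse DFT, up to NumPy's 1/n normalization
c = np.arange(n, dtype=float)
matches_ifft = np.allclose(F @ c, n * np.fft.ifft(c))
```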
Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.
Hermitian matrix A^H = conj(A)^T = A.
Complex analog a_ji = conj(a_ij) of a symmetric matrix.
Left inverse A^+.
If A has full column rank n, then A^+ = (A^T A)^{-1} A^T satisfies A^+ A = I_n.
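A quick numerical check of the left-inverse formula, sketched with NumPy (the 3-by-2 example matrix is an arbitrary full-column-rank choice):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                    # full column rank, n = 2

A_plus = np.linalg.inv(A.T @ A) @ A.T         # A^+ = (A^T A)^{-1} A^T
left_inverse_ok = np.allclose(A_plus @ A, np.eye(2))    # A^+ A = I_n
same_as_pinv = np.allclose(A_plus, np.linalg.pinv(A))   # agrees with pseudoinverse
```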
Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.
Matrix multiplication AB.
The i, j entry of AB is (row i of A)·(column j of B) = ∑_k a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of AB = (row i of A) times B. Columns times rows: AB = sum over k of (column k of A)(row k of B). All these equivalent definitions come from the rule that (AB)x = A(Bx).
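The equivalent ways of forming AB can be compared directly. A sketch using NumPy (the matrix shapes and the seeded random entries are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 4))
B = rng.random((4, 2))
AB = A @ B

# Entry by entry: (AB)_ij = sum_k a_ik b_kj
entry = sum(A[0, k] * B[k, 1] for k in range(4))

# By columns: column j of AB = A times column j of B
col = A @ B[:, 1]

# Columns times rows: AB = sum over k of (column k of A)(row k of B)
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(4))
```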
Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^{-1}. Preserves lengths and angles: ||Qx|| = ||x|| and (Qx)^T(Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
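These properties can be confirmed for a concrete rotation. A sketch using NumPy (the angle and test vector are arbitrary choices):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation matrix

inverse_is_transpose = np.allclose(Q.T, np.linalg.inv(Q))       # Q^T = Q^{-1}
x = np.array([3.0, 4.0])
length_preserved = np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
unit_eigenvalues = np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0)  # all |λ| = 1
```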
Particular solution x_p.
Any solution to Ax = b; often x_p has all free variables = 0.
Pascal matrix P_S = pascal(n).
The symmetric matrix with binomial entries (i+j-2 choose i-1). P_S = P_L P_U; all contain Pascal's triangle and have det = 1 (see Pascal in the index).
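pascal(n) is a MATLAB function; a sketch of the same matrices in Python with NumPy, checking the factorization and determinant claims (the size n = 5 is arbitrary, and P_U is taken as P_L^T):

```python
import numpy as np
from math import comb

n = 5
# Symmetric Pascal matrix: S_ij = C(i+j-2, i-1), 1-indexed (MATLAB's pascal(n))
S = np.array([[comb(i + j - 2, i - 1) for j in range(1, n + 1)]
              for i in range(1, n + 1)], dtype=float)
# Lower-triangular Pascal matrix: L_ij = C(i-1, j-1)
L = np.array([[comb(i - 1, j - 1) for j in range(1, n + 1)]
              for i in range(1, n + 1)], dtype=float)

factorization_ok = np.allclose(S, L @ L.T)   # P_S = P_L P_U with P_U = P_L^T
det_is_one = round(np.linalg.det(S)) == 1    # unit determinant
```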
Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.
Plane (or hyperplane) in R^n.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.
Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
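A numerical sketch using NumPy (the tridiagonal example matrix is an arbitrary positive definite choice; Cholesky's A = LL^T is used here as the positive-definiteness check, a close relative of the A = LDL^T factorization above):

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])            # symmetric tridiagonal

eigs_positive = bool(np.all(np.linalg.eigvalsh(A) > 0))   # positive eigenvalues

# Cholesky A = L L^T succeeds exactly when A is positive definite
L = np.linalg.cholesky(A)
cholesky_ok = np.allclose(L @ L.T, A)

x = np.array([1.0, -2.0, 0.5])                # some nonzero x
quadratic_positive = x @ A @ x > 0            # x^T A x > 0
```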
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S^⊥. If the columns of A are a basis for S, then P = A(A^T A)^{-1} A^T.
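The projection formula and its properties check out numerically. A sketch using NumPy (the basis matrix and the vector b are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                    # columns form a basis for S
P = A @ np.linalg.inv(A.T @ A) @ A.T          # P = A (A^T A)^{-1} A^T

idempotent = np.allclose(P @ P, P)            # P^2 = P
symmetric = np.allclose(P, P.T)               # P = P^T

b = np.array([1.0, 3.0, 2.0])
e = b - P @ b                                 # error vector
error_perpendicular = np.allclose(A.T @ e, 0) # e is perpendicular to S
```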
Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal for randn.
Saddle point of f(x_1, ..., x_n).
A point where the first derivatives of f are zero and the second-derivative matrix (∂²f/∂x_i∂x_j = Hessian matrix) is indefinite.
Singular matrix A.
A square matrix that has no inverse: det(A) = 0.
Singular Value Decomposition
(SVD) A = UΣV^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with Av_i = σ_i u_i and singular value σ_i > 0. The last columns are orthonormal bases of the nullspaces.
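The factorization and the relation Av_i = σ_i u_i can be checked with NumPy's svd (the 4-by-3 seeded random matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 3))
U, s, Vt = np.linalg.svd(A)                  # A = U Σ V^T; s holds the σ_i

Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)
reconstructed = np.allclose(A, U @ Sigma @ Vt)

# A v_i = σ_i u_i (v_i is row i of Vt, i.e. column i of V)
av_equals_sigma_u = all(np.allclose(A @ Vt[i], s[i] * U[:, i]) for i in range(3))
```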
Transpose matrix AT.
Entries (A^T)_ij = A_ji. If A is m by n, A^T is n by m; A^T A is square, symmetric, and positive semidefinite. The transposes of AB and A^{-1} are B^T A^T and (A^T)^{-1}.
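The transpose identities above, checked numerically with NumPy (the matrix shapes are arbitrary; the +3I shift just keeps the square example safely invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((3, 2))
B = rng.random((2, 4))
product_rule = np.allclose((A @ B).T, B.T @ A.T)      # (AB)^T = B^T A^T

M = rng.random((3, 3)) + 3 * np.eye(3)                # safely invertible
inverse_rule = np.allclose(np.linalg.inv(M).T, np.linalg.inv(M.T))  # (A^{-1})^T = (A^T)^{-1}

# A^T A is positive semidefinite: no negative eigenvalues
semidefinite = bool(np.all(np.linalg.eigvalsh(A.T @ A) >= -1e-12))
```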