- Chapter 1.1: Lines and Linear Equations
- Chapter 1.2: Linear Systems and Matrices
- Chapter 1.3: Numerical Solutions
- Chapter 1.4: Applications of Linear Systems
- Chapter 2.1: Vectors
- Chapter 2.2: Span
- Chapter 2.3: Linear Independence
- Chapter 3.1: Linear Transformations
- Chapter 3.2: Matrix Algebra
- Chapter 3.3: Inverses
- Chapter 3.4: LU Factorization
- Chapter 3.5: Markov Chains
- Chapter 4.1: Introduction to Subspaces
- Chapter 4.2: Basis and Dimension
- Chapter 4.3: Row and Column Spaces
- Chapter 5.1: The Determinant Function
- Chapter 5.2: Properties of the Determinant
- Chapter 5.3: Applications of the Determinant
- Chapter 6.1: Eigenvalues and Eigenvectors
- Chapter 6.2: Approximation Methods
- Chapter 6.3: Change of Basis
- Chapter 6.4: Diagonalization
- Chapter 6.5: Complex Eigenvalues
- Chapter 6.6: Systems of Differential Equations
- Chapter 7.1: Vector Spaces and Subspaces
- Chapter 7.2: Span and Linear Independence
- Chapter 7.3: Basis and Dimension
- Chapter 8.1: Dot Products and Orthogonal Sets
- Chapter 8.2: Projection and the Gram-Schmidt Process
- Chapter 8.3: Diagonalizing Symmetric Matrices and QR Factorization
- Chapter 8.4: The Singular Value Decomposition
- Chapter 8.5: Least Squares Regression
- Chapter 9.1: Definition and Properties
- Chapter 9.2: Isomorphisms
- Chapter 9.3: The Matrix of a Linear Transformation
- Chapter 9.4: Similarity
- Chapter 10.1: Inner Products
- Chapter 10.2: The Gram-Schmidt Process Revisited
- Chapter 10.3: Applications of Inner Products
- Chapter 11.1: Quadratic Forms
- Chapter 11.2: Positive Definite Matrices
- Chapter 11.3: Constrained Optimization
- Chapter 11.4: Complex Vector Spaces
- Chapter 11.5: Hermitian Matrices
Linear Algebra with Applications 1st Edition - Solutions by Chapter
Affine transformation Tv = Av + v0.
Linear transformation plus shift.
Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
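The solvability condition above can be checked numerically: rank [A b] equals rank A exactly when b is in the column space. A minimal pure-Python sketch (the `rank` helper and the example matrices are illustrative, not from the text):

```python
def rank(rows, tol=1e-12):
    """Rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in rows]          # work on a copy
    m, n = len(M), len(M[0])
    r = 0                                  # current pivot row
    for c in range(n):
        p = max(range(r, m), key=lambda i: abs(M[i][c]), default=None)
        if p is None or abs(M[p][c]) < tol:
            continue                       # no pivot in this column
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, m):
            f = M[i][c] / M[r][c]
            for j in range(c, n):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

A = [[1.0, 2.0], [2.0, 4.0]]              # rank 1: second row = 2 * first
b_good = [3.0, 6.0]                        # in the column space of A
b_bad = [3.0, 7.0]                         # not in the column space
aug = lambda A, b: [row + [bi] for row, bi in zip(A, b)]
print(rank(A), rank(aug(A, b_good)), rank(aug(A, b_bad)))   # 1 1 2
```

With `b_bad`, the augmented matrix gains a pivot in the last column, so the rank jumps and the system is inconsistent.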
Big formula for n by n determinants.
Det(A) is a sum of n! terms. For each term, multiply one entry from each row and column of A: rows in order 1, ..., n and column order given by a permutation P. Each of the n! P's carries a + or - sign.
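The big formula can be written out directly with `itertools.permutations`. A sketch for small n (the example matrices are made up for illustration):

```python
import math
from itertools import permutations

def sign(p):
    """Parity of a permutation: +1 for even, -1 for odd (via cycle lengths)."""
    s, seen = 1, set()
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:               # trace the cycle containing i
            seen.add(j)
            j = p[j]
            length += 1
        if length % 2 == 0:                # even-length cycle flips the sign
            s = -s
    return s

def det(A):
    """Big formula: signed sum over all n! permutations."""
    n = len(A)
    return sum(sign(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det([[1.0, 2.0], [3.0, 4.0]]))   # -2.0, i.e. 1*4 - 2*3
```

The n! growth makes this impractical beyond tiny matrices; elimination is the practical route, but the formula is the definition being illustrated here.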
Change of basis matrix M.
The old basis vectors v_j are combinations Σ_i m_ij w_i of the new basis vectors. The coordinates of c1 v1 + ... + cn vn = d1 w1 + ... + dn wn are related by d = Mc. (For n = 2: v1 = m11 w1 + m21 w2, v2 = m12 w1 + m22 w2.)
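For the n = 2 case, the coordinate change d = Mc is one matrix-vector product. A small sketch with made-up values for M and c:

```python
# Old basis v1, v2 written in terms of new basis w1, w2:
# v1 = m11 w1 + m21 w2,  v2 = m12 w1 + m22 w2  ->  M = [[m11, m12], [m21, m22]]
M = [[1.0, 1.0],
     [0.0, 1.0]]         # example: v1 = w1, v2 = w1 + w2
c = [2.0, 3.0]           # coordinates in the old basis: 2 v1 + 3 v2

# d = M c gives the same vector's coordinates in the new basis
d = [sum(M[i][j] * c[j] for j in range(2)) for i in range(2)]
print(d)   # [5.0, 3.0]: indeed 2 v1 + 3 v2 = 2 w1 + 3(w1 + w2) = 5 w1 + 3 w2
```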
Cholesky factorization A = C^T C = (L√D)(L√D)^T for positive definite A.
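A sketch of the standard Cholesky recurrence, producing lower-triangular L with A = L L^T (so C = L^T gives A = C^T C); the example matrix is chosen here, not taken from the text:

```python
import math

def cholesky(A):
    """Lower-triangular L with A = L L^T, for positive definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # positive for pos. def. A
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)
# reconstruct A from L L^T (recovers A up to rounding)
R = [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)]
     for i in range(2)]
print(L)
print(R)
```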
Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A - λI) = 0.
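For a 2x2 matrix, det(A - λI) = 0 is the quadratic λ^2 - trace·λ + det = 0. A sketch with an example symmetric matrix (chosen for illustration, so the eigenvalues are real):

```python
import math

A = [[2.0, 1.0],
     [1.0, 2.0]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * det)        # real since A is symmetric
lams = [(tr + disc) / 2, (tr - disc) / 2]
print(lams)                                 # [3.0, 1.0]

# Check Ax = lambda x for the eigenvector x = (1, 1) of lambda = 3
x = [1.0, 1.0]
Ax = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
assert Ax == [3.0, 3.0]
```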
Exponential e^At = I + At + (At)^2/2! + ...
has derivative Ae^At; e^At u(0) solves u' = Au.
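The series can be summed term by term. This sketch truncates after 30 terms and checks a diagonal example, where e^At = diag(e^(a11 t), e^(a22 t)) is known exactly (the matrix is a made-up illustration):

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=30):
    """Partial sum of I + At + (At)^2/2! + ... for a 2x2 matrix."""
    At = [[t * a for a in row] for row in A]
    S = [[1.0, 0.0], [0.0, 1.0]]     # running sum, starts at I
    P = [[1.0, 0.0], [0.0, 1.0]]     # running term (At)^k / k!
    for k in range(1, terms):
        P = mat_mul(P, At)
        P = [[p / k for p in row] for row in P]
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S

E = expm([[1.0, 0.0], [0.0, 2.0]], 1.0)
print(E[0][0], E[1][1])    # approximately e and e^2
```

For non-normal matrices the raw series can behave badly in floating point; production codes use scaling-and-squaring instead, but the definition is exactly this sum.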
Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicularity comes from Ax = 0; the row space has dimension r, the nullspace n - r). Applied to A^T: the column space C(A) is the orthogonal complement of N(A^T) in R^m.
Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j - 1) = ∫_0^1 x^(i-1) x^(j-1) dx. Positive definite but with extremely small λ_min and a large condition number: H is ill-conditioned.
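The near-singularity shows up already in exact arithmetic: det(hilb(n)) collapses toward zero as n grows. A sketch using `fractions.Fraction` and the n!-term permutation formula (fine for small n; the helper names are mine):

```python
from fractions import Fraction
from itertools import permutations

def hilb(n):
    """Hilbert matrix with exact rational entries 1/(i + j - 1), 1-based."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det(A):
    """Exact determinant via the permutation formula (small n only)."""
    n = len(A)
    total = Fraction(0)
    for p in permutations(range(n)):
        prod = Fraction(1)
        for i in range(n):
            prod *= A[i][p[i]]
        # parity by counting inversions of the permutation
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        total += prod if inv % 2 == 0 else -prod
    return total

for n in (2, 3, 4, 5):
    print(n, det(hilb(n)))     # 1/12, 1/2160, then rapidly toward zero
```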
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j.
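Building the matrix for a small made-up graph shows the -1/+1 pattern, and that every row sums to zero:

```python
# Edge-node incidence matrix: one row per edge, -1 in the start-node
# column and +1 in the end-node column.
edges = [(0, 1), (0, 2), (1, 2)]     # example graph: 3 edges on 3 nodes
n_nodes = 3

A = []
for i, j in edges:
    row = [0] * n_nodes
    row[i], row[j] = -1, 1
    A.append(row)

print(A)            # [[-1, 1, 0], [-1, 0, 1], [0, -1, 1]]
# Every row sums to zero, so the all-ones vector is in the nullspace of A
assert all(sum(row) == 0 for row in A)
```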
Independent vectors v1, ..., vk.
No combination c1 v1 + ... + ck vk = zero vector unless all ci = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
Kronecker product (tensor product) A ® B.
Blocks a_ij B; eigenvalues λ_p(A) λ_q(B).
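A sketch of the block construction; diagonal example matrices (chosen for illustration) make the eigenvalue product rule λ_p(A)·λ_q(B) directly visible on the diagonal of the result:

```python
def kron(A, B):
    """Kronecker product: block (i, j) of the result is a_ij * B."""
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    K = [[0.0] * (n * q) for _ in range(m * p)]
    for i in range(m):
        for j in range(n):
            for r in range(p):
                for s in range(q):
                    K[i * p + r][j * q + s] = A[i][j] * B[r][s]
    return K

A = [[2.0, 0.0], [0.0, 3.0]]
B = [[5.0, 0.0], [0.0, 7.0]]
K = kron(A, B)
# Diagonal entries are all products of an eigenvalue of A and one of B
print([K[i][i] for i in range(4)])    # [10.0, 14.0, 15.0, 21.0]
```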
Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^(j-1) b. Numerical methods approximate A^(-1) b by x_j with residual b - Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
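Each new basis vector costs exactly one multiplication by A. A sketch with a made-up 2x2 matrix and starting vector:

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Build K_3(A, b) = span{b, Ab, A^2 b}: one mat-vec per step
A = [[2.0, 1.0], [0.0, 3.0]]
b = [0.0, 1.0]
basis = [b]
for _ in range(2):
    basis.append(mat_vec(A, basis[-1]))
print(basis)    # [[0.0, 1.0], [1.0, 3.0], [5.0, 9.0]]
```

In practice these raw vectors become nearly parallel, so Krylov methods orthogonalize them as they go (Arnoldi/Lanczos), but the subspace is the one spanned here.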
Left inverse A+.
If A has full column rank n, then A^+ = (A^T A)^(-1) A^T has A^+ A = I_n.
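A sketch computing A^+ for a made-up 3x2 matrix of full column rank and confirming A^+ A = I_2:

```python
A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]

# A^T A is 2x2, so its inverse has a closed form
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
d = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
inv = [[AtA[1][1] / d, -AtA[0][1] / d],
       [-AtA[1][0] / d, AtA[0][0] / d]]

# A^+ = (A^T A)^{-1} A^T, a 2x3 matrix
Aplus = [[sum(inv[i][k] * A[j][k] for k in range(2)) for j in range(3)]
         for i in range(2)]

# A^+ A should be the 2x2 identity (up to rounding)
I2 = [[sum(Aplus[i][k] * A[k][j] for k in range(3)) for j in range(2)]
      for i in range(2)]
print(I2)
```

Note A A^+ is not the identity (A has no right inverse here): it is the projection onto the column space of A.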
Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).
Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A - λI) if no eigenvalues are repeated; m(λ) always divides p(λ).
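The distinction is easy to see on 2x2 examples (both matrices below are standard illustrations, chosen here, not taken from the text):

```python
# A = I: characteristic polynomial p(lambda) = (lambda - 1)^2,
# but the minimal polynomial is m(lambda) = lambda - 1, since A - I = 0.
A = [[1.0, 0.0], [0.0, 1.0]]
AmI = [[A[i][j] - (1.0 if i == j else 0.0) for j in range(2)] for i in range(2)]
assert AmI == [[0.0, 0.0], [0.0, 0.0]]       # m(A) = A - I is the zero matrix

# Contrast: B = [[1, 1], [0, 1]] has the same p(lambda) = (lambda - 1)^2,
# but B - I != 0, so here m(lambda) = p(lambda) = (lambda - 1)^2.
B = [[1.0, 1.0], [0.0, 1.0]]
BmI = [[B[i][j] - (1.0 if i == j else 0.0) for j in range(2)] for i in range(2)]
assert BmI != [[0.0, 0.0], [0.0, 0.0]]
BmI2 = [[sum(BmI[i][k] * BmI[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert BmI2 == [[0.0, 0.0], [0.0, 0.0]]      # (B - I)^2 = 0
```

In both cases m(λ) divides p(λ); the repeated eigenvalue lets the minimal polynomial drop degree only when the matrix is diagonalizable.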
Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate)/(jth pivot).
Network.
A directed graph that has constants c1, ..., cm associated with the edges.
Norm ||A||.
The "ℓ^2 norm" of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x||, ||AB|| ≤ ||A|| ||B||, and ||A + B|| ≤ ||A|| + ||B||. The Frobenius norm has ||A||_F^2 = Σ Σ a_ij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |a_ij|.
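The ℓ^1, ℓ^∞, and Frobenius norms are direct to compute from the entries; a sketch with made-up matrices that also checks the triangle inequality for ℓ^1:

```python
import math

def l1_norm(A):
    """Largest absolute column sum."""
    return max(sum(abs(A[i][j]) for i in range(len(A)))
               for j in range(len(A[0])))

def linf_norm(A):
    """Largest absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

def fro_norm(A):
    """Frobenius norm: square root of the sum of squared entries."""
    return math.sqrt(sum(a * a for row in A for a in row))

A = [[1.0, -2.0], [3.0, 4.0]]
print(l1_norm(A), linf_norm(A), fro_norm(A))   # 6.0 7.0 sqrt(30)

B = [[0.0, 1.0], [1.0, 0.0]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
assert l1_norm(S) <= l1_norm(A) + l1_norm(B)   # triangle inequality
```

The ℓ^2 norm σ_max has no such entrywise formula; it needs the singular values, which is why the easily computed ℓ^1 and ℓ^∞ norms are common in error bounds.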