- Chapter 1: Linear Equations
- Chapter 1.1: Introduction to Linear Systems
- Chapter 1.2: Matrices, Vectors, and Gauss-Jordan Elimination
- Chapter 1.3: On the Solutions of Linear Systems; Matrix Algebra
- Chapter 2: Linear Transformations
- Chapter 2.1: Introduction to Linear Transformations and Their Inverses
- Chapter 2.2: Linear Transformations in Geometry
- Chapter 2.3: Matrix Products
- Chapter 2.4: The Inverse of a Linear Transformation
- Chapter 3.1: Image and Kernel of a Linear Transformation
- Chapter 3.2: Subspaces of R^n; Bases and Linear Independence
- Chapter 3.3: The Dimension of a Subspace of R^n
- Chapter 3.4: Coordinates
- Chapter 4: Linear Spaces
- Chapter 4.1: Introduction to Linear Spaces
- Chapter 4.2: Linear Transformations and Isomorphisms
- Chapter 4.3: The Matrix of a Linear Transformation
- Chapter 5: Orthogonality and Least Squares
- Chapter 5.1: Orthogonal Projections and Orthonormal Bases
- Chapter 5.2: Gram-Schmidt Process and QR Factorization
- Chapter 5.3: Orthogonal Transformations and Orthogonal Matrices
- Chapter 5.4: Least Squares and Data Fitting
- Chapter 5.5: Inner Product Spaces
- Chapter 6: Determinants
- Chapter 6.1: Introduction to Determinants
- Chapter 6.2: Properties of the Determinant
- Chapter 6.3: Geometrical Interpretations of the Determinant; Cramer's Rule
- Chapter 7: Eigenvalues and Eigenvectors
- Chapter 7.1: Dynamical Systems and Eigenvectors: An Introductory Example
- Chapter 7.2: Finding the Eigenvalues of a Matrix
- Chapter 7.3: Finding the Eigenvectors of a Matrix
- Chapter 7.4: Diagonalization
- Chapter 7.5: Complex Eigenvalues
- Chapter 7.6: Stability
- Chapter 8: Symmetric Matrices and Quadratic Forms
- Chapter 8.1: Symmetric Matrices
- Chapter 8.2: Quadratic Forms
- Chapter 8.3: Singular Values
- Chapter 9.1: An Introduction to Continuous Dynamical Systems
- Chapter 9.2: The Complex Case: Euler's Formula
- Chapter 9.3: Linear Differential Operators and Linear Differential Equations
Linear Algebra with Applications 4th Edition - Solutions by Chapter
Back substitution.
Upper triangular systems are solved in reverse order, x_n back to x_1.
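A minimal NumPy sketch of back substitution (the language and the small example system are illustrative, not from the text):

```python
import numpy as np

def back_substitution(U, b):
    """Solve Ux = b for upper triangular U, working from x_n back to x_1.
    Minimal sketch; assumes U is square with nonzero diagonal entries."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-computed components, then divide by the pivot
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([6.0, 10.0, 8.0])
print(back_substitution(U, b))   # [1. 2. 2.], and U @ [1, 2, 2] equals b
```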
Big formula for n by n determinants.
Det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A, rows in order 1, ..., n and column order given by a permutation P. Each of the n! P's has a + or − sign.
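A deliberately slow Python sketch of this n!-term expansion, checked against np.linalg.det (illustrative only):

```python
import numpy as np
from itertools import permutations

def det_big_formula(A):
    """Determinant via the n!-term permutation expansion.
    Illustrative only: O(n * n!) work, so use np.linalg.det in practice."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # sign of P = (-1)^(number of inversions)
        inversions = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        sign = -1.0 if inversions % 2 else 1.0
        term = sign
        for row, col in enumerate(perm):   # one entry from each row and each column
            term *= A[row, col]
        total += term
    return total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(det_big_formula(A), np.linalg.det(A))   # both -2.0
```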
Cyclic shift S.
Permutation with s_{21} = 1, s_{32} = 1, ..., finally s_{1n} = 1. Its eigenvalues are the nth roots e^{2πik/n} of 1; eigenvectors are the columns of the Fourier matrix F.
Ellipse (or ellipsoid) x^T Ax = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^{-1}y||^2 = y^T(AA^T)^{-1}y = 1 displayed by eigshow; axis lengths σ_i.)
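A quick NumPy check of the axis statement; the positive definite matrix A below is a made-up example, not from the text:

```python
import numpy as np

# Made-up positive definite A: the ellipse x^T A x = 1 has axes along the
# eigenvectors of A with half-lengths 1/sqrt(lambda_i).
A = np.array([[5.0, 4.0],
              [4.0, 5.0]])
lam, Q = np.linalg.eigh(A)           # eigenvalues [1, 9], orthonormal eigenvectors
half_lengths = 1.0 / np.sqrt(lam)    # [1.0, 1/3]
for length, axis in zip(half_lengths, Q.T):
    tip = length * axis              # point at the tip of each axis
    print(np.round(tip @ A @ tip, 10))   # 1.0: the tip lies on the ellipse
```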
Fast Fourier Transform (FFT).
A factorization of the Fourier matrix F_n into ℓ = log_2 n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^{-1} c can be computed with nℓ/2 multiplications. Revolutionary.
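A recursive radix-2 sketch in NumPy, assuming the length is a power of 2 and following the same sign convention as np.fft (the code is illustrative, not the book's):

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 FFT sketch: split into even/odd halves, so an n-point
    transform needs about (n/2) log2 n complex multiplications.
    Assumes len(x) is a power of 2; same sign convention as np.fft."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.rand(8)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))   # True
```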
Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use the conjugate transpose A^H in place of A^T when A is complex.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicular, from Ax = 0, with dimensions r and n − r). Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.
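A small numerical illustration using SciPy's null_space; the 3x4 rank-2 matrix below is hypothetical:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical 3x4 matrix of rank r = 2 (row 3 = row 1 + row 2).
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [2.0, 4.0, 1.0, 3.0],
              [3.0, 6.0, 1.0, 4.0]])
N  = null_space(A)      # basis of N(A),   shape (4, 4 - r)
Nt = null_space(A.T)    # basis of N(A^T), shape (3, 3 - r)
print(np.allclose(A @ N, 0))      # True: every row of A is perpendicular to N(A)
print(np.allclose(A.T @ Nt, 0))   # True: every column of A is perpendicular to N(A^T)
```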
Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^{-1}].
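A minimal sketch of this row reduction in NumPy, with partial pivoting added for stability; it assumes A is square and invertible:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A I] to [I A^{-1}]. Minimal sketch with partial pivoting;
    assumes A is square and invertible."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # swap the largest remaining pivot into place (partial pivoting)
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                    # make the pivot equal to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]   # eliminate above and below the pivot
    return M[:, n:]

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(gauss_jordan_inverse(A))   # [[ 3. -1.]  [-5.  2.]]
print(np.linalg.inv(A))          # matches
```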
Hankel matrix H.
Constant along each antidiagonal; h_{ij} depends on i + j.
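For example, with SciPy's hankel constructor (the entries are arbitrary):

```python
from scipy.linalg import hankel

# h_ij depends only on i + j, so every antidiagonal is constant.
H = hankel([1, 2, 3, 4], [4, 5, 6, 7])   # first column, last row (arbitrary values)
print(H)
# [[1 2 3 4]
#  [2 3 4 5]
#  [3 4 5 6]
#  [4 5 6 7]]
```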
Inverse matrix A^{-1}.
Square matrix with A^{-1}A = I and AA^{-1} = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^{-1}A^{-1} and (A^{-1})^T. Cofactor formula: (A^{-1})_{ij} = C_{ji}/det A.
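An illustrative NumPy sketch of the cofactor formula (practical codes invert by elimination instead):

```python
import numpy as np

def cofactor_inverse(A):
    """Inverse via (A^{-1})_ij = C_ji / det(A). Illustrative only;
    assumes det(A) != 0 (real codes use elimination, not cofactors)."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)   # cofactor C_ij
    return C.T / np.linalg.det(A)                              # transpose gives C_ji / det A

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(cofactor_inverse(A))   # [[ 3. -1.]  [-5.  2.]], same as np.linalg.inv(A)
```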
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: potential differences (voltage drops) add to zero around any closed loop.
Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^{j-1}b. Numerical methods approximate A^{-1}b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
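A sketch of building an orthonormal basis for K_j(A, b) one matrix-vector product at a time (Arnoldi-style orthogonalization; the matrix and vector below are made up):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Orthonormal basis for K_j(A, b) = span{b, Ab, ..., A^{j-1} b},
    built one matrix-vector product at a time (Arnoldi-style sketch)."""
    Q = np.zeros((len(b), j))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(1, j):
        v = A @ Q[:, k - 1]                # the only work with A is one multiplication
        v -= Q[:, :k] @ (Q[:, :k].T @ v)   # orthogonalize against earlier basis vectors
        Q[:, k] = v / np.linalg.norm(v)
    return Q

A = np.diag([1.0, 2.0, 3.0, 4.0]) + np.eye(4, k=1)   # made-up example matrix
b = np.ones(4)
Q = krylov_basis(A, b, 3)
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: the basis is orthonormal
```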
Left nullspace N(A^T).
Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.
Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.
Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
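For example, in NumPy (a made-up strictly upper triangular matrix):

```python
import numpy as np

# Strictly upper triangular, hence nilpotent: N^3 is the zero matrix.
N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])
print(np.linalg.matrix_power(N, 3))   # all zeros
print(np.linalg.eigvals(N))           # [0. 0. 0.]: the only eigenvalue is 0
```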
Normal matrix N.
If N N^T = N^T N, then N has orthonormal (complex) eigenvectors.
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
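SymPy's rref makes the pivot columns explicit; the rank-2 matrix below is a made-up example:

```python
from sympy import Matrix

# Made-up rank-2 matrix: column 2 is twice column 1, so it gets no pivot.
A = Matrix([[1, 2, 1],
            [2, 4, 3],
            [3, 6, 4]])
R, pivot_cols = A.rref()
print(pivot_cols)   # (0, 2): the pivot columns, not combinations of earlier columns
print(R)            # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(A.rank())     # 2 = number of pivot columns = dim C(A)
```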
Plane (or hyperplane) in R^n.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.
Singular Value Decomposition (SVD).
A = UΣV^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with Av_i = σ_i u_i and singular values σ_i > 0. The last columns are orthonormal bases of the nullspaces N(A^T) and N(A).
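A NumPy check of A v_i = σ_i u_i on a small rank-2 example (the matrix is hypothetical):

```python
import numpy as np

# Made-up 3x3 matrix of rank 2 (row 3 = row 1 + row 2).
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 1.0],
              [3.0, 6.0, 1.0]])
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))                         # numerical rank
for i in range(r):
    print(np.allclose(A @ Vt[i], s[i] * U[:, i]))  # True: A v_i = sigma_i u_i
print(np.round(s, 4))                              # singular values; the last is ~0
```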
Span of vectors v_1, ..., v_m.
Combinations of v_1, ..., v_m fill the space. The columns of A span C(A)!