- Chapter P: Prerequisites
- Chapter P.1: Review of Real Numbers and Their Properties
- Chapter P.2: Exponents and Radicals
- Chapter P.3: Polynomials and Special Products
- Chapter P.4: Factoring Polynomials
- Chapter P.5: Rational Expressions
- Chapter P.6: The Rectangular Coordinate System and Graphs
- Chapter 1: Equations, Inequalities, and Mathematical Modeling
- Chapter 1.1: Graphs of Equations
- Chapter 1.2: Linear Equations in One Variable
- Chapter 1.3: Modeling with Linear Equations
- Chapter 1.4: Quadratic Equations and Applications
- Chapter 1.5: Complex Numbers
- Chapter 1.6: Other Types of Equations
- Chapter 1.7: Linear Inequalities in One Variable
- Chapter 1.8: Other Types of Inequalities
- Chapter 2: Functions and Their Graphs
- Chapter 2.1: Linear Equations in Two Variables
- Chapter 2.2: Functions
- Chapter 2.3: Analyzing Graphs of Functions
- Chapter 2.4: A Library of Parent Functions
- Chapter 2.5: Transformations of Functions
- Chapter 2.6: Combinations of Functions: Composite Functions
- Chapter 2.7: Inverse Functions
- Chapter 3: Polynomial Functions
- Chapter 3.1: Quadratic Functions and Models
- Chapter 3.2: Polynomial Functions of Higher Degree
- Chapter 3.3: Polynomial and Synthetic Division
- Chapter 3.4: Zeros of Polynomial Functions
- Chapter 3.5: Mathematical Modeling and Variation
- Chapter 4: Rational Functions and Conics
- Chapter 4.1: Rational Functions and Asymptotes
- Chapter 4.2: Graphs of Rational Functions
- Chapter 4.3: Conics
- Chapter 4.4: Translations of Conics
- Chapter 5: Exponential and Logarithmic Functions
- Chapter 5.1: Exponential Functions and Their Graphs
- Chapter 5.2: Logarithmic Functions and Their Graphs
- Chapter 5.3: Properties of Logarithms
- Chapter 5.4: Exponential and Logarithmic Equations
- Chapter 5.5: Exponential and Logarithmic Models
- Chapter 6: Systems of Equations and Inequalities
- Chapter 6.1: Linear and Nonlinear Systems of Equations
- Chapter 6.2: Two-Variable Linear Systems
- Chapter 6.3: Multivariable Linear Systems
- Chapter 6.4: Partial Fractions
- Chapter 6.5: Systems of Inequalities
- Chapter 6.6: Linear Programming
- Chapter 7: Matrices and Determinants
- Chapter 7.1: Matrices and Systems of Equations
- Chapter 7.2: Operations with Matrices
- Chapter 7.3: The Inverse of a Square Matrix
- Chapter 7.4: The Determinant of a Square Matrix
- Chapter 7.5: Applications of Matrices and Determinants
- Chapter 8: Sequences, Series, and Probability
- Chapter 8.1: Sequences and Series
- Chapter 8.2: Arithmetic Sequences and Partial Sums
- Chapter 8.3: Geometric Sequences and Series
- Chapter 8.4: Mathematical Induction
- Chapter 8.5: The Binomial Theorem
- Chapter 8.6: Counting Principles
- Chapter 8.7: Probability
College Algebra 8th Edition - Solutions by Chapter
Affine transformation Tv = Av + v0 = linear transformation plus shift.
Big formula for n by n determinants.
Det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A, with rows in order 1, ..., n and column order given by a permutation P. Each of the n! P's has a + or - sign.
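The big formula can be written out directly in a few lines of pure Python; this is a minimal sketch (the example matrix is illustrative, not from the text), summing sign(P) times one entry from each row and column over all n! permutations P:

```python
# Sketch: the "big formula" det(A) = sum over all n! permutations P of
# sign(P) * a[0][P(0)] * ... * a[n-1][P(n-1)], in pure Python.
from itertools import permutations

def sign(p):
    """Sign of a permutation: +1 for an even number of inversions, -1 for odd."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_big_formula(A):
    n = len(A)
    total = 0
    for p in permutations(range(n)):   # n! terms
        term = sign(p)
        for row in range(n):           # one entry from each row and each column
            term *= A[row][p[row]]
        total += term
    return total

A = [[2, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]
print(det_big_formula(A))  # 3! = 6 terms for this 3x3 matrix; prints 4
```

This is exponential-cost and only practical for tiny matrices; elimination computes the same determinant from the pivots.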
Cayley-Hamilton Theorem.
p(λ) = det(A - λI) has p(A) = zero matrix.
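For a 2x2 matrix the characteristic polynomial is p(λ) = λ² - (trace A)λ + det A, so the statement can be checked directly; a small sketch with an illustrative matrix (not from the text):

```python
# Sketch: verify Cayley-Hamilton for a 2x2 matrix A, where
# p(A) = A^2 - (trace A)*A + (det A)*I should be the zero matrix.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def cayley_hamilton_residual(A):
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    A2 = matmul(A, A)
    I = [[1, 0], [0, 1]]
    # p(A) = A^2 - tr*A + det*I, computed entrywise
    return [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)] for i in range(2)]

A = [[3, 1], [2, 4]]
print(cayley_hamilton_residual(A))  # prints [[0, 0], [0, 0]]
```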
Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T Ax - x^T b over growing Krylov subspaces.
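A minimal pure-Python sketch of the conjugate gradient iteration (the matrix, right side, and tolerance here are illustrative assumptions, not from the text):

```python
# Sketch: conjugate gradient for positive definite Ax = b. Each step adds one
# multiplication by A, extending the Krylov subspace the iterate lives in.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(A, b, tol=1e-12, max_iter=50):
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual b - Ax, with x = 0 to start
    p = r[:]                 # first search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)                       # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
b = [1.0, 2.0]
x = conjugate_gradient(A, b)   # exact answer is (1/11, 7/11)
print(x)
```

In exact arithmetic CG reaches the solution in at most n steps; in practice it is used as an iterative method stopped early.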
Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_ii.
Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^(-1)AS = Λ = eigenvalue matrix.
Diagonalization Λ = S^(-1)AS. Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = SΛ^k S^(-1).
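The payoff of diagonalization is cheap powers: Λ^k is diagonal, so A^k = SΛ^k S^(-1) needs no repeated matrix multiplication. A sketch with a 2x2 example whose eigenpairs are known by hand (the matrix is illustrative, not from the text):

```python
# Sketch: A = [[2,1],[1,2]] has eigenvalues 3 and 1 with eigenvectors
# (1,1) and (1,-1). Compute A^3 as S * Lambda^3 * S^{-1}.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[2, 1], [1, 2]]
S = [[1, 1], [1, -1]]               # eigenvectors in the columns of S
S_inv = [[0.5, 0.5], [0.5, -0.5]]   # inverse of S, worked out by hand
k = 3
Lambda_k = [[3 ** k, 0], [0, 1 ** k]]   # Lambda^k: just raise each eigenvalue
A_k = matmul(matmul(S, Lambda_k), S_inv)
print(A_k)  # prints [[14.0, 13.0], [13.0, 14.0]], the same as A*A*A
```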
Distributive Law A(B + C) = AB + AC. Add then multiply, or multiply then add.
Factorization A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.
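Elimination records each multiplier ℓ_ik as it clears an entry; collecting them in L gives A = LU. A minimal sketch without row exchanges (the example matrix is illustrative, not from the text):

```python
# Sketch: LU factorization by elimination, no pivoting. The multiplier
# l_ik = U[i][k] / U[k][k] is stored in L; row_i of U loses l_ik * row_k.

def lu_no_pivot(A):
    n = len(A)
    U = [row[:] for row in A]      # working copy of A, reduced to U
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            l = U[i][k] / U[k][k]  # multiplier that clears U[i][k]
            L[i][k] = l
            for j in range(k, n):
                U[i][j] -= l * U[k][j]
    return L, U

A = [[2.0, 1.0], [6.0, 8.0]]
L, U = lu_no_pivot(A)
print(L)  # [[1.0, 0.0], [3.0, 1.0]]
print(U)  # [[2.0, 1.0], [0.0, 5.0]]
```

Multiplying L times U reproduces A, which is the "brings U back to A" statement above.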
Graph G. Set of n nodes connected pairwise by m edges. A complete graph has all n(n - 1)/2 edges between nodes. A tree has only n - 1 edges and no closed loops.
Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^(j-1)b. Numerical methods approximate A^(-1)b by x_j with residual b - Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
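Building the spanning vectors makes the "one multiplication by A per step" point concrete; a small sketch with illustrative values (not from the text):

```python
# Sketch: the Krylov vectors b, Ab, ..., A^{j-1} b. Each new vector comes
# from one matrix-vector product applied to the previous vector.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def krylov_basis(A, b, j):
    basis = [b[:]]
    for _ in range(j - 1):
        basis.append(matvec(A, basis[-1]))  # one multiply by A per step
    return basis

A = [[2.0, 1.0], [1.0, 2.0]]
b = [1.0, 0.0]
print(krylov_basis(A, b, 3))  # prints [[1.0, 0.0], [2.0, 1.0], [5.0, 4.0]]
```

These raw vectors tend toward the dominant eigenvector and become nearly dependent, which is why practical methods orthogonalize them (Arnoldi/Lanczos) rather than use them directly.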
Normal equation A^T Ax = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b - Ax) = 0.
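A sketch of the normal equation in action, fitting a line y = c + d*t to three points (the data points are illustrative, not from the text):

```python
# Sketch: least squares via A^T A x = A^T b for an overdetermined 3x2 system.
# Rows of A are (1, t_i); b holds the y-values.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def solve2(M, v):
    """Solve a 2x2 system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - v[0] * M[1][0]) / det]

# Fit y = c + d*t through (0, 6), (1, 0), (2, 0).
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [6.0, 0.0, 0.0]
At = transpose(A)
AtA = matmul(At, A)                                   # 2x2
Atb = [sum(r * bi for r, bi in zip(row, b)) for row in At]
x_hat = solve2(AtA, Atb)
print(x_hat)  # prints [5.0, -3.0]: the best line is y = 5 - 3t
```

The residual b - Ax here is (1, -2, 1), and its dot product with each column of A is 0, which is exactly the orthogonality statement in the entry.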
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
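A small sketch of the reduction (the example matrix is illustrative, not from the text); pivots are scaled to 1, entries above and below each pivot are cleared, and the nonzero rows that remain span the row space:

```python
# Sketch: reduced row echelon form by Gauss-Jordan elimination.

def rref(A, eps=1e-12):
    M = [row[:] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0                                        # next pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > eps), None)
        if pivot is None:
            continue                             # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]          # swap the pivot row up
        M[r] = [x / M[r][c] for x in M[r]]       # scale the pivot to 1
        for i in range(rows):
            if i != r and abs(M[i][c]) > eps:    # clear above AND below
                M[i] = [x - M[i][c] * y for x, y in zip(M[i], M[r])]
        r += 1
    return M

A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [1.0, 0.0, 1.0]]
R = rref(A)
print(R)  # two nonzero rows: rank 2, and those rows span the row space
```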
Right inverse A^+.
If A has full row rank m, then A^+ = A^T(AA^T)^(-1) has AA^+ = I_m.
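The formula can be checked directly for a short, wide matrix; a sketch with an illustrative 2x3 example (not from the text):

```python
# Sketch: right inverse A+ = A^T (A A^T)^{-1} for a 2x3 matrix of full
# row rank 2. The check is A A+ = I (the 2x2 identity).

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]
At = transpose(A)
A_plus = matmul(At, inv2(matmul(A, At)))   # A+ = A^T (A A^T)^{-1}, a 3x2 matrix
AAp = matmul(A, A_plus)
print(AAp)  # approximately [[1, 0], [0, 1]]
```

Note the order: AA^+ = I_m holds, but A^+A is only a projection (A has no left inverse when m < n).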
Row space C(A^T) = all combinations of rows of A.
Column vectors by convention.
Semidefinite matrix A.
(Positive) semidefinite: all x^T Ax ≥ 0, all λ ≥ 0; A = any R^T R.
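The A = R^T R form explains the sign condition: x^T Ax = (Rx)^T(Rx) = |Rx|² ≥ 0, with equality exactly when Rx = 0. A small sketch with an illustrative rank-1 example (not from the text):

```python
# Sketch: A = R^T R with R = [1 2] (rank 1), so A = [[1,2],[2,4]] is
# positive semidefinite but singular: x^T A x = (x1 + 2*x2)^2 >= 0.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def quad_form(A, x):
    return sum(xi * yi for xi, yi in zip(x, matvec(A, x)))

A = [[1.0, 2.0], [2.0, 4.0]]       # equals R^T R for R = [[1, 2]]
print(quad_form(A, [3.0, -1.0]))   # (3 - 2)^2 = 1.0
print(quad_form(A, [2.0, -1.0]))   # (2 - 2)^2 = 0.0: x in the nullspace
```

Because x^T Ax hits zero on a nonzero vector, A is semidefinite, not definite; with independent columns in R it would be positive definite.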
Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.
Tridiagonal matrix T: t_ij = 0 if |i - j| > 1.
T^(-1) has rank 1 above and below the diagonal.
Vector addition v + w = (v_1 + w_1, ..., v_n + w_n) = diagonal of parallelogram.