- Chapter 1: Equations and Inequalities
- Chapter 1.1: Graphs and Graphing Utilities
- Chapter 1.2: Linear Equations and Rational Equations
- Chapter 1.3: Models and Applications
- Chapter 1.4: Complex Numbers
- Chapter 1.5: Quadratic Equations
- Chapter 1.6: Other Types of Equations
- Chapter 1.7: Linear Inequalities and Absolute Value Inequalities
- Chapter 2: Functions and Graphs
- Chapter 2.1: Basics of Functions and Their Graphs
- Chapter 2.2: More on Functions and Their Graphs
- Chapter 2.3: Linear Functions and Slope
- Chapter 2.4: More on Slope
- Chapter 2.5: Transformations of Functions
- Chapter 2.6: Combinations of Functions; Composite Functions
- Chapter 2.7: Inverse Functions
- Chapter 2.8: Distance and Midpoint Formulas; Circles
- Chapter 3: Polynomial and Rational Functions
- Chapter 3.1: Quadratic Functions
- Chapter 3.2: Polynomial Functions and Their Graphs
- Chapter 3.3: Dividing Polynomials; Remainder and Factor Theorems
- Chapter 3.4: Zeros of Polynomial Functions
- Chapter 3.5: Rational Functions and Their Graphs
- Chapter 3.6: Polynomial and Rational Inequalities
- Chapter 3.7: Modeling Using Variation
- Chapter 4: Exponential and Logarithmic Functions
- Chapter 4.1: Exponential Functions
- Chapter 4.2: Logarithmic Functions
- Chapter 4.3: Properties of Logarithms
- Chapter 4.4: Exponential and Logarithmic Equations
- Chapter 4.5: Exponential Growth and Decay; Modeling Data
- Chapter 5: Systems of Equations and Inequalities
- Chapter 5.1: Systems of Linear Equations in Two Variables
- Chapter 5.2: Systems of Linear Equations in Three Variables
- Chapter 5.3: Partial Fractions
- Chapter 5.4: Systems of Nonlinear Equations in Two Variables
- Chapter 5.5: Systems of Inequalities
- Chapter 5.6: Linear Programming
- Chapter 6: Matrices and Determinants
- Chapter 6.1: Matrix Solutions to Linear Systems
- Chapter 6.2: Inconsistent and Dependent Systems and Their Applications
- Chapter 6.3: Matrix Operations and Their Applications
- Chapter 6.4: Multiplicative Inverses of Matrices and Matrix Equations
- Chapter 6.5: Determinants and Cramer's Rule
- Chapter 7: Conic Sections
- Chapter 7.1: The Ellipse
- Chapter 7.2: The Hyperbola
- Chapter 7.3: The Parabola
- Chapter 8: Sequences, Induction, and Probability
- Chapter 8.1: Sequences and Summation Notation
- Chapter 8.2: Arithmetic Sequences
- Chapter 8.3: Geometric Sequences and Series
- Chapter 8.4: Mathematical Induction
- Chapter 8.5: The Binomial Theorem
- Chapter 8.6: Counting Principles, Permutations, and Combinations
- Chapter 8.7: Probability
- Chapter P: Prerequisites: Fundamental Concepts of Algebra
- Chapter P.1: Algebraic Expressions, Mathematical Models, and Real Numbers
- Chapter P.2: Exponents and Scientific Notation
- Chapter P.3: Radicals and Rational Exponents
- Chapter P.4: Polynomials
- Chapter P.5: Factoring Polynomials
- Chapter P.6: Rational Expressions
College Algebra 6th Edition - Solutions by Chapter
Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit: the column cuts in A must match the row cuts in B.
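A quick NumPy sketch (my own illustration, not from the text): cut A between columns 2 and 3, cut B between the matching rows, and check that the blockwise product equals the full product.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 5))

# Column cut in A at 2 must match the row cut in B.
A1, A2 = A[:, :2], A[:, 2:]
B1, B2 = B[:2, :], B[2:, :]

# Block multiplication: AB = A1 B1 + A2 B2 when the cuts match.
print(np.allclose(A1 @ B1 + A2 @ B2, A @ B))  # True
```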
Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c_0 I + c_1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors are in the Fourier matrix F.
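A short SciPy/NumPy sketch (values chosen by me) of the facts above: Cx equals the circular convolution c * x, and each Fourier mode is an eigenvector of C with eigenvalue given by the FFT of c.

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([2.0, 1.0, 0.0, 3.0])
x = np.array([1.0, -1.0, 2.0, 0.5])
C = circulant(c)                     # first column is c; diagonals wrap around

# Cx = circular convolution c * x, computable via the FFT.
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
print(np.allclose(C @ x, conv))      # True

# The k-th Fourier mode is an eigenvector with eigenvalue fft(c)[k].
n, k = len(c), 1
f = np.exp(2j * np.pi * np.arange(n) * k / n)
print(np.allclose(C @ f, np.fft.fft(c)[k] * f))  # True
```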
Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^{-1}AS = Λ = eigenvalue matrix.
Diagonalization Λ = S^{-1}AS.
Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^{-1}.
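A NumPy sketch (matrix chosen by me) of diagonalization: numpy.linalg.eig returns the eigenvector matrix S and the eigenvalues, so both identities can be checked directly.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2, so diagonalizable
lam, S = np.linalg.eig(A)           # eigenvalues and eigenvector columns

# S^-1 A S = Lambda (the eigenvalue matrix).
print(np.allclose(np.linalg.inv(S) @ A @ S, np.diag(lam)))  # True

# A^k = S Lambda^k S^-1.
k = 5
print(np.allclose(np.linalg.matrix_power(A, k),
                  S @ np.diag(lam**k) @ np.linalg.inv(S)))  # True
```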
Dimension of vector space
dim(V) = number of vectors in any basis for V.
Distributive Law.
A(B + C) = AB + AC. Add then multiply, or multiply then add.
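A one-line NumPy check (random matrices of my choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
C = rng.standard_normal((4, 2))

# Add then multiply equals multiply then add.
print(np.allclose(A @ (B + C), A @ B + A @ C))  # True
```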
Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with the multipliers ℓ_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
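A SciPy sketch (example matrix is mine) of PA = LU. Note that scipy.linalg.lu returns P, L, U with A = PLU, so P^T A = LU matches the glossary's form.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 1.0, 1.0]])     # needs a row exchange (zero in the corner)

P, L, U = lu(A)                     # SciPy's convention: A = P L U
print(np.allclose(P.T @ A, L @ U))  # True: PA = LU with the permutation P.T
print(np.allclose(A, P @ L @ U))    # True
```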
Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use A^H in place of A^T for complex A.
Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicularity comes from Ax = 0), with dimensions r and n - r. Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.
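A SciPy sketch (rank-1 example of my choosing): scipy.linalg.null_space returns an orthonormal basis for N(A), every row of A is orthogonal to it, and the dimensions r and n - r come out as stated.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # rank r = 1, n = 3

N = null_space(A)                   # orthonormal basis for the nullspace
print(N.shape[1])                   # 2 = n - r
print(np.allclose(A @ N, 0))        # True: row space is perpendicular to N(A)
```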
Jordan form J = M^{-1}AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k, where N_k has 1's on the first superdiagonal. Each block has one eigenvalue λ_k and one eigenvector.
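A SymPy sketch (defective matrix chosen for illustration): Matrix.jordan_form returns M and J with A = MJM^{-1}; here a single 2x2 block carries the repeated eigenvalue 2 and its one eigenvector.

```python
from sympy import Matrix

A = Matrix([[2, 1],
            [0, 2]])                # only one eigenvector for eigenvalue 2
M, J = A.jordan_form()              # A = M J M^-1
print(J)                            # Matrix([[2, 1], [0, 2]]): one Jordan block
print(A == M * J * M.inv())         # True
```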
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A - λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
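A SymPy sketch using the same defective matrix as above (my example): eigenvals counts the roots of det(A - λI) = 0, giving AM, and eigenvects lists the independent eigenvectors, giving GM.

```python
from sympy import Matrix

A = Matrix([[2, 1],
            [0, 2]])
print(A.eigenvals())                # {2: 2} -> AM = 2
for lam, am, vecs in A.eigenvects():
    print(lam, am, len(vecs))       # 2 2 1 -> GM = 1 < AM (defective)
```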
Network.
A directed graph that has constants c_1, ..., c_m associated with the edges.
Norm ||A||.
The "ℓ^2 norm" of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x||, ||AB|| ≤ ||A|| ||B||, and ||A + B|| ≤ ||A|| + ||B||. The Frobenius norm has ||A||_F^2 = ΣΣ a_ij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |a_ij|.
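A NumPy sketch (entries chosen by me) of the four norms above; numpy.linalg.norm exposes each one through its order argument.

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

two = np.linalg.norm(A, 2)          # l2 norm = sigma_max
fro = np.linalg.norm(A, 'fro')      # Frobenius: sqrt of the sum of a_ij^2
one = np.linalg.norm(A, 1)          # largest absolute column sum
inf = np.linalg.norm(A, np.inf)     # largest absolute row sum

# The l2 norm really is the largest singular value.
print(np.isclose(two, np.linalg.svd(A, compute_uv=False)[0]))  # True
print(one, inf)                     # 6.0 7.0 (column sums 4, 6; row sums 3, 7)
```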
Orthogonal subspaces.
Every v in V is orthogonal to every w in W.
Particular solution x_p.
Any solution to Ax = b; often x_p has free variables = 0.
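A SymPy sketch (underdetermined system of my choosing): linsolve parametrizes all solutions by the free variables, so setting those parameters to zero yields the particular solution x_p.

```python
from sympy import Matrix, linsolve, symbols

A = Matrix([[1, 2, 1],
            [2, 4, 0]])
b = Matrix([3, 2])

sol, = linsolve((A, b), symbols('x1 x2 x3'))
x_p = sol.subs({t: 0 for t in sol.free_symbols})  # free variables = 0
print(x_p)                          # (1, 0, 2)
print(A * Matrix(x_p) == b)         # True: x_p solves Ax = b
```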
Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
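A sketch (test matrix is mine) of the equivalent checks: positive eigenvalues, a positive diagonal in A = LDL^T via scipy.linalg.ldl, and a successful Cholesky factorization.

```python
import numpy as np
from scipy.linalg import ldl

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])         # symmetric

print(np.linalg.eigvalsh(A))        # [1. 3.]: positive eigenvalues

L, D, perm = ldl(A)                 # A = L D L^T
print(np.diag(D))                   # [2. 1.5]: positive pivots on diag(D)

C = np.linalg.cholesky(A)           # succeeds only for positive definite A
print(np.allclose(C @ C.T, A))      # True
```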
Rank r(A).
r(A) = number of pivots = dimension of column space = dimension of row space.
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
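A SymPy sketch (matrix of my choosing) tying rank and rref together: Matrix.rref returns R with pivots = 1 and zeros above and below, and the number of pivot columns equals the rank.

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])             # second row is twice the first

R, pivots = A.rref()
print(R)                            # Matrix([[1, 0, 1], [0, 1, 1], [0, 0, 0]])
print(pivots)                       # (0, 1): two pivot columns
print(A.rank() == len(pivots))      # True: rank = number of pivots
```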
Saddle point of f(x_1, ..., x_n).
A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂x_i ∂x_j = Hessian matrix) is indefinite.
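A NumPy sketch with the classic example (mine, for illustration) f(x, y) = x^2 - y^2: the gradient vanishes at the origin while the Hessian has eigenvalues of both signs, so the origin is a saddle point.

```python
import numpy as np

# Hessian of f(x, y) = x^2 - y^2 (constant in this case).
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

print(np.linalg.eigvalsh(H))        # [-2. 2.]: mixed signs -> indefinite
```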
Vector addition.
v + w = (v_1 + w_1, ..., v_n + w_n) = diagonal of parallelogram.
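A tiny NumPy check (vectors of my choosing) of componentwise addition:

```python
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])
print(v + w)    # [4. 1.]: the diagonal of the parallelogram on v and w
```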