- Chapter 1: Equations, Inequalities, and Mathematical Modeling
- Chapter 1.1: GRAPHS OF EQUATIONS
- Chapter 1.2: LINEAR EQUATIONS IN ONE VARIABLE
- Chapter 1.3: MODELING WITH LINEAR EQUATIONS
- Chapter 1.4: QUADRATIC EQUATIONS AND APPLICATIONS
- Chapter 1.5: COMPLEX NUMBERS
- Chapter 1.6: OTHER TYPES OF EQUATIONS
- Chapter 1.7: LINEAR INEQUALITIES IN ONE VARIABLE
- Chapter 1.8: OTHER TYPES OF INEQUALITIES
- Chapter 2: Functions and Their Graphs
- Chapter 2.1: LINEAR EQUATIONS IN TWO VARIABLES
- Chapter 2.2: FUNCTIONS
- Chapter 2.3: ANALYZING GRAPHS OF FUNCTIONS
- Chapter 2.4: A LIBRARY OF PARENT FUNCTIONS
- Chapter 2.5: TRANSFORMATIONS OF FUNCTIONS
- Chapter 2.6: COMBINATIONS OF FUNCTIONS: COMPOSITE FUNCTIONS
- Chapter 2.7: INVERSE FUNCTIONS
- Chapter 3: Polynomial Functions
- Chapter 3.1: QUADRATIC FUNCTIONS AND MODELS
- Chapter 3.2: POLYNOMIAL FUNCTIONS OF HIGHER DEGREE
- Chapter 3.3: POLYNOMIAL AND SYNTHETIC DIVISION
- Chapter 3.4: ZEROS OF POLYNOMIAL FUNCTIONS
- Chapter 3.5: MATHEMATICAL MODELING AND VARIATION
- Chapter 4: Rational Functions and Conics
- Chapter 4.1: RATIONAL FUNCTIONS AND ASYMPTOTES
- Chapter 4.2: GRAPHS OF RATIONAL FUNCTIONS
- Chapter 4.3: CONICS
- Chapter 4.4: TRANSLATIONS OF CONICS
- Chapter 5: Exponential and Logarithmic Functions
- Chapter 5.1: Exponential Functions and Their Graphs
- Chapter 5.2: Logarithmic Functions and Their Graphs
- Chapter 5.3: Properties of Logarithms
- Chapter 5.4: Exponential and Logarithmic Equations
- Chapter 5.5: Exponential and Logarithmic Models
- Chapter 6: Systems of Equations and Inequalities
- Chapter 6.1: Linear and Nonlinear Systems of Equations
- Chapter 6.2: Two-Variable Linear Systems
- Chapter 6.3: Multivariable Linear Systems
- Chapter 6.4: Partial Fractions
- Chapter 6.5: Systems of Inequalities
- Chapter 6.6: Linear Programming
- Chapter 7: Matrices and Determinants
- Chapter 7.1: Matrices and Systems of Equations
- Chapter 7.2: Operations with Matrices
- Chapter 7.3: The Inverse of a Square Matrix
- Chapter 7.4: The Determinant of a Square Matrix
- Chapter 7.5: Applications of Matrices and Determinants
- Chapter 8: Sequences, Series, and Probability
- Chapter 8.1: Sequences and Series
- Chapter 8.2: Arithmetic Sequences and Partial Sums
- Chapter 8.3: Geometric Sequences and Series
- Chapter 8.4: Mathematical Induction
- Chapter 8.5: The Binomial Theorem
- Chapter 8.6: Counting Principles
- Chapter 8.7: Probability
- Chapter P: Prerequisites
- Chapter P.1: Review of Real Numbers and Their Properties
- Chapter P.2: Exponents and Radicals
- Chapter P.3: Polynomials and Special Products
- Chapter P.4: Factoring Polynomials
- Chapter P.5: Rational Expressions
- Chapter P.6: The Rectangular Coordinate System and Graphs
College Algebra 8th Edition - Solutions by Chapter
Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
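A minimal sketch of this definition, assuming NumPy is available; the 3-node graph and its edge list below are made up for illustration.

```python
import numpy as np

# Hypothetical 3-node directed graph with edges 0->1, 1->2, 2->0.
edges = [(0, 1), (1, 2), (2, 0)]
n = 3

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1          # a_ij = 1 when there is an edge from node i to node j

print(A)
print(np.array_equal(A, A.T))   # False: these edges do not all go both ways

# Adding the reverse edges makes the graph undirected, so A becomes symmetric.
A_undirected = ((A + A.T) > 0).astype(int)
print(np.array_equal(A_undirected, A_undirected.T))  # True: A = A^T
```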
Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Complete solution x = x_p + x_n to Ax = b.
(Particular solution x_p) + (x_n in the nullspace).
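A small illustration, assuming NumPy; the 2 by 3 matrix A and right side b are invented so that the nullspace is one-dimensional.

```python
import numpy as np

# Illustrative 2x3 system Ax = b with a one-dimensional nullspace.
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 1.0])

# One particular solution x_p (least squares returns the minimum-norm one here).
x_p, *_ = np.linalg.lstsq(A, b, rcond=None)

# Nullspace direction x_n from the SVD: the last row of V^T spans N(A) since rank is 2.
_, s, Vt = np.linalg.svd(A)
x_n = Vt[-1]

# Every x_p + c * x_n solves Ax = b.
for c in (0.0, 1.0, -2.5):
    assert np.allclose(A @ (x_p + c * x_n), b)
print(x_p, x_n)
```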
Elimination matrix = Elementary matrix E_ij.
The identity matrix with an extra -ℓ_ij in the i, j entry (i ≠ j). Then E_ij A subtracts ℓ_ij times row j of A from row i.
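A quick sketch with NumPy (the 2 by 2 matrix is illustrative), building E_21 from the identity and the multiplier ℓ_21.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])

# E_21 subtracts l_21 = 3 times row 1 from row 2 (the multiplier is 6/2).
l21 = A[1, 0] / A[0, 0]
E21 = np.eye(2)
E21[1, 0] = -l21          # identity matrix with an extra -l_21 in the (2, 1) entry

print(E21 @ A)            # row 2 becomes [0, 5]: the entry below the pivot is eliminated
```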
Free columns of A.
Columns without pivots; these are combinations of earlier columns.
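A short sketch using SymPy's rref (an assumed dependency, not part of the glossary); the matrix is chosen so its third column is the sum of the first two.

```python
from sympy import Matrix

# Illustrative matrix: column 3 = column 1 + column 2, so it has no pivot.
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 2]])

R, pivot_cols = A.rref()           # reduced row echelon form and pivot column indices
free_cols = [j for j in range(A.cols) if j not in pivot_cols]

print(pivot_cols)   # (0, 1)
print(free_cols)    # [2]  -- the free column is a combination of the earlier columns
```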
Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.
Independent vectors v_1, ..., v_k.
No combination c_1 v_1 + ... + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
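A rank check with NumPy makes the test concrete; the three vectors below are invented so that v3 = v1 + v2.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 3.0])    # v3 = v1 + v2, so the three are dependent

A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))   # 2 < 3: some nonzero c gives c1*v1 + c2*v2 + c3*v3 = 0

B = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(B))   # 2 = number of columns: only x = 0 solves Bx = 0
```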
Left nullspace N(A^T).
Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.
Network.
A directed graph that has constants c_1, ..., c_m associated with the edges.
Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^-1 and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
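A sketch with NumPy: the orthonormal columns come from a QR factorization of a random square matrix, so Q^T Q = I and the expansion v = Σ (v^T q_j) q_j can be verified numerically.

```python
import numpy as np

# Orthonormal columns from the QR factorization of a random square matrix.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

print(np.allclose(Q.T @ Q, np.eye(4)))     # Q^T Q = I
print(np.allclose(Q.T, np.linalg.inv(Q)))  # square case: Q^T = Q^-1

# The columns are an orthonormal basis: every v = sum of (v^T q_j) q_j.
v = rng.standard_normal(4)
recovered = sum((v @ Q[:, j]) * Q[:, j] for j in range(4))
print(np.allclose(recovered, v))
```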
Outer product uv^T.
Column times row = rank one matrix.
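A two-line NumPy illustration with made-up vectors u and v.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])

A = np.outer(u, v)                 # column times row: a 3x2 matrix
print(A)
print(np.linalg.matrix_rank(A))    # 1: every column is a multiple of u
```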
Plane (or hyperplane) in R^n.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.
Projection p = a(a^T b / a^T a) onto the line through a.
P = aa^T / a^T a has rank 1.
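A NumPy sketch with illustrative vectors a and b, checking both formulas.

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 3.0])

# p = a (a^T b / a^T a): the projection of b onto the line through a.
p = a * (a @ b) / (a @ a)

# P = a a^T / a^T a is the rank-one projection matrix for that line.
P = np.outer(a, a) / (a @ a)
print(np.allclose(P @ b, p))         # True
print(np.linalg.matrix_rank(P))      # 1
print(np.isclose(a @ (b - p), 0.0))  # True: the error b - p is perpendicular to a
```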
Rank one matrix A = uv^T ≠ 0.
Column and row spaces = lines cu and cv.
Rayleigh quotient q(x) = x^T Ax / x^T x for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
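A NumPy sketch on a small symmetric matrix: random vectors stay inside [λ_min, λ_max], and the eigenvectors hit the endpoints.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric

def q(x):
    return (x @ A @ x) / (x @ x)    # Rayleigh quotient x^T A x / x^T x

lam, X = np.linalg.eigh(A)          # eigenvalues in increasing order, eigenvectors as columns

# q(x) stays between the smallest and largest eigenvalue for every nonzero x.
rng = np.random.default_rng(1)
samples = [q(rng.standard_normal(2)) for _ in range(1000)]
print(lam[0] <= min(samples) and max(samples) <= lam[-1])   # True

# The extremes are reached at the eigenvectors for lambda_min and lambda_max.
print(np.isclose(q(X[:, 0]), lam[0]), np.isclose(q(X[:, -1]), lam[-1]))
```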
Right inverse A^+.
If A has full row rank m, then A^+ = A^T(AA^T)^-1 has AA^+ = I_m.
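A NumPy check on an invented full-row-rank matrix.

```python
import numpy as np

# A has full row rank m = 2 (more columns than rows).
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
m = A.shape[0]

A_plus = A.T @ np.linalg.inv(A @ A.T)     # A^+ = A^T (A A^T)^-1
print(np.allclose(A @ A_plus, np.eye(m))) # True: A A^+ = I_m
```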
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
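A tiny linear program solved with scipy.optimize.linprog (an assumed dependency); whether its backend runs a simplex variant depends on the SciPy version, but the minimum still lands at a corner of the feasible set.

```python
import numpy as np
from scipy.optimize import linprog

# Minimize cost c^T x subject to Ax = b and x >= 0 (illustrative numbers).
c = np.array([1.0, 2.0, 0.0])
A_eq = np.array([[1.0, 1.0, 1.0]])
b_eq = np.array([4.0])

# linprog applies the bounds x >= 0 by default.
res = linprog(c, A_eq=A_eq, b_eq=b_eq)
print(res.x, res.fun)   # optimum x* = [0, 0, 4] with cost 0: a corner of the feasible set
```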
Solvable system Ax = b.
The right side b is in the column space of A.
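A NumPy test of solvability via ranks: b is in the column space exactly when appending it to A does not raise the rank.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1: the column space is the line through (1, 2)

def solvable(A, b):
    # Ax = b is solvable exactly when b adds nothing new to the column space.
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(solvable(A, np.array([3.0, 6.0])))   # True:  (3, 6) is on the line through (1, 2)
print(solvable(A, np.array([3.0, 5.0])))   # False: (3, 5) is not in the column space
```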
Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.
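A NumPy sketch on an invented indefinite 2 by 2 matrix, computing both factorizations and comparing the signs of the pivots with the signs of the eigenvalues.

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [3.0, 2.0]])          # symmetric but indefinite

# A = L D L^T from one step of symmetric elimination (no row exchanges needed here).
l21 = A[1, 0] / A[0, 0]
L = np.array([[1.0, 0.0],
              [l21, 1.0]])
D = np.diag([A[0, 0], A[1, 1] - l21 * A[0, 1]])
print(np.allclose(L @ D @ L.T, A))              # True

# A = Q Lambda Q^T from the spectral theorem.
lam, Q = np.linalg.eigh(A)
print(np.allclose(Q @ np.diag(lam) @ Q.T, A))   # True

# Law of inertia: the pivots in D and the eigenvalues in Lambda have the same signs.
print(sorted(np.sign(np.diag(D))), sorted(np.sign(lam)))   # one negative, one positive each
```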
Transpose matrix A^T.
Entries (A^T)_ij = A_ji. A^T is n by m, A^T A is square, symmetric, positive semidefinite. The transposes of AB and A^-1 are B^T A^T and (A^T)^-1.