- Chapter 1: Equations and Inequalities
- Chapter 1.1: Graphs and Graphing Utilities
- Chapter 1.2: Linear Equations and Rational Equations
- Chapter 1.3: Models and Applications
- Chapter 1.4: Complex Numbers
- Chapter 1.5: Quadratic Equations
- Chapter 1.6: Other Types of Equations
- Chapter 1.7: Linear Inequalities and Absolute Value Inequalities
- Chapter 2: Functions and Graphs
- Chapter 2.1: Basics of Functions and Their Graphs
- Chapter 2.2: More on Functions and Their Graphs
- Chapter 2.3: Linear Functions and Slope
- Chapter 2.4: More on Slope
- Chapter 2.5: Transformations of Functions
- Chapter 2.6: Combinations of Functions; Composite Functions
- Chapter 2.7: Inverse Functions
- Chapter 2.8: Distance and Midpoint Formulas; Circles
- Chapter 3: Polynomial and Rational Functions
- Chapter 3.1: Quadratic Functions
- Chapter 3.2: Polynomial Functions and Their Graphs
- Chapter 3.3: Dividing Polynomials; Remainder and Factor Theorems
- Chapter 3.4: Zeros of Polynomial Functions
- Chapter 3.5: Rational Functions and Their Graphs
- Chapter 3.6: Polynomial and Rational Inequalities
- Chapter 3.7: Modeling Using Variation
- Chapter 4: Exponential and Logarithmic Functions
- Chapter 4.1: Exponential Functions
- Chapter 4.2: Logarithmic Functions
- Chapter 4.3: Properties of Logarithms
- Chapter 4.4: Exponential and Logarithmic Equations
- Chapter 4.5: Exponential Growth and Decay; Modeling Data
- Chapter 5: Systems of Equations and Inequalities
- Chapter 5.1: Systems of Linear Equations in Two Variables
- Chapter 5.2: Systems of Linear Equations in Three Variables
- Chapter 5.3: Partial Fractions
- Chapter 5.4: Systems of Nonlinear Equations in Two Variables
- Chapter 5.5: Systems of Inequalities
- Chapter 5.6: Linear Programming
- Chapter 6: Matrices and Determinants
- Chapter 6.1: Matrix Solutions to Linear Systems
- Chapter 6.2: Inconsistent and Dependent Systems and Their Applications
- Chapter 6.3: Matrix Operations and Their Applications
- Chapter 6.4: Multiplicative Inverses of Matrices and Matrix Equations
- Chapter 6.5: Determinants and Cramer's Rule
- Chapter 7: Conic Sections
- Chapter 7.1: The Ellipse
- Chapter 7.2: The Hyperbola
- Chapter 7.3: The Parabola
- Chapter 8: Sequences, Induction, and Probability
- Chapter 8.1: Sequences and Summation Notation
- Chapter 8.2: Arithmetic Sequences
- Chapter 8.3: Geometric Sequences and Series
- Chapter 8.4: Mathematical Induction
- Chapter 8.5: The Binomial Theorem
- Chapter 8.6: Counting Principles, Permutations, and Combinations
- Chapter 8.7: Probability
- Chapter P: Prerequisites: Fundamental Concepts of Algebra
- Chapter P.1: Algebraic Expressions, Mathematical Models, and Real Numbers
- Chapter P.2: Exponents and Scientific Notation
- Chapter P.3: Radicals and Rational Exponents
- Chapter P.4: Polynomials
- Chapter P.5: Factoring Polynomials
- Chapter P.6: Rational Expressions
College Algebra 7th Edition - Solutions by Chapter
Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
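A minimal numpy sketch (not part of the glossary; the 4-node edge list is invented) that builds this matrix from a list of edges:

```python
import numpy as np

# Hypothetical directed graph on 4 nodes, edges i -> j.
edges = [(0, 1), (1, 2), (2, 0), (0, 3)]

n = 4
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1                      # a_ij = 1 for an edge from node i to node j

print(A)
print("undirected:", np.array_equal(A, A.T))   # A = A^T only if edges go both ways
```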
Basis for V.
Independent vectors v_1, ..., v_d whose linear combinations give each vector in V as v = c_1 v_1 + ... + c_d v_d. V has many bases; each basis gives unique c's.
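A small numpy sketch (vectors invented for illustration): with a basis of R^2 as the columns of B, the unique coefficients c in v = c_1 v_1 + c_2 v_2 come from solving Bc = v.

```python
import numpy as np

# Columns of B are a (hypothetical) basis v_1, v_2 of R^2.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
v = np.array([3.0, 4.0])

c = np.linalg.solve(B, v)   # unique coordinates: v = c_1 v_1 + c_2 v_2
print(c)                    # [1. 2.]
print(B @ c)                # recovers v
```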
Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c_0 I + c_1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors in the Fourier matrix F.
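A scipy sketch (first column and x are made-up values): build C from its first column and check that Cx equals the cyclic convolution c * x, computed here with the FFT.

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([1.0, 2.0, 0.0, 3.0])    # c_0, ..., c_{n-1}
x = np.array([4.0, 1.0, 2.0, 5.0])

C = circulant(c)                      # constant diagonals that wrap around
direct = C @ x                        # Cx
via_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))  # cyclic c * x
print(np.allclose(direct, via_fft))   # True
```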
Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_ii.
Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓ_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
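A scipy sketch of the PA = LU form (the 3 by 3 matrix is invented); note that scipy.linalg.lu returns the factorization as A = PLU, so its P is the transpose of the row-exchange matrix above.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 1.0]])

P, L, U = lu(A)                        # scipy's convention: A = P @ L @ U
print(np.allclose(A, P @ L @ U))       # True
print(np.allclose(P.T @ A, L @ U))     # PA = LU, row exchanges in P.T
print(L)                               # multipliers below the unit diagonal
print(U)                               # upper triangular
```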
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: potential differences (voltage drops) add to zero around any closed loop.
Matrix multiplication AB.
The i, j entry of AB is (row i of A) · (column j of B) = Σ a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
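A numpy sketch (random matrices) checking two of these equivalent definitions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
AB = A @ B

# By columns: column j of AB = A times column j of B.
print(np.allclose(AB[:, 1], A @ B[:, 1]))                        # True

# Columns times rows: AB = sum over k of (column k of A)(row k of B).
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(4))
print(np.allclose(AB, outer_sum))                                # True
```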
Network.
A directed graph that has constants c_1, ..., c_m associated with the edges.
Norm of a matrix ‖A‖.
The "ℓ^2 norm" of A is the maximum ratio ‖Ax‖/‖x‖ = σ_max. Then ‖Ax‖ ≤ ‖A‖ ‖x‖ and ‖AB‖ ≤ ‖A‖ ‖B‖ and ‖A + B‖ ≤ ‖A‖ + ‖B‖. Frobenius norm: ‖A‖_F^2 = Σ Σ a_ij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |a_ij|.
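A numpy sketch (matrix invented) computing each of these norms with numpy.linalg.norm:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

print(np.linalg.norm(A, 2))        # l^2 norm = sigma_max (largest singular value)
print(np.linalg.norm(A, 'fro'))    # Frobenius norm
print(np.linalg.norm(A, 1))        # largest absolute column sum
print(np.linalg.norm(A, np.inf))   # largest absolute row sum

# The l^2 norm equals the top singular value:
print(np.isclose(np.linalg.norm(A, 2),
                 np.linalg.svd(A, compute_uv=False)[0]))          # True
```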
Pascal matrix P_S = pascal(n).
The symmetric matrix with binomial entries (i+j-2 choose i-1). P_S = P_L P_U all contain Pascal's triangle, with det = 1 (see Pascal in the index).
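A scipy sketch (n = 4 is an arbitrary choice) checking P_S = P_L P_U and det = 1:

```python
import numpy as np
from scipy.linalg import pascal

n = 4
PS = pascal(n, kind='symmetric')
PL = pascal(n, kind='lower')
PU = pascal(n, kind='upper')

print(np.array_equal(PS, PL @ PU))                 # True: P_S = P_L P_U
print(round(np.linalg.det(PS.astype(float))))      # 1
```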
Pseudoinverse A^+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and A A^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).
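A numpy sketch (a made-up rank-1 matrix) checking that A^+ A and A A^+ are the two projections:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # rank 1

Aplus = np.linalg.pinv(A)

P_row = Aplus @ A                    # projects onto the row space
P_col = A @ Aplus                    # projects onto the column space

# Projections are symmetric and idempotent (P^2 = P).
print(np.allclose(P_row, P_row.T), np.allclose(P_row @ P_row, P_row))
print(np.allclose(P_col, P_col.T), np.allclose(P_col @ P_col, P_col))
print(np.linalg.matrix_rank(Aplus) == np.linalg.matrix_rank(A))   # True
```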
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
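A sympy sketch (matrix invented) computing R = rref(A); sympy also reports the pivot columns:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 4],
               [1, 2, 3, 5]])

R, pivot_cols = A.rref()
print(R)             # pivots = 1, zeros above and below
print(pivot_cols)    # (0, 2): indices of the pivot columns
```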
Row space C(A^T).
All combinations of the rows of A, written as column vectors by convention.
Schur complement S = D - C A^{-1} B.
Appears in block elimination on [A B; C D].
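A numpy sketch of block elimination (blocks invented): one block row operation zeroes the lower-left block and leaves S in the corner.

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 1.0]])    # invertible upper-left block
B = np.array([[1.0], [3.0]])
C = np.array([[4.0, 5.0]])
D = np.array([[6.0]])

S = D - C @ np.linalg.inv(A) @ B          # Schur complement

M = np.block([[A, B], [C, D]])
E = np.block([[np.eye(2), np.zeros((2, 1))],
              [-C @ np.linalg.inv(A), np.eye(1)]])
print(E @ M)        # lower-left block eliminated; lower-right entry is S
print(S)            # [[-11.]]
```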
Similar matrices A and B.
Every B = M^{-1} A M has the same eigenvalues as A.
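A numpy sketch (random A and M) confirming the matching eigenvalues numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
M = rng.standard_normal((3, 3))      # almost surely invertible

B = np.linalg.inv(M) @ A @ M
print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                  np.sort_complex(np.linalg.eigvals(B))))   # True
```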
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
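A scipy sketch (costs and constraints invented) of a tiny linear program; scipy.optimize.linprog minimizes c · x subject to Ax = b and x ≥ 0, and the optimum lands at a corner:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 0.0])          # cost vector
A_eq = np.array([[1.0, 1.0, 1.0]])     # constraint: x_1 + x_2 + x_3 = 1
b_eq = np.array([1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x)     # optimal corner: [0. 0. 1.]
print(res.fun)   # minimum cost: 0.0
```

(With a recent scipy the default HiGHS solver is not the classical simplex, but it reaches the same corner optimum.)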
Solvable system Ax = b.
The right side b is in the column space of A.
Standard basis for R^n.
Columns of the n by n identity matrix (written i, j, k in R^3).
Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.
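A scipy sketch (symmetric indefinite example invented) comparing the two factorizations: the signs of the pivots in D match the signs of the eigenvalues in Λ (the law of inertia).

```python
import numpy as np
from scipy.linalg import ldl

A = np.array([[2.0, 1.0],
              [1.0, -3.0]])            # symmetric, indefinite

L, D, perm = ldl(A)                    # A = L D L^T; D comes out diagonal here
eigenvalues = np.linalg.eigvalsh(A)    # diagonal of Lambda in A = Q Lambda Q^T

print(np.sort(np.sign(np.diag(D))))    # [-1.  1.]
print(np.sort(np.sign(eigenvalues)))   # [-1.  1.] -- same signs
```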
Triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖.
For matrix norms, ‖A + B‖ ≤ ‖A‖ + ‖B‖.