 Chapter 1: Equations and Inequalities
 Chapter 1.1: Graphs and Graphing Utilities
 Chapter 1.2: Linear Equations and Rational Equations
 Chapter 1.3: Models and Applications
 Chapter 1.4: Complex Numbers
 Chapter 1.5: Quadratic Equations
 Chapter 1.6: Other Types of Equations
 Chapter 1.7: Linear Inequalities and Absolute Value Inequalities
 Chapter 2: Functions and Graphs
 Chapter 2.1: Basics of Functions and Their Graphs
 Chapter 2.2: More on Functions and Their Graphs
 Chapter 2.3: Linear Functions and Slope
 Chapter 2.4: More on Slope
 Chapter 2.5: Transformations of Functions
 Chapter 2.6: Combinations of Functions; Composite Functions
 Chapter 2.7: Inverse Functions
 Chapter 2.8: Distance and Midpoint Formulas; Circles
 Chapter 3: Polynomial and Rational Functions
 Chapter 3.1: Quadratic Functions
 Chapter 3.2: Polynomial Functions and Their Graphs
 Chapter 3.3: Dividing Polynomials; Remainder and Factor Theorems
 Chapter 3.4: Zeros of Polynomial Functions
 Chapter 3.5: Rational Functions and Their Graphs
 Chapter 3.6: Polynomial and Rational Inequalities
 Chapter 3.7: Modeling Using Variation
 Chapter 4: Exponential and Logarithmic Functions
 Chapter 4.1: Exponential Functions
 Chapter 4.2: Logarithmic Functions
 Chapter 4.3: Properties of Logarithms
 Chapter 4.4: Exponential and Logarithmic Equations
 Chapter 4.5: Exponential Growth and Decay; Modeling Data
 Chapter 5: Systems of Equations and Inequalities
 Chapter 5.1: Systems of Linear Equations in Two Variables
 Chapter 5.2: Systems of Linear Equations in Three Variables
 Chapter 5.3: Partial Fractions
 Chapter 5.4: Systems of Nonlinear Equations in Two Variables
 Chapter 5.5: Systems of Inequalities
 Chapter 5.6: Linear Programming
 Chapter 6: Matrices and Determinants
 Chapter 6.1: Matrix Solutions to Linear Systems
 Chapter 6.2: Inconsistent and Dependent Systems and Their Applications
 Chapter 6.3: Matrix Operations and Their Applications
 Chapter 6.4: Multiplicative Inverses of Matrices and Matrix Equations
 Chapter 6.5: Determinants and Cramer's Rule
 Chapter 7: Conic Sections
 Chapter 7.1: The Ellipse
 Chapter 7.2: The Hyperbola
 Chapter 7.3: The Parabola
 Chapter 8: Sequences, Induction, and Probability
 Chapter 8.1: Sequences and Summation Notation
 Chapter 8.2: Arithmetic Sequences
 Chapter 8.3: Geometric Sequences and Series
 Chapter 8.4: Mathematical Induction
 Chapter 8.5: The Binomial Theorem
 Chapter 8.6: Counting Principles, Permutations, and Combinations
 Chapter 8.7: Probability
 Chapter P: Prerequisites: Fundamental Concepts of Algebra
 Chapter P.1: Algebraic Expressions, Mathematical Models, and Real Numbers
 Chapter P.2: Exponents and Scientific Notation
 Chapter P.3: Radicals and Rational Exponents
 Chapter P.4: Polynomials
 Chapter P.5: Factoring Polynomials
 Chapter P.6: Rational Expressions
College Algebra, 7th Edition: Solutions by Chapter
Full solutions for College Algebra, 7th Edition
ISBN: 9780134469164
Get Full Solutions. This textbook survival guide was created for the textbook College Algebra, edition 7. Since problems from 63 chapters in College Algebra have been answered, more than 8,534 students have viewed full step-by-step answers. College Algebra was written by Patricia and is associated with the ISBN 9780134469164. The full step-by-step solutions to problems in College Algebra were answered by Patricia, our top Math solution expert, on 03/08/18, 08:30 PM. This expansive textbook survival guide covers 63 chapters.

Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.

Back substitution.
Upper triangular systems are solved in reverse order, x_n back to x_1.
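A minimal NumPy sketch of this idea (the function name `back_substitute` and the 2-by-2 example are illustrative, not from the text):

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, working from x_n back to x_1."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract the contributions of the already-solved later unknowns,
        # then divide by the diagonal pivot U[i, i].
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([5.0, 6.0])
x = back_substitute(U, b)   # x_2 = 2 first, then x_1 = 1.5
```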

Basis for V.
Independent vectors v_1, ..., v_d whose linear combinations give each vector in V as v = c_1 v_1 + ... + c_d v_d. V has many bases, and each basis gives unique c's.

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.
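A small NumPy check (the matrices are made up for illustration): two diagonalizable matrices built from the same eigenvector matrix S necessarily commute.

```python
import numpy as np

S = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # columns are the shared eigenvectors
Sinv = np.linalg.inv(S)

A = S @ np.diag([2.0, 5.0]) @ Sinv    # different eigenvalues,
B = S @ np.diag([-1.0, 4.0]) @ Sinv   # same eigenvectors

assert np.allclose(A @ B, B @ A)      # so AB = BA
```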

Complete solution x = x_p + x_n to Ax = b.
(Particular solution x_p) + (x_n in the nullspace).
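Sketched in NumPy for one assumed equation in two unknowns: `lstsq` supplies a particular solution and the SVD supplies the nullspace direction, so every x_p + c x_n solves Ax = b.

```python
import numpy as np

A = np.array([[1.0, 2.0]])     # one equation: x1 + 2*x2 = 3
b = np.array([3.0])

xp, *_ = np.linalg.lstsq(A, b, rcond=None)   # a particular solution x_p
_, _, Vt = np.linalg.svd(A)
xn = Vt[1]                                   # nullspace direction: A @ xn = 0

# Every x_p + c * x_n solves Ax = b, for any scalar c.
for c in (0.0, 1.0, -2.5):
    assert np.allclose(A @ (xp + c * xn), b)
```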

Fast Fourier Transform (FFT).
A factorization of the Fourier matrix F_n into ℓ = log_2 n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^-1 c can be computed with nℓ/2 multiplications. Revolutionary.
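A NumPy comparison (assumed illustration): the dense Fourier matrix costs n^2 multiplications per product, while `np.fft.fft` reaches the same vector with O(n log n) work.

```python
import numpy as np

n = 8
x = np.random.default_rng(0).standard_normal(n)

# Dense n-by-n Fourier matrix: F[j, k] = w^(j*k) with w = e^(-2*pi*i/n).
w = np.exp(-2j * np.pi / n)
F = w ** np.outer(np.arange(n), np.arange(n))

# Same transform, far fewer multiplications inside the FFT.
assert np.allclose(F @ x, np.fft.fft(x))
```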

Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use A^H (the conjugate transpose) for complex A.

Free columns of A.
Columns without pivots; these are combinations of earlier columns.

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
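A NumPy sketch with an assumed 3-by-2 example. Note that `np.linalg.qr` (Householder-based) does not promise the diag(R) > 0 convention, but the other properties hold:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])             # independent columns

Q, R = np.linalg.qr(A)

assert np.allclose(Q.T @ Q, np.eye(2))    # orthonormal columns in Q
assert np.allclose(Q @ R, A)              # A = QR
assert np.allclose(np.tril(R, -1), 0.0)   # R is upper triangular
```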

Hankel matrix H.
Constant along each antidiagonal; H_ij depends on i + j.
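SciPy can build one directly (the 3-by-3 entries below are an assumed example); `scipy.linalg.hankel` takes the first column and last row:

```python
import numpy as np
from scipy.linalg import hankel

H = hankel([1, 2, 3], [3, 4, 5])   # [[1, 2, 3], [2, 3, 4], [3, 4, 5]]

# H[i, j] depends only on i + j: constant along every antidiagonal.
assert all(H[i, j] == H[i + 1, j - 1]
           for i in range(2) for j in range(1, 3))
```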

Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j - 1) = ∫_0^1 x^(i-1) x^(j-1) dx. Positive definite but with extremely small λ_min and large condition number: H is ill-conditioned.
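A quick NumPy check (the helper `hilb` and the choice n = 8 are illustrative): the eigenvalues stay positive, but the condition number is already enormous.

```python
import numpy as np

def hilb(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)   # H_ij = 1/(i + j - 1) for 1-based i, j

H = hilb(8)
eigs = np.linalg.eigvalsh(H)

assert np.all(eigs > 0)            # positive definite
assert np.linalg.cond(H) > 1e9     # badly ill-conditioned even at n = 8
```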

Hypercube matrix P_L.
Row n + 1 counts corners, edges, faces, ... of a cube in R^n.

Kronecker product (tensor product) A ⊗ B.
Blocks a_ij B; eigenvalues λ_p(A) λ_q(B).
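In NumPy this is `np.kron`; with the assumed diagonal matrices below, the eigenvalues of A ⊗ B are visibly all products λ_p(A) λ_q(B):

```python
import numpy as np

A = np.diag([1.0, 2.0])
B = np.diag([3.0, 5.0])

K = np.kron(A, B)   # block (i, j) is a_ij * B

# Eigenvalues of the Kronecker product: every product λ_p(A) * λ_q(B).
assert np.allclose(sorted(np.linalg.eigvals(K).real), [3.0, 5.0, 6.0, 10.0])
```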

Lucas numbers.
L_n = 2, 1, 3, 4, 7, ... satisfy L_n = L_(n-1) + L_(n-2) = λ_1^n + λ_2^n, with λ_1, λ_2 = (1 ± √5)/2 from the Fibonacci matrix [[1, 1], [1, 0]]. Compare L_0 = 2 with F_0 = 0.
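A NumPy sketch of the connection: since the Fibonacci matrix has eigenvalues λ_1 and λ_2, the trace of its n-th power is λ_1^n + λ_2^n = L_n.

```python
import numpy as np

F = np.array([[1, 1],
              [1, 0]])   # Fibonacci matrix, eigenvalues (1 ± sqrt(5))/2

# trace(F^n) = λ_1^n + λ_2^n = L_n, starting from L_0 = trace(I) = 2.
lucas = [int(np.trace(np.linalg.matrix_power(F, n))) for n in range(8)]
assert lucas == [2, 1, 3, 4, 7, 11, 18, 29]
```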

Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers have |l_ij| ≤ 1. See condition number.
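SciPy's LU factorization uses partial pivoting; with the assumed matrix below (a tiny leading entry that would be a terrible pivot), every multiplier in L still satisfies |l_ij| ≤ 1.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[1e-10, 1.0],
              [1.0,   1.0]])   # the 1e-10 entry is rejected as a pivot

P, L, U = lu(A)                # factorization with partial pivoting: A = P L U

assert np.allclose(P @ L @ U, A)
assert np.all(np.abs(L) <= 1.0 + 1e-12)   # all multipliers bounded by 1
```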

Schwarz inequality.
|v · w| ≤ ||v|| ||w||. Then |v^T A w|^2 ≤ (v^T A v)(w^T A w) for positive definite A.
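A NumPy check of both forms on assumed random vectors (A is made positive definite by construction as M M^T plus a multiple of I):

```python
import numpy as np

rng = np.random.default_rng(1)
v, w = rng.standard_normal(3), rng.standard_normal(3)

# |v . w| <= ||v|| ||w||
assert abs(v @ w) <= np.linalg.norm(v) * np.linalg.norm(w)

M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)        # positive definite by construction

# |v^T A w|^2 <= (v^T A v)(w^T A w)
assert (v @ A @ w) ** 2 <= (v @ A @ v) * (w @ A @ w)
```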

Skew-symmetric matrix K.
The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.
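Checked in NumPy/SciPy on an assumed 3-by-3 example, using `scipy.linalg.expm` for the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[ 0.0,  2.0, -1.0],
              [-2.0,  0.0,  3.0],
              [ 1.0, -3.0,  0.0]])

assert np.allclose(K.T, -K)                          # skew-symmetric
assert np.allclose(np.linalg.eigvals(K).real, 0.0)   # pure imaginary eigenvalues

Q = expm(K)                                          # e^(Kt) at t = 1
assert np.allclose(Q.T @ Q, np.eye(3))               # orthogonal matrix
```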

Solvable system Ax = b.
The right side b is in the column space of A.

Spanning set.
Combinations of v_1, ..., v_m fill the space. The columns of A span C(A)!

Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms, ||A + B|| ≤ ||A|| + ||B||.
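A NumPy check of both the vector and matrix forms on assumed random inputs (the spectral 2-norm is used for the matrix version):

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.standard_normal(4), rng.standard_normal(4)

# ||u + v|| <= ||u|| + ||v||
assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)

A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# Same inequality for the spectral (2-) norm of matrices.
assert np.linalg.norm(A + B, 2) <= np.linalg.norm(A, 2) + np.linalg.norm(B, 2)
```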