- Chapter 1: Equations and Inequalities
- Chapter 1.1: Graphs and Graphing Utilities
- Chapter 1.2: Linear Equations and Rational Equations
- Chapter 1.3: Models and Applications
- Chapter 1.4: Complex Numbers
- Chapter 1.5: Quadratic Equations
- Chapter 1.6: Other Types of Equations
- Chapter 1.7: Linear Inequalities and Absolute Value Inequalities
- Chapter 2: Functions and Graphs
- Chapter 2.1: Basics of Functions and Their Graphs
- Chapter 2.2: More on Functions and Their Graphs
- Chapter 2.3: Linear Functions and Slope
- Chapter 2.4: More on Slope
- Chapter 2.5: Transformations of Functions
- Chapter 2.6: Combinations of Functions; Composite Functions
- Chapter 2.7: Inverse Functions
- Chapter 2.8: Distance and Midpoint Formulas; Circles
- Chapter 3: Polynomial and Rational Functions
- Chapter 3.1: Quadratic Functions
- Chapter 3.2: Polynomial Functions and Their Graphs
- Chapter 3.3: Dividing Polynomials; Remainder and Factor Theorems
- Chapter 3.4: Zeros of Polynomial Functions
- Chapter 3.5: Rational Functions and Their Graphs
- Chapter 3.6: Polynomial and Rational Inequalities
- Chapter 3.7: Modeling Using Variation
- Chapter 4: Exponential and Logarithmic Functions
- Chapter 4.1: Exponential Functions
- Chapter 4.2: Logarithmic Functions
- Chapter 4.3: Properties of Logarithms
- Chapter 4.4: Exponential and Logarithmic Equations
- Chapter 4.5: Exponential Growth and Decay; Modeling Data
- Chapter 5: Systems of Equations and Inequalities
- Chapter 5.1: Systems of Linear Equations in Two Variables
- Chapter 5.2: Systems of Linear Equations in Three Variables
- Chapter 5.3: Partial Fractions
- Chapter 5.4: Systems of Nonlinear Equations in Two Variables
- Chapter 5.5: Systems of Inequalities
- Chapter 5.6: Linear Programming
- Chapter 6: Matrices and Determinants
- Chapter 6.1: Matrix Solutions to Linear Systems
- Chapter 6.2: Inconsistent and Dependent Systems and Their Applications
- Chapter 6.3: Matrix Operations and Their Applications
- Chapter 6.4: Multiplicative Inverses of Matrices and Matrix Equations
- Chapter 6.5: Determinants and Cramer's Rule
- Chapter 7: Conic Sections
- Chapter 7.1: The Ellipse
- Chapter 7.2: The Hyperbola
- Chapter 7.3: The Parabola
- Chapter 8: Sequences, Induction, and Probability
- Chapter 8.1: Sequences and Summation Notation
- Chapter 8.2: Arithmetic Sequences
- Chapter 8.3: Geometric Sequences and Series
- Chapter 8.4: Mathematical Induction
- Chapter 8.5: The Binomial Theorem
- Chapter 8.6: Counting Principles, Permutations, and Combinations
- Chapter 8.7: Probability
- Chapter P: Prerequisites: Fundamental Concepts of Algebra
- Chapter P.1: Algebraic Expressions, Mathematical Models, and Real Numbers
- Chapter P.2: Exponents and Scientific Notation
- Chapter P.3: Radicals and Rational Exponents
- Chapter P.4: Polynomials
- Chapter P.5: Factoring Polynomials
- Chapter P.6: Rational Expressions
College Algebra 7th Edition - Solutions by Chapter
Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
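As a quick numerical check (a minimal NumPy sketch; the 4x4 matrices and the 2x2 block cuts are arbitrary illustrations):

```python
import numpy as np

# Partition a 4x4 product AB into 2x2 blocks and confirm that
# block multiplication agrees with ordinary multiplication.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B12 = B[:2, :2], B[:2, 2:]
B21, B22 = B[2:, :2], B[2:, 2:]

# Multiply block by block, exactly as if the blocks were scalars.
blockAB = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])

assert np.allclose(blockAB, A @ B)
```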
Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c_0 I + c_1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors are the columns of the Fourier matrix F.
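A small sketch of both claims, assuming NumPy; the coefficient vector c is an arbitrary illustration:

```python
import numpy as np

# Build the circulant C = c_0 I + c_1 S + ... + c_{n-1} S^{n-1} from the
# cyclic shift S, then check Cx against circular convolution c * x.
c = np.array([2.0, 1.0, 0.0, 3.0])
n = len(c)
S = np.roll(np.eye(n), 1, axis=0)          # cyclic shift matrix
C = sum(c[k] * np.linalg.matrix_power(S, k) for k in range(n))

x = np.array([1.0, -1.0, 2.0, 0.5])
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))  # circular c * x
assert np.allclose(C @ x, conv)

# Each column f_m of the Fourier matrix is an eigenvector of C,
# with eigenvalue equal to the m-th DFT coefficient of c.
for m in range(n):
    f = np.exp(2j * np.pi * m * np.arange(n) / n)
    assert np.allclose(C @ f, np.fft.fft(c)[m] * f)
```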
Companion matrix.
Put c_1, ..., c_n in row n and put n - 1 ones just above the main diagonal. Then det(A - λI) = ±(c_1 + c_2 λ + c_3 λ^2 + ... + c_n λ^{n-1} - λ^n).
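An illustrative NumPy sketch; the coefficients c_1, c_2, c_3 are made up so the roots come out to 1, 2, 3:

```python
import numpy as np

# Companion matrix: n - 1 ones just above the diagonal, c_1..c_n in row n.
c = np.array([6.0, -11.0, 6.0])              # c_1, c_2, c_3 (illustrative)
n = len(c)
A = np.zeros((n, n))
A[np.arange(n - 1), np.arange(1, n)] = 1.0   # ones above the main diagonal
A[-1, :] = c                                 # bottom row holds c_1, ..., c_n

# det(A - λI) = ±(c_1 + c_2 λ + c_3 λ^2 - λ^3) = -(λ-1)(λ-2)(λ-3) here,
# so the eigenvalues are exactly the roots of that polynomial.
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [1.0, 2.0, 3.0])
```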
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2) x^T A x - x^T b over growing Krylov subspaces.
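A minimal sketch of the method, not necessarily the book's exact sequence of steps; the 2x2 system is an arbitrary illustration:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Solve positive definite Ax = b; each iterate minimizes
    (1/2) x^T A x - x^T b over the growing Krylov subspace."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual, also the negative gradient
    p = r.copy()           # first search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # keep directions A-conjugate
        rs = rs_new
    return x

# Illustrative positive definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
assert np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b))
```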
Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det(B_j) / det(A).
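A direct check of the formula on a made-up 3x3 system (NumPy sketch):

```python
import numpy as np

# Cramer's Rule: replace column j of A by b to get B_j,
# then x_j = det(B_j) / det(A).
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])

detA = np.linalg.det(A)
x = np.empty(3)
for j in range(3):
    Bj = A.copy()
    Bj[:, j] = b                      # b replaces column j
    x[j] = np.linalg.det(Bj) / detA

assert np.allclose(x, np.linalg.solve(A, b))
```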
Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A - λI) = 0.
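A small numerical illustration (the matrix is an arbitrary example):

```python
import numpy as np

# Eigenvalues solve det(A - λI) = 0; eigenvectors satisfy Ax = λx, x ≠ 0.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                    # eigenvalues 5 and 2
vals, vecs = np.linalg.eig(A)

for lam, x in zip(vals, vecs.T):
    assert np.allclose(A @ x, lam * x)                       # Ax = λx
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9    # det(A - λI) = 0
```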
Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
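A sketch of classical Gram-Schmidt under the stated convention diag(R) > 0; the input matrix is an arbitrary example with independent columns:

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt: A = QR with orthonormal columns in Q
    and upper triangular R with positive diagonal."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):                 # subtract projections on q_1..q_{j-1}
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)        # convention: diag(R) > 0
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])  # independent columns
Q, R = gram_schmidt(A)
assert np.allclose(Q.T @ Q, np.eye(2))     # orthonormal columns
assert np.allclose(Q @ R, A)               # A = QR
```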
Inverse matrix A^{-1}.
Square matrix with A^{-1} A = I and A A^{-1} = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^{-1} A^{-1} and (A^{-1})^T. Cofactor formula: (A^{-1})_{ij} = C_{ji} / det A.
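A quick check of the product rule and the cofactor formula on made-up 2x2 matrices:

```python
import numpy as np

# Verify (AB)^{-1} = B^{-1} A^{-1} and the cofactor formula.
A = np.array([[4.0, 7.0], [2.0, 6.0]])
B = np.array([[1.0, 2.0], [3.0, 5.0]])

assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))

# Cofactor formula (A^{-1})_ij = C_ji / det A, written out for 2x2:
detA = np.linalg.det(A)                 # 4*6 - 7*2 = 10
cof = np.array([[ 6.0, -2.0],           # C_11, C_12
                [-7.0,  4.0]])          # C_21, C_22
assert np.allclose(np.linalg.inv(A), cof.T / detA)
```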
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: potential differences (voltage drops) add to zero around any closed loop.
Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).
Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If all m_ij > 0, the columns of M^k approach the steady-state eigenvector s, where Ms = s > 0.
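An illustrative sketch with a made-up 2x2 Markov matrix:

```python
import numpy as np

# Nonnegative entries, columns sum to 1.
M = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Largest eigenvalue is 1; its eigenvector s is the steady state.
vals, vecs = np.linalg.eig(M)
s = vecs[:, np.argmax(vals.real)].real
s = s / s.sum()                            # normalize so entries sum to 1
assert np.isclose(max(vals.real), 1.0)

# Since all entries of M are positive, the columns of M^k approach s.
Mk = np.linalg.matrix_power(M, 50)
assert np.allclose(Mk, np.column_stack([s, s]), atol=1e-8)
```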
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A - λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
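The standard 2x2 example where the two multiplicities differ (NumPy sketch):

```python
import numpy as np

# λ = 1 is a double root of det(A - λI) = 0, so AM = 2,
# but the eigenspace is only one-dimensional, so GM = 1.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

vals = np.linalg.eigvals(A)                 # [1, 1]: AM = 2
rank = np.linalg.matrix_rank(A - 1.0 * np.eye(2))
GM = 2 - rank                               # dimension of the nullspace of A - I
print(vals, "GM =", GM)                     # GM = 1 < AM = 2: not diagonalizable
```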
Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^{-1}. Preserves lengths and angles: ||Qx|| = ||x|| and (Qx)^T (Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
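A short check using a rotation (the angle is arbitrary):

```python
import numpy as np

# A rotation is orthogonal: Q^T = Q^{-1}, and lengths are preserved.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(Q.T @ Q, np.eye(2))       # Q^T = Q^{-1}
x = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))   # ||Qx|| = ||x||
assert np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0)         # all |λ| = 1
```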
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
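A small sketch using SymPy's rref; the matrix is a made-up example whose second column repeats the first:

```python
import sympy as sp

# Row reduce and read off the pivot columns: column 2 is twice column 1,
# so the pivots land in columns 1 and 3.
A = sp.Matrix([[1, 2, 2, 4],
               [1, 2, 3, 7]])
R, pivots = A.rref()
print(pivots)                                   # (0, 2): zero-based pivot columns
print(A.extract(range(A.rows), list(pivots)))   # a basis for the column space
```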
Rayleigh quotient q(x) = x^T A x / x^T x for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
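A numerical illustration of the bounds; the symmetric matrix is an arbitrary example with eigenvalues 1 and 3:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                   # symmetric, eigenvalues 1 and 3

def q(x):
    return (x @ A @ x) / (x @ x)             # Rayleigh quotient

# Random samples stay inside [λ_min, λ_max] = [1, 3].
rng = np.random.default_rng(1)
samples = [q(rng.standard_normal(2)) for _ in range(1000)]
assert min(samples) >= 1.0 - 1e-12 and max(samples) <= 3.0 + 1e-12

# The extremes are attained at the eigenvectors.
assert np.isclose(q(np.array([1.0, -1.0])), 1.0)   # eigenvector for λ_min
assert np.isclose(q(np.array([1.0, 1.0])), 3.0)    # eigenvector for λ_max
```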
Singular matrix A.
A square matrix that has no inverse: det(A) = 0.
Singular Value Decomposition (SVD).
A = U Σ V^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with A v_i = σ_i u_i and singular value σ_i > 0. The last columns are orthonormal bases of the nullspaces of A^T and A.
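A quick check of A = U Σ V^T and A v_i = σ_i u_i with NumPy on a made-up matrix:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, sigma, Vt = np.linalg.svd(A)
assert np.allclose(U @ np.diag(sigma) @ Vt, A)     # A = U Σ V^T

# Column i of V maps to σ_i times column i of U.
for i in range(len(sigma)):
    assert np.allclose(A @ Vt[i, :], sigma[i] * U[:, i])
```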
Spanning set v_1, ..., v_m for V.
Combinations of v_1, ..., v_m fill the space. The columns of A span C(A)!
Unitary matrix U with U^H = U^{-1}, where U^H is the conjugate transpose of U.
Orthonormal columns (complex analog of Q).
Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t - k).
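A sketch using the Haar wavelet as one illustrative choice of mother wavelet w_00:

```python
import numpy as np

def haar(t):
    """Mother wavelet w_00: +1 on [0, 0.5), -1 on [0.5, 1), else 0."""
    return np.where((0 <= t) & (t < 0.5), 1.0,
           np.where((0.5 <= t) & (t < 1.0), -1.0, 0.0))

def w(j, k, t):
    """Stretch and shift the time axis: w_jk(t) = w_00(2^j t - k)."""
    return haar(2.0**j * t - k)

t = np.linspace(0, 1, 8, endpoint=False)
print(w(0, 0, t))   # the mother wavelet on [0, 1)
print(w(1, 1, t))   # compressed to half-width and shifted to [0.5, 1)
```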