- Chapter 1: Graphs, Functions, and Models
- Chapter 1.1: Introduction to Graphing
- Chapter 1.2: Functions and Graphs
- Chapter 1.3: Linear Functions, Slope, and Applications
- Chapter 1.4: Equations of Lines and Modeling
- Chapter 1.5: Linear Equations, Functions, Zeros, and Applications
- Chapter 1.6: Solving Linear Inequalities
- Chapter 2: More on Functions
- Chapter 2.1: Increasing, Decreasing, and Piecewise Functions; Applications
- Chapter 2.2: The Algebra of Functions
- Chapter 2.3: The Composition of Functions
- Chapter 2.4: Symmetry
- Chapter 2.5: Transformations
- Chapter 2.6: Variation and Applications
- Chapter 3: Quadratic Functions and Equations; Inequalities
- Chapter 3.1: The Complex Numbers
- Chapter 3.2: Quadratic Equations, Functions, Zeros, and Models
- Chapter 3.3: Analyzing Graphs of Quadratic Functions
- Chapter 3.4: Solving Rational Equations and Radical Equations
- Chapter 3.5: Solving Equations and Inequalities with Absolute Value
- Chapter 4: Polynomial Functions and Rational Functions
- Chapter 4.1: Polynomial Functions and Modeling
- Chapter 4.2: Graphing Polynomial Functions
- Chapter 4.3: Polynomial Division; The Remainder Theorem and the Factor Theorem
- Chapter 4.4: Theorems about Zeros of Polynomial Functions
- Chapter 4.5: Rational Functions
- Chapter 4.6: Polynomial Inequalities and Rational Inequalities
- Chapter 5: Exponential Functions and Logarithmic Functions
- Chapter 5.1: Inverse Functions
- Chapter 5.2: Exponential Functions and Graphs
- Chapter 5.3: Logarithmic Functions and Graphs
- Chapter 5.4: Properties of Logarithmic Functions
- Chapter 5.5: Solving Exponential Equations and Logarithmic Equations
- Chapter 5.6: Applications and Models: Growth and Decay; Compound Interest
- Chapter 6: Systems of Equations and Matrices
- Chapter 6.1: Systems of Equations in Two Variables
- Chapter 6.2: Systems of Equations in Three Variables
- Chapter 6.3: Matrices and Systems of Equations
- Chapter 6.4: Matrix Operations
- Chapter 6.5: Inverses of Matrices
- Chapter 6.6: Determinants and Cramer's Rule
- Chapter 6.7: Systems of Inequalities and Linear Programming
- Chapter 6.8: Partial Fractions
- Chapter 7: Conic Sections
- Chapter 7.1: The Parabola
- Chapter 7.2: The Circle and the Ellipse
- Chapter 7.3: The Hyperbola
- Chapter 7.4: Nonlinear Systems of Equations and Inequalities
- Chapter 8: Sequences, Series, and Combinatorics
- Chapter 8.1: Sequences and Series
- Chapter 8.2: Arithmetic Sequences and Series
- Chapter 8.3: Geometric Sequences and Series
- Chapter 8.4: Mathematical Induction
- Chapter 8.5: Combinatorics: Permutations
- Chapter 8.6: Combinatorics: Combinations
- Chapter 8.7: The Binomial Theorem
- Chapter 8.8: Probability
- Chapter R: Basic Concepts of Algebra
- Chapter R.1: The Real-Number System
- Chapter R.2: Integer Exponents, Scientific Notation, and Order of Operations
- Chapter R.3: Addition, Subtraction, and Multiplication of Polynomials
- Chapter R.4: Factoring
- Chapter R.5: The Basics of Equation Solving
- Chapter R.6: Rational Expressions
- Chapter R.7: Radical Notation and Rational Exponents
College Algebra: Graphs and Models 5th Edition - Solutions by Chapter
Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Characteristic equation det(A - λI) = 0.
The n roots are the eigenvalues of A.
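As an illustrative sketch (not part of the glossary): for a 2×2 matrix, det(A - λI) = 0 expands to λ^2 - trace(A)·λ + det(A) = 0, so the two eigenvalues follow from the quadratic formula. The matrix A below is our own example.

```python
import math

# Characteristic equation for a 2x2 matrix A: det(A - lam*I) = 0
# expands to lam^2 - trace(A)*lam + det(A) = 0.
A = [[2.0, 1.0],
     [1.0, 2.0]]
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

disc = math.sqrt(trace**2 - 4 * det)   # assumes real eigenvalues
eigenvalues = sorted([(trace - disc) / 2, (trace + disc) / 2])
print(eigenvalues)   # the two roots: 1.0 and 3.0
```

For this symmetric A the discriminant is positive, so both roots are real, matching the fact that symmetric matrices have real eigenvalues.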
Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).
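A minimal check of x = x_p + x_n, using a hypothetical one-equation system A = [1 2], b = [3] of our own choosing: any multiple of the nullspace vector can be added to a particular solution without changing Ax.

```python
# Complete solution to Ax = b for the 1x2 system A = [1 2], b = 3:
# particular x_p = (3, 0); nullspace direction x_n = (-2, 1).
def apply(A_row, x):
    return A_row[0] * x[0] + A_row[1] * x[1]

A_row, b = (1.0, 2.0), 3.0
x_p = (3.0, 0.0)
x_n = (-2.0, 1.0)

for t in (0.0, 1.0, -2.5):
    x = (x_p[0] + t * x_n[0], x_p[1] + t * x_n[1])
    assert apply(A_row, x) == b   # every x_p + t*x_n solves Ax = b
print("all shifts of x_p by the nullspace solve Ax = b")
```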
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2) x^T A x - x^T b over growing Krylov subspaces.
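A bare-bones sketch of the conjugate gradient iteration on a small symmetric positive definite system; the helper names `matvec`, `dot`, and `conjugate_gradient` are ours, not the book's.

```python
# Conjugate gradient sketch for a small SPD system Ax = b.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]              # residual b - Ax, starting from x = 0
    p = r[:]              # first search direction
    rr = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)                       # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        if rr_new < tol:
            break
        # next direction: residual plus a multiple of the old direction
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)   # approximately [1/11, 7/11]
```

For an n×n SPD matrix the method reaches the exact solution in at most n steps (here two), which is why it doubles as a direct method in exact arithmetic.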
Cyclic shift S.
Permutation with S_21 = 1, S_32 = 1, ..., finally S_1n = 1. Its eigenvalues are the nth roots e^(2πik/n) of 1; eigenvectors are columns of the Fourier matrix F.
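A quick numerical check of this entry, with n = 4 chosen by us: applying the cyclic shift to a column of the Fourier matrix multiplies it by a single number, and that number is a 4th root of 1.

```python
import cmath

# Cyclic shift S (n = 4) applied to a column of the Fourier matrix:
# the column is an eigenvector and the eigenvalue is an nth root of unity.
n, k = 4, 1
w = cmath.exp(2j * cmath.pi * k / n)
f = [w ** j for j in range(n)]        # k-th Fourier column
Sf = [f[-1]] + f[:-1]                 # (Sx)_1 = x_n, (Sx)_i = x_{i-1}

ratios = [Sf[j] / f[j] for j in range(n)]
lam = ratios[0]
assert all(abs(r - lam) < 1e-12 for r in ratios)   # f is an eigenvector
assert abs(lam ** n - 1) < 1e-12                   # eigenvalue is a root of 1
print(lam)
```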
Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of R^m. Full rank means full column rank or full row rank.
Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.
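A tiny construction sketch (our own example sequence): building a 3×3 Hankel matrix directly from the rule that entry (i, j) depends only on i + j.

```python
# Hankel matrix: H[i][j] depends only on i + j (constant antidiagonals).
seq = [1, 2, 3, 4, 5]        # entries indexed by i + j
n = 3
H = [[seq[i + j] for j in range(n)] for i in range(n)]
print(H)   # [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```

Each antidiagonal of H repeats one entry of `seq`, which is the defining property (a Toeplitz matrix is the same idea along diagonals, with i - j).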
Iterative method.
A sequence of steps intended to approach the desired solution.
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: potential differences (voltage drops) add to zero around any closed loop.
Kronecker product (tensor product) A ⊗ B.
Blocks a_ij B; eigenvalues λ_p(A) λ_q(B).
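The block structure of A ⊗ B can be sketched in a few lines; the function name `kron` and the two sample matrices are our own.

```python
# Kronecker product A ⊗ B: block (i, j) of the result is a[i][j] * B.
def kron(A, B):
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(n * q)] for i in range(m * p)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(kron(A, B))
# [[0, 1, 0, 2], [1, 0, 2, 0], [0, 3, 0, 4], [3, 0, 4, 0]]
```

Reading the output in 2×2 blocks shows 1·B, 2·B on the top row and 3·B, 4·B below, exactly the "blocks a_ij B" description.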
Left nullspace N(A^T).
Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.
Length ‖x‖.
Square root of x^T x (Pythagoras in n dimensions).
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A - λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
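A small sketch of the AM > GM case, using our own example A = [[5, 1], [0, 5]]: λ = 5 is a double root (AM = 2), but A - 5I has rank 1, so the eigenspace has dimension 2 - 1 = 1 (GM = 1).

```python
# AM vs GM for A = [[5, 1], [0, 5]]: lam = 5 has AM = 2 but GM = 1.
lam = 5.0
A = [[5.0, 1.0], [0.0, 5.0]]
M = [[A[i][j] - (lam if i == j else 0.0) for j in range(2)]
     for i in range(2)]                     # M = A - lam*I

# Rank of a 2x2 matrix: 2 if det != 0, else 1 if any entry != 0, else 0.
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
if abs(det) > 1e-12:
    rank = 2
elif any(abs(v) > 1e-12 for row in M for v in row):
    rank = 1
else:
    rank = 0

gm = 2 - rank          # dimension of the nullspace of A - lam*I
print(gm)              # 1: only one independent eigenvector, though AM = 2
```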
Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^(-1) and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
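The expansion v = Σ (v^T q_j) q_j can be verified directly; the rotation-by-θ basis in R^2 below is our own choice of orthonormal pair.

```python
import math

# Orthonormal basis in R^2 (rotation by theta); expand v in that basis.
theta = 0.7
q1 = (math.cos(theta), math.sin(theta))
q2 = (-math.sin(theta), math.cos(theta))

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# q1, q2 are orthonormal: q1.q2 = 0 and q1.q1 = 1
assert abs(dot(q1, q2)) < 1e-12 and abs(dot(q1, q1) - 1) < 1e-12

v = (3.0, -2.0)
c1, c2 = dot(v, q1), dot(v, q2)      # coefficients v^T q_j
recon = (c1 * q1[0] + c2 * q2[0],
         c1 * q1[1] + c2 * q2[1])    # sum of (v^T q_j) q_j
print(recon)                         # recovers v = (3, -2)
```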
Plane (or hyperplane) in R^n.
Vectors x with a^T x = 0. Plane is perpendicular to a ≠ 0.
Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal distribution for randn.
Saddle point of f(x_1, ..., x_n).
A point where the first derivatives of f are zero and the second-derivative matrix (∂²f/∂x_i ∂x_j = Hessian matrix) is indefinite.
Semidefinite matrix A.
(Positive) semidefinite: all x^T A x ≥ 0, all λ ≥ 0; A = any R^T R.
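A quick check of the "A = any R^T R" characterization, using a 1×2 matrix R of our own: x^T (R^T R) x = ‖Rx‖² is never negative, and it hits 0 for a nonzero x, so this A is semidefinite but not definite.

```python
# Semidefinite A built as R^T R: x^T A x = |Rx|^2 >= 0 for every x.
R = [[1.0, 2.0]]                     # any R works; this one is 1x2
A = [[sum(Rk[i] * Rk[j] for Rk in R) for j in range(2)]
     for i in range(2)]              # A = R^T R = [[1, 2], [2, 4]]

def quad_form(A, x):
    return sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))

for x in [(1.0, 0.0), (-2.0, 1.0), (3.0, -5.0)]:
    assert quad_form(A, x) >= 0.0
print(quad_form(A, (-2.0, 1.0)))   # 0: semidefinite, not definite
```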
Special solutions to As = 0.
One free variable is s_i = 1, other free variables = 0.
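The recipe above can be sketched on a hypothetical one-equation system A = [1 2 3] (pivot x_1, free x_2 and x_3): set one free variable to 1, the others to 0, and back-solve the pivot.

```python
# Special solutions for the single equation [1 2 3] . s = 0:
# x_1 is the pivot variable; x_2 and x_3 are free.
row = [1.0, 2.0, 3.0]
specials = []
for free in (1, 2):                   # indices of the free variables
    s = [0.0, 0.0, 0.0]
    s[free] = 1.0                     # this free variable is 1
    s[0] = -row[free] / row[0]        # back-solve the pivot from row . s = 0
    specials.append(s)

for s in specials:
    assert sum(r * si for r, si in zip(row, s)) == 0.0   # each solves As = 0
print(specials)   # [[-2.0, 1.0, 0.0], [-3.0, 0.0, 1.0]]
```

These two special solutions span the nullspace: 3 unknowns minus 1 pivot leaves 2 independent directions.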
Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.