- Chapter 1: The Language and Tools of Algebra
- Chapter 1-1: Variables and Expressions
- Chapter 1-2: Order of Operations
- Chapter 1-3: Open Sentences
- Chapter 1-4: Identity and Equality Properties
- Chapter 1-5: The Distributive Property
- Chapter 1-6: Commutative and Associative Properties
- Chapter 1-7: Logical Reasoning and Counterexamples
- Chapter 1-8: Number Systems
- Chapter 1-9: Functions and Graphs
- Chapter 2: Solving Linear Equations
- Chapter 2-1: Writing Equations
- Chapter 2-2: Solving Addition and Subtraction Equations
- Chapter 2-3: Solving Equations by Using Multiplication and Division
- Chapter 2-4: Solving Multi-Step Equations
- Chapter 2-5: Solving Equations with the Variable on Each Side
- Chapter 2-6: Ratios and Proportions
- Chapter 2-7: Percent of Change
- Chapter 2-8: Solving for a Specific Variable
- Chapter 2-9: Weighted Averages
- Chapter 3: Functions and Patterns
- Chapter 3-1: Modeling Relations
- Chapter 3-2: Representing Functions
- Chapter 3-3: Linear Functions
- Chapter 3-4: Arithmetic Sequences
- Chapter 3-5: Proportional and Nonproportional Relationships
- Chapter 4: Analyzing Linear Equations
- Chapter 4-1: Steepness of a Line
- Chapter 4-2: Slope and Direct Variation
- Chapter 4-3: Investigating Slope-Intercept Form
- Chapter 4-4: Writing Equations in Slope-Intercept Form
- Chapter 4-5: Writing Equations in Point-Slope Form
- Chapter 4-6: Statistics: Scatter Plots and Lines of Fit
- Chapter 4-7: Geometry: Parallel and Perpendicular Lines
- Chapter 5: Solving Systems of Linear Equations
- Chapter 5-1: Graphing Systems of Equations
- Chapter 5-2: Substitution
- Chapter 5-3: Elimination Using Addition and Subtraction
- Chapter 5-4: Elimination Using Multiplication
- Chapter 5-5: Applying Systems of Linear Equations
- Chapter 6: Solving Linear Inequalities
- Chapter 6-1: Solving Inequalities by Addition and Subtraction
- Chapter 6-2: Solving Inequalities by Multiplication and Division
- Chapter 6-3: Solving Multi-Step Inequalities
- Chapter 6-4: Solving Compound Inequalities
- Chapter 6-5: Solving Open Sentences Involving Absolute Value
- Chapter 6-6: Solving Inequalities Involving Absolute Value
- Chapter 6-7: Graphing Inequalities in Two Variables
- Chapter 6-8: Graphing Systems of Inequalities
- Chapter 7: Polynomials
- Chapter 7-1: Multiplying Monomials
- Chapter 7-2: Dividing Monomials
- Chapter 7-3: Polynomials
- Chapter 7-4: Adding and Subtracting Polynomials
- Chapter 7-5: Multiplying a Polynomial by a Monomial
- Chapter 7-6: Multiplying Polynomials
- Chapter 7-7: Special Products
- Chapter 8: Factoring
- Chapter 8-1: Monomials and Factoring
- Chapter 8-2: Factoring Using the Distributive Property
- Chapter 8-3: Factoring Trinomials: x² + bx + c
- Chapter 8-4: Factoring Trinomials: ax² + bx + c
- Chapter 8-5: Factoring Differences of Squares
- Chapter 8-6: Perfect Squares and Factoring
- Chapter 9: Quadratic and Exponential Functions
- Chapter 9-1: Graphing Quadratic Functions
- Chapter 9-2: Solving Quadratic Equations by Graphing
- Chapter 9-3: Solving Quadratic Equations by Completing the Square
- Chapter 9-4: Solving Quadratic Equations by Using the Quadratic Formula
- Chapter 9-5: Exponential Functions
- Chapter 9-6: Growth and Decay
- Chapter 10: Radical Expressions and Triangles
- Chapter 10-1: Simplifying Radical Expressions
- Chapter 10-2: Operations with Radical Expressions
- Chapter 10-3: Radical Equations
- Chapter 10-4: The Pythagorean Theorem
- Chapter 10-5: The Distance Formula
- Chapter 10-6: Similar Triangles
- Chapter 11: Rational Expressions and Equations
- Chapter 11-1: Inverse Variation
- Chapter 11-2: Rational Expressions
- Chapter 11-3: Multiplying Rational Expressions
- Chapter 11-4: Dividing Rational Expressions
- Chapter 11-5: Dividing Polynomials
- Chapter 11-6: Rational Expressions with Like Denominators
- Chapter 11-7: Rational Expressions with Unlike Denominators
- Chapter 11-8: Mixed Expressions and Complex Fractions
- Chapter 11-9: Rational Equations and Functions
- Chapter 12: Statistics and Probability
- Chapter 12-1: Sampling and Bias
- Chapter 12-2: Counting Outcomes
- Chapter 12-3: Permutations and Combinations
- Chapter 12-4: Probability of Compound Events
- Chapter 12-5: Probability Distributions
- Chapter 12-6: Probability Simulations
Algebra 1, Student Edition (MERRILL ALGEBRA 1) 1st Edition - Solutions by Chapter
Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
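The rank test on [A b] can be checked numerically; here is a minimal numpy sketch with invented matrices (one right side in the column space, one not):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])              # rank 1: column 2 = 2 * column 1
b_solvable   = np.array([2.0, 4.0, 6.0])   # = 2 * column 1, so in C(A)
b_unsolvable = np.array([1.0, 0.0, 0.0])   # not a multiple of column 1

def solvable(A, b):
    """Ax = b is solvable exactly when rank [A b] == rank A."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A)

print(solvable(A, b_solvable))     # True
print(solvable(A, b_unsolvable))   # False
```

Appending a b outside the column space raises the rank by one, which is exactly what makes the system inconsistent.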
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing ½ x^T A x − x^T b over growing Krylov subspaces.
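A textbook-style sketch of the iteration, with an invented 2×2 positive definite example (no preconditioning, plain residual tolerance):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve positive definite Ax = b by minimizing (1/2) x^T A x - x^T b.
    Iterate k stays in the growing Krylov subspace span{b, Ab, ..., A^(k-1) b}."""
    n = len(b)
    if max_iter is None:
        max_iter = n
    x = np.zeros(n)
    r = b - A @ x                  # residual = negative gradient of the quadratic
    p = r.copy()                   # first search direction
    rr = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)      # exact minimization along p
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p  # next direction, A-conjugate to the old ones
        rr = rr_new
    return x

# Small positive definite example (invented for illustration)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic the method finishes in at most n steps; for this 2×2 system two iterations already reproduce the exact solution.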
Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_ii.
Ellipse (or ellipsoid) x T Ax = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^{-1} y||^2 = y^T (AA^T)^{-1} y = 1 displayed by eigshow; axis lengths σ_i.)
Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicular from Ax = 0, with dimensions r and n − r). Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.
Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.
Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
Left nullspace N (AT).
Nullspace of A^T = "left nullspace" of A, because y^T A = 0^T.
Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate)/(jth pivot).
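One elimination step can be shown directly; the 2×2 matrix below is invented for illustration:

```python
import numpy as np

A = np.array([[2.0, 4.0],
              [6.0, 22.0]])       # invented 2x2 example

# Eliminate the (2,1) entry below the first pivot
l21 = A[1, 0] / A[0, 0]           # multiplier = entry to eliminate / 1st pivot = 3
A[1, :] = A[1, :] - l21 * A[0, :] # subtract 3 * (row 1) from row 2
print(A)                          # [[2, 4], [0, 10]]: upper triangular
```

The multipliers collected this way are exactly the entries of L in the factorization A = LU.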
Normal equation A^T A x̂ = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A) · (b − A x̂) = 0.
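A worked numerical instance, fitting a line c + d·t to three invented data points:

```python
import numpy as np

# Fit the best line c + d*t to the points (0, 6), (1, 0), (2, 0)
t = np.array([0.0, 1.0, 2.0])
b = np.array([6.0, 0.0, 0.0])
A = np.column_stack([np.ones_like(t), t])   # full column rank n = 2

x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations A^T A x̂ = A^T b
e = b - A @ x_hat                           # error vector

print(x_hat)        # [5., -3.]: the best line is 5 - 3t
print(A.T @ e)      # ~[0, 0]: e is orthogonal to both columns of A
```

Solving the 2×2 system A^T A x̂ = A^T b by hand gives the same c = 5, d = −3.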
Pascal matrix P_s.
P_s = pascal(n) = the symmetric matrix with binomial entries C(i+j−2, i−1). P_s = P_L P_U; all contain Pascal's triangle, with det = 1 (see Pascal in the index).
Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = 1 or −1) based on the number of row exchanges to reach I.
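A small sketch with an invented row order, showing both the reordering action of PA and the determinant ±1:

```python
import numpy as np

order = [2, 0, 1]            # one of n! = 6 row orders for n = 3
P = np.eye(3)[order]         # rows of I in that order
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

print(P @ A)                 # rows of A in the same order: [[5,6],[1,2],[3,4]]
print(np.linalg.det(P))      # +1: [2, 0, 1] needs an even number of exchanges
```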
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S^⊥. If the columns of A are a basis for S then P = A(A^T A)^{-1} A^T.
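The formula P = A(A^T A)^{-1} A^T and the properties P^2 = P = P^T can be verified on an invented basis:

```python
import numpy as np

# Columns of A are a basis for the subspace S (an invented example)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T    # P = A (A^T A)^{-1} A^T

b = np.array([6.0, 0.0, 0.0])
p = P @ b                   # closest point to b in S
e = b - p                   # error, perpendicular to S

print(np.allclose(P @ P, P))            # True: P^2 = P
print(np.allclose(P, P.T))              # True: P = P^T
print(np.allclose(A.T @ e, 0))          # True: e is orthogonal to S
```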
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
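A teaching sketch of the reduction (hand-rolled, not numerically robust; the matrix is invented):

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduced row echelon form of A, plus its pivot columns."""
    R = A.astype(float).copy()
    m, n = R.shape
    pivots = []
    row = 0
    for col in range(n):
        if row >= m:
            break
        best = row + np.argmax(np.abs(R[row:, col]))   # partial pivoting
        if abs(R[best, col]) < tol:
            continue                                   # free column, no pivot
        R[[row, best]] = R[[best, row]]                # row exchange
        R[row] /= R[row, col]                          # make the pivot 1
        for r in range(m):                             # zeros above and below
            if r != row:
                R[r] -= R[r, col] * R[row]
        pivots.append(col)
        row += 1
    return R, pivots

A = np.array([[1.0, 2.0, 2.0, 4.0],
              [3.0, 8.0, 6.0, 16.0]])
R, pivots = rref(A)
print(R)         # [[1, 0, 2, 0], [0, 1, 0, 2]]
print(pivots)    # [0, 1]
```

The two nonzero rows of R are a basis for the row space of A, exactly as the definition above states.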
Row space C(A^T) = all combinations of rows of A. Column vectors by convention.
Singular Value Decomposition (SVD).
A = U Σ V^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with A v_i = σ_i u_i and singular values σ_i > 0. The last columns are orthonormal bases of the nullspaces.
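The relations A v_i = σ_i u_i and A = UΣV^T can be checked with numpy's SVD on an invented full-rank matrix:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])             # invented 2x2 example, rank 2
U, s, Vt = np.linalg.svd(A)            # A = U @ diag(s) @ Vt

# Each right singular vector maps to sigma_i times the left one
for i in range(len(s)):
    assert np.allclose(A @ Vt[i], s[i] * U[:, i])

# Orthogonal times diagonal times orthogonal reassembles A
assert np.allclose(U @ np.diag(s) @ Vt, A)
print(s)                               # singular values, sigma_1 >= sigma_2 > 0
```

Note numpy returns V^T (here Vt), so the right singular vectors are its rows.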
Skew-symmetric matrix K.
The transpose is −K, since K_ij = −K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^{Kt} is an orthogonal matrix.
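Both claims can be tested on an invented 3×3 skew-symmetric K; to stay self-contained, the matrix exponential is approximated by its power series rather than a library routine:

```python
import numpy as np

K = np.array([[ 0.0, -2.0,  1.0],
              [ 2.0,  0.0, -3.0],
              [-1.0,  3.0,  0.0]])     # invented example with K^T = -K
assert np.allclose(K.T, -K)

# Eigenvalues are pure imaginary
lam = np.linalg.eigvals(K)
assert np.allclose(lam.real, 0.0)

def expm_series(M, terms=30):
    """Partial sum of e^M = I + M + M^2/2! + ... (fine for small ||M||)."""
    E = np.eye(len(M))
    term = np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

Q = expm_series(K * 0.5)               # e^{Kt} at t = 0.5
assert np.allclose(Q.T @ Q, np.eye(3)) # orthogonal, as claimed
```

(A production code would use scipy.linalg.expm instead of the truncated series.)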
Special solutions to As = O.
One free variable is s_i = 1, the other free variables = 0.
Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.
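For a symmetric matrix, numpy's eigh returns exactly the ingredients of A = QΛQ^T; the matrix below is invented:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # real symmetric example
lam, Q = np.linalg.eigh(A)             # real eigenvalues, orthonormal eigenvectors

assert np.allclose(Q @ np.diag(lam) @ Q.T, A)   # A = Q Lambda Q^T
assert np.allclose(Q.T @ Q, np.eye(2))          # columns of Q are orthonormal
print(lam)                                      # [1., 3.]
```

eigh (rather than eig) exploits symmetry and guarantees real eigenvalues in ascending order.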
Transpose matrix AT.
Entries (A^T)_ij = A_ji. A^T is n by m, and A^T A is square, symmetric, and positive semidefinite. The transposes of AB and A^{-1} are B^T A^T and (A^T)^{-1}.