- Chapter 1: Equations and Inequalities
- Chapter 1-1: Expressions and Formulas
- Chapter 1-2: Properties of Real Numbers
- Chapter 1-3: Solving Equations
- Chapter 1-4: Solving Absolute Value Equations
- Chapter 1-5: Solving Inequalities
- Chapter 1-6: Solving Compound and Absolute Value Inequalities
- Chapter 2: Linear Relations and Functions
- Chapter 2-1: Relations and Functions
- Chapter 2-2: Linear Equations
- Chapter 2-3: Slope
- Chapter 2-4: Writing Linear Equations
- Chapter 2-5: Statistics: Using Scatter Plots
- Chapter 2-6: Special Functions
- Chapter 2-7: Graphing Inequalities
- Chapter 3: Systems of Equations and Inequalities
- Chapter 3-1: Solving Systems of Equations by Graphing
- Chapter 3-2: Solving Systems of Equations Algebraically
- Chapter 3-3: Solving Systems of Inequalities by Graphing
- Chapter 3-4: Linear Programming
- Chapter 3-5: Solving Systems of Equations in Three Variables
- Chapter 4: Matrices
- Chapter 4-1: Introduction to Matrices
- Chapter 4-2: Operations with Matrices
- Chapter 4-3: Multiplying Matrices
- Chapter 4-4: Transformations with Matrices
- Chapter 4-5: Determinants
- Chapter 4-6: Cramer's Rule
- Chapter 4-7: Identity and Inverse Matrices
- Chapter 4-8: Using Matrices to Solve Systems of Equations
- Chapter 5: Quadratic Functions and Inequalities
- Chapter 5-1: Graphing Quadratic Functions
- Chapter 5-2: Solving Quadratic Equations by Graphing
- Chapter 5-3: Solving Quadratic Equations by Factoring
- Chapter 5-4: Complex Numbers
- Chapter 5-5: Completing the Square
- Chapter 5-6: The Quadratic Formula and the Discriminant
- Chapter 5-7: Analyzing Graphs of Quadratic Functions
- Chapter 5-8: Graphing and Solving Quadratic Inequalities
- Chapter 6: Polynomial Functions
- Chapter 6-1: Properties of Exponents
- Chapter 6-2: Operations with Polynomials
- Chapter 6-3: Dividing Polynomials
- Chapter 6-4: Polynomial Functions
- Chapter 6-5: Analyzing Graphs of Polynomial Functions
- Chapter 6-6: Solving Polynomial Equations
- Chapter 6-7: The Remainder and Factor Theorems
- Chapter 6-8: Roots and Zeros
- Chapter 6-9: Rational Zero Theorem
- Chapter 7: Radical Equations and Inequalities
- Chapter 7-1: Operations on Functions
- Chapter 7-2: Inverse Functions and Relations
- Chapter 7-3: Square Root Functions and Inequalities
- Chapter 7-4: nth Roots
- Chapter 7-5: Operations with Radical Expressions
- Chapter 7-6: Rational Exponents
- Chapter 7-7: Solving Radical Equations and Inequalities
- Chapter 8: Rational Expressions and Equations
- Chapter 8-1: Multiplying and Dividing Rational Expressions
- Chapter 8-2: Adding and Subtracting Rational Expressions
- Chapter 8-3: Graphing Rational Functions
- Chapter 8-4: Direct, Joint, and Inverse Variation
- Chapter 8-5: Classes of Functions
- Chapter 8-6: Solving Rational Equations and Inequalities
- Chapter 9: Exponential and Logarithmic Relations
- Chapter 9-1: Exponential Functions
- Chapter 9-2: Logarithms and Logarithmic Functions
- Chapter 9-3: Properties of Logarithms
- Chapter 9-4: Common Logarithms
- Chapter 9-5: Base e and Natural Logarithms
- Chapter 9-6: Exponential Growth and Decay
- Chapter 10: Conic Sections
- Chapter 10-1: Midpoint and Distance Formulas
- Chapter 10-2: Parabolas
- Chapter 10-3: Circles
- Chapter 10-4: Ellipses
- Chapter 10-5: Hyperbolas
- Chapter 10-6: Conic Sections
- Chapter 10-7: Solving Quadratic Systems
- Chapter 11: Sequences and Series
- Chapter 11-1: Arithmetic Sequences
- Chapter 11-2: Arithmetic Series
- Chapter 11-3: Geometric Sequences
- Chapter 11-4: Geometric Series
- Chapter 11-5: Infinite Geometric Series
- Chapter 11-6: Recursion and Special Sequences
- Chapter 11-7: The Binomial Theorem
- Chapter 11-8: Proof and Mathematical Induction
- Chapter 12: Probability and Statistics
- Chapter 12-1: The Counting Principle
- Chapter 12-2: Permutations and Combinations
- Chapter 12-3: Probability
- Chapter 12-4: Multiplying Probabilities
- Chapter 12-5: Adding Probabilities
- Chapter 12-6: Statistical Measures
- Chapter 12-7: The Normal Distribution
- Chapter 12-8: Exponential and Binomial Distributions
- Chapter 12-9: Binomial Experiments
- Chapter 12-10: Sampling and Error
- Chapter 13: Trigonometric Functions
- Chapter 13-1: Right Triangle Trigonometry
- Chapter 13-2: Angles and Angle Measure
- Chapter 13-3: Trigonometric Functions of General Angles
- Chapter 13-4: Law of Sines
- Chapter 13-5: Law of Cosines
- Chapter 13-6: Circular Functions
- Chapter 13-7: Inverse Trigonometric Functions
- Chapter 14: Trigonometric Graphs and Identities
- Chapter 14-1: Graphing Trigonometric Functions
- Chapter 14-2: Translations of Trigonometric Graphs
- Chapter 14-3: Trigonometric Identities
- Chapter 14-4: Verifying Trigonometric Identities
- Chapter 14-5: Sum and Difference of Angles Formulas
- Chapter 14-6: Double-Angle and Half-Angle Formulas
- Chapter 14-7: Solving Trigonometric Equations
Algebra 2, Student Edition (MERRILL ALGEBRA 2) 1st Edition - Solutions by Chapter
ISBN: 9780078738302
Step-by-step solutions to problems from all 115 chapters of Algebra 2, Student Edition (MERRILL ALGEBRA 2), 1st Edition have been viewed by more than 194,250 students. The solutions were prepared by our top Math solution expert on 01/30/18, 04:22 PM.
-
Condition number
$\operatorname{cond}(A) = c(A) = \|A\|\,\|A^{-1}\| = \sigma_{\max}/\sigma_{\min}$. In $Ax = b$, the relative change $\|\delta x\|/\|x\|$ is less than $\operatorname{cond}(A)$ times the relative change $\|\delta b\|/\|b\|$. Condition numbers measure the sensitivity of the output to changes in the input.
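The ratio $\sigma_{\max}/\sigma_{\min}$ can be checked numerically against NumPy's built-in condition number; the matrix below is an illustrative choice, not one from the text:

```python
import numpy as np

# Condition number as the ratio of extreme singular values.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
sigma = np.linalg.svd(A, compute_uv=False)  # singular values, descending
cond_from_svd = sigma[0] / sigma[-1]        # sigma_max / sigma_min
cond_builtin = np.linalg.cond(A)            # NumPy's 2-norm condition number
assert np.isclose(cond_from_svd, cond_builtin)
```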
-
Exponential $e^{At} = I + At + (At)^2/2! + \cdots$
has derivative $Ae^{At}$; $e^{At}u(0)$ solves $u' = Au$.
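The power series can be truncated and checked on a diagonal matrix, where $e^{At}$ is just the elementwise exponential of the diagonal (a minimal sketch; the truncation length is an assumption):

```python
import numpy as np

# Truncated series e^{At} = I + At + (At)^2/2! + ...
def expm_series(A, t=1.0, terms=30):
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * t) / k   # accumulates (At)^k / k!
        result += term
    return result

# For diagonal A, e^{At} is exp applied to the diagonal entries.
A = np.diag([1.0, 2.0])
np.testing.assert_allclose(expm_series(A), np.diag(np.exp([1.0, 2.0])), rtol=1e-12)
```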
-
Hankel matrix H.
Constant along each antidiagonal; $h_{ij}$ depends on $i + j$.
-
Hypercube matrix.
Row $n + 1$ counts corners, edges, faces, ... of a cube in $\mathbf{R}^n$.
-
Incidence matrix of a directed graph.
The $m$ by $n$ edge-node incidence matrix has a row for each edge (node $i$ to node $j$), with entries $-1$ and $+1$ in columns $i$ and $j$.
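As a small illustration (the three-edge graph is an assumption), each row gets a $-1$ in the "from" column and a $+1$ in the "to" column, so every row sums to zero:

```python
import numpy as np

# Edge-node incidence matrix for a small directed graph.
edges = [(0, 1), (1, 2), (0, 2)]   # each edge goes from node i to node j
n_nodes = 3
A = np.zeros((len(edges), n_nodes))
for row, (i, j) in enumerate(edges):
    A[row, i] = -1                 # -1 in the "from" column
    A[row, j] = 1                  # +1 in the "to" column
assert (A.sum(axis=1) == 0).all()  # one -1 and one +1 per row
```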
-
Inverse matrix $A^{-1}$.
Square matrix with $A^{-1}A = I$ and $AA^{-1} = I$. No inverse if $\det A = 0$ and $\operatorname{rank}(A) < n$ and $Ax = 0$ for a nonzero vector $x$. The inverses of $AB$ and $A^{T}$ are $B^{-1}A^{-1}$ and $(A^{-1})^{T}$. Cofactor formula: $(A^{-1})_{ij} = C_{ji}/\det A$.
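The cofactor formula can be verified numerically on a small example (the $3 \times 3$ matrix below is an illustrative assumption):

```python
import numpy as np

# Cofactor formula for the inverse: (A^{-1})_{ij} = C_{ji} / det A.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

def cofactor(A, i, j):
    # Determinant of the minor with row i and column j removed, with sign.
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

n = A.shape[0]
C = np.array([[cofactor(A, i, j) for j in range(n)] for i in range(n)])
inv_from_cofactors = C.T / np.linalg.det(A)   # note the transpose: C_{ji}
np.testing.assert_allclose(inv_from_cofactors, np.linalg.inv(A), rtol=1e-10)
```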
-
Jordan form $J = M^{-1}AM$.
If $A$ has $s$ independent eigenvectors, its "generalized" eigenvector matrix $M$ gives $J = \operatorname{diag}(J_1, \ldots, J_s)$. The block $J_k$ is $\lambda_k I_k + N_k$ where $N_k$ has 1's on diagonal 1. Each block has one eigenvalue $\lambda_k$ and one eigenvector.
-
$|A^{-1}| = 1/|A|$ and $|A^{T}| = |A|$.
The big formula for $\det(A)$ has a sum of $n!$ terms, the cofactor formula uses determinants of size $n - 1$, and volume of box $= |\det(A)|$.
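The $n!$-term "big formula" sums one signed product per permutation; a brute-force sketch (the test matrix is an assumption) agrees with NumPy's determinant:

```python
import numpy as np
from itertools import permutations

# det(A) as a sum over all n! permutations of signed products.
def det_big_formula(A):
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # Sign of the permutation from its inversion count.
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        sign = -1.0 if inversions % 2 else 1.0
        total += sign * np.prod([A[i, perm[i]] for i in range(n)])
    return total

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, 2.0]])
assert np.isclose(det_big_formula(A), np.linalg.det(A))
```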
-
Least squares solution $\hat{x}$.
The vector $\hat{x}$ that minimizes the error $\|e\|^2$ solves $A^{T}A\hat{x} = A^{T}b$. Then $e = b - A\hat{x}$ is orthogonal to all columns of $A$.
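The normal equations and the orthogonality of the residual can be checked directly (the $A$ and $b$ here are an illustrative assumption):

```python
import numpy as np

# Least squares via the normal equations A^T A x = A^T b.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])
x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # x_hat = [5, -3]
e = b - A @ x_hat
np.testing.assert_allclose(A.T @ e, 0.0, atol=1e-10)  # e is orthogonal to columns of A
```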
-
Lucas numbers
$L_n = 2, 1, 3, 4, \ldots$ satisfy $L_n = L_{n-1} + L_{n-2} = \lambda_1^n + \lambda_2^n$, with $\lambda_1, \lambda_2 = (1 \pm \sqrt{5})/2$ from the Fibonacci matrix $\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$. Compare $L_0 = 2$ with $F_0 = 0$.
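The recurrence and the closed form $\lambda_1^n + \lambda_2^n$ can be compared for the first few terms:

```python
from math import sqrt, isclose

# Lucas numbers from the recurrence and from lambda1^n + lambda2^n.
lam1 = (1 + sqrt(5)) / 2
lam2 = (1 - sqrt(5)) / 2

L = [2, 1]                        # L0 = 2, L1 = 1
for n in range(2, 10):
    L.append(L[n - 1] + L[n - 2])

for n, Ln in enumerate(L):
    assert isclose(Ln, lam1**n + lam2**n, abs_tol=1e-9)
```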
-
Norm $\|A\|$.
The "$\ell^2$ norm" of $A$ is the maximum ratio $\|Ax\|/\|x\| = \sigma_{\max}$. Then $\|Ax\| \le \|A\|\|x\|$, $\|AB\| \le \|A\|\|B\|$, and $\|A + B\| \le \|A\| + \|B\|$. Frobenius norm: $\|A\|_F^2 = \sum\sum a_{ij}^2$. The $\ell^1$ and $\ell^\infty$ norms are the largest column and row sums of $|a_{ij}|$.
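All three characterizations can be checked with NumPy's norm routines (the test matrix is an illustrative assumption):

```python
import numpy as np

# l2 norm = sigma_max; l1 and l-infinity norms = largest column / row sums.
A = np.array([[1.0, -2.0],
              [3.0, 4.0]])
sigma_max = np.linalg.svd(A, compute_uv=False)[0]
assert np.isclose(np.linalg.norm(A, 2), sigma_max)
assert np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max())       # column sums
assert np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max())  # row sums
```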
-
Projection matrix P onto subspace S.
Projection $p = Pb$ is the closest point to $b$ in $S$; the error $e = b - Pb$ is perpendicular to $S$. $P^2 = P = P^{T}$, eigenvalues are 1 or 0, eigenvectors are in $S$ or $S^{\perp}$. If the columns of $A$ are a basis for $S$ then $P = A(A^{T}A)^{-1}A^{T}$.
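The formula $P = A(A^{T}A)^{-1}A^{T}$ and the listed properties can be verified on a small basis matrix (chosen here for illustration):

```python
import numpy as np

# Projection onto the column space of A: symmetric, idempotent, eigenvalues 0 or 1.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T
np.testing.assert_allclose(P @ P, P, atol=1e-10)   # P^2 = P
np.testing.assert_allclose(P, P.T, atol=1e-10)     # P = P^T
eigs = np.sort(np.linalg.eigvalsh(P))
np.testing.assert_allclose(eigs, [0.0, 1.0, 1.0], atol=1e-10)
```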
-
Pseudoinverse $A^{+}$ (Moore-Penrose inverse).
The $n$ by $m$ matrix that "inverts" $A$ from column space back to row space, with $N(A^{+}) = N(A^{T})$. $A^{+}A$ and $AA^{+}$ are the projection matrices onto the row space and column space. $\operatorname{rank}(A^{+}) = \operatorname{rank}(A)$.
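NumPy computes $A^{+}$ via the SVD; the Moore-Penrose identities and the rank equality can be checked on an illustrative rectangular matrix:

```python
import numpy as np

# Pseudoinverse: Moore-Penrose identities and rank(A+) = rank(A).
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])
A_plus = np.linalg.pinv(A)
np.testing.assert_allclose(A_plus @ A @ A_plus, A_plus, atol=1e-10)
np.testing.assert_allclose(A @ A_plus @ A, A, atol=1e-10)
assert np.linalg.matrix_rank(A_plus) == np.linalg.matrix_rank(A)
```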
-
Saddle point of $f(x_1, \ldots, x_n)$.
A point where the first derivatives of $f$ are zero and the second-derivative matrix ($\partial^2 f/\partial x_i\,\partial x_j$ = Hessian matrix) is indefinite.
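The classic example $f(x, y) = x^2 - y^2$ has a saddle at the origin; its Hessian there has one positive and one negative eigenvalue:

```python
import numpy as np

# Hessian of f(x, y) = x^2 - y^2 at the origin: indefinite.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
eigs = np.linalg.eigvalsh(H)      # ascending order
assert eigs[0] < 0 < eigs[-1]     # mixed signs => saddle point
```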
-
Singular Value Decomposition
(SVD) $A = U\Sigma V^{T}$ = (orthogonal)(diagonal)(orthogonal). The first $r$ columns of $U$ and $V$ are orthonormal bases of $C(A)$ and $C(A^{T})$, with $Av_i = \sigma_i u_i$ for singular values $\sigma_i > 0$. The last columns are orthonormal bases of the nullspaces.
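Both the factorization and the relation $Av_i = \sigma_i u_i$ can be checked with NumPy (the $2 \times 2$ matrix is an illustrative assumption):

```python
import numpy as np

# SVD: A = U diag(sigma) V^T, and A v_i = sigma_i u_i for each pair.
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
U, sigma, Vt = np.linalg.svd(A)
np.testing.assert_allclose(U @ np.diag(sigma) @ Vt, A, atol=1e-10)
for i, s in enumerate(sigma):
    np.testing.assert_allclose(A @ Vt[i], s * U[:, i], atol=1e-10)
```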
-
Skew-symmetric matrix K.
The transpose is $-K$, since $K_{ij} = -K_{ji}$. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and $e^{Kt}$ is an orthogonal matrix.
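The pure-imaginary eigenvalues are easy to confirm on a small example (the matrix is an assumption):

```python
import numpy as np

# A skew-symmetric matrix K = -K^T has purely imaginary eigenvalues.
K = np.array([[0.0, 2.0],
              [-2.0, 0.0]])
np.testing.assert_allclose(K, -K.T)
eigs = np.linalg.eigvals(K)                      # here: +2i and -2i
np.testing.assert_allclose(eigs.real, 0.0, atol=1e-12)
```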
-
Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
-
Triangle inequality $\|u + v\| \le \|u\| + \|v\|$.
For matrix norms, $\|A + B\| \le \|A\| + \|B\|$.
-
Tridiagonal matrix T: $t_{ij} = 0$ if $|i - j| > 1$.
$T^{-1}$ has rank 1 above and below the diagonal.
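The rank-1 structure means any $2 \times 2$ block of $T^{-1}$ taken entirely above the diagonal is singular; this can be checked on the standard $-1, 2, -1$ tridiagonal matrix (chosen here as an example):

```python
import numpy as np

# For tridiagonal T, the entries of T^{-1} above the diagonal follow a
# rank-1 pattern, so any 2x2 block wholly above the diagonal has det 0.
n = 4
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Tinv = np.linalg.inv(T)
block = Tinv[np.ix_([0, 1], [2, 3])]   # rows 0-1, columns 2-3: above the diagonal
assert abs(np.linalg.det(block)) < 1e-10
```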
-
Volume of box.
The rows (or the columns) of $A$ generate a box with volume $|\det(A)|$.
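For an axis-aligned box the volume is the product of the side lengths, and shearing (a row operation that leaves the determinant unchanged) preserves it; the matrix below is an illustrative assumption:

```python
import numpy as np

# |det A| as the volume of the box spanned by the rows of A.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])        # axis-aligned box: area 2 * 3 = 6
assert np.isclose(abs(np.linalg.det(A)), 6.0)

# Shearing the box (adding a multiple of one row to another) keeps the volume.
A_shear = A.copy()
A_shear[1] += 5 * A_shear[0]
assert np.isclose(abs(np.linalg.det(A_shear)), 6.0)
```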