Discovering Algebra: An Investigative Approach, 2nd Edition - Solutions by Chapter

- Chapter 0: Out of Chaos
- Chapter 0.1: The Same yet Smaller
- Chapter 0.2: More and More
- Chapter 0.3: Shorter yet Longer
- Chapter 0.4: Going Somewhere?
- Chapter 0.5: Out of Chaos
- Chapter 1: Data Exploration
- Chapter 1.1: Bar Graphs and Dot Plots
- Chapter 1.2: Summarizing Data with Measures of Center
- Chapter 1.3: Five-Number Summaries and Box Plots
- Chapter 1.4: Histograms and Stem-and-Leaf Plots
- Chapter 1.6: Two-Variable Data
- Chapter 1.7: Estimating
- Chapter 1.8: Using Matrices to Organize and Combine Data
- Chapter 2: Proportional Reasoning and Variation
- Chapter 2.1: Proportions
- Chapter 2.3: Proportions and Measurement Systems
- Chapter 2.4: Direct Variation
- Chapter 2.5: Inverse Variation
- Chapter 2.7: Evaluating Expressions
- Chapter 2.8: Undoing Operations
- Chapter 3: Linear Explorations
- Chapter 3.1: Recursive Sequences
- Chapter 3.2: Linear Plots
- Chapter 3.3: Time-Distance Relationships
- Chapter 3.4: Linear Equations and the Intercept Form
- Chapter 3.5: Linear Equations and Rate of Change
- Chapter 3.6: Solving Equations Using the Balancing Method
- Chapter 4: Fitting a Line to Data
- Chapter 4.1: A Formula for Slope
- Chapter 4.2: Writing a Linear Equation to Fit Data
- Chapter 4.3: Point-Slope Form of a Linear Equation
- Chapter 4.4: Equivalent Algebraic Equations
- Chapter 4.5: Writing Point-Slope Equations to Fit Data
- Chapter 4.6: More on Modeling
- Chapter 4.7: Applications of Modeling
- Chapter 5: Systems of Equations and Inequalities
- Chapter 5.1: Solving Systems of Equations
- Chapter 5.2: Solving Systems of Equations Using Substitution
- Chapter 5.3: Solving Systems of Equations Using Elimination
- Chapter 5.4: Solving Systems of Equations Using Matrices
- Chapter 5.5: Inequalities in One Variable
- Chapter 5.6: Graphing Inequalities in Two Variables
- Chapter 5.7: Systems of Inequalities
- Chapter 6: Exponents and Exponential Models
- Chapter 6.1: Recursive Routines
- Chapter 6.2: Exponential Equations
- Chapter 6.3: Multiplication and Exponents
- Chapter 6.4: Scientific Notation for Large Numbers
- Chapter 6.5: Looking Back with Exponents
- Chapter 6.6: Zero and Negative Exponents
- Chapter 6.7: Fitting Exponential Models to Data
- Chapter 7: Functions
- Chapter 7.1: Secret Codes
- Chapter 7.2: Functions and Graphs
- Chapter 7.3: Graphs of Real-World Situations
- Chapter 7.4: Function Notation
- Chapter 7.5: Defining the Absolute-Value Function
- Chapter 7.6: Squares, Squaring, and Parabolas
- Chapter 8: Transformations
- Chapter 8.1: Translating Points
- Chapter 8.2: Translating Graphs
- Chapter 8.3: Reflecting Points and Graphs
- Chapter 8.4: Stretching and Shrinking Graphs
- Chapter 8.6: Introduction to Rational Functions
- Chapter 8.7: Transformations with Matrices
- Chapter 9: Quadratic Models
- Chapter 9.1: Solving Quadratic Equations
- Chapter 9.2: Finding the Roots and the Vertex
- Chapter 9.3: From Vertex to General Form
- Chapter 9.4: Factored Form
- Chapter 9.6: Completing the Square
- Chapter 9.7: The Quadratic Formula
- Chapter 9.8: Cubic Functions
- Chapter 10: Probability
- Chapter 10.1: Relative Frequency Graphs
- Chapter 10.2: Probability Outcomes and Trials
- Chapter 10.3: Random Outcomes
- Chapter 10.4: Counting Techniques
- Chapter 10.5: Multiple-Stage Experiments
- Chapter 10.6: Expected Value
- Chapter 11: Introduction to Geometry
- Chapter 11.1: Parallel and Perpendicular
- Chapter 11.2: Finding the Midpoint
- Chapter 11.3: Squares, Right Triangles, and Areas
- Chapter 11.4: The Pythagorean Theorem
- Chapter 11.5: Operations with Roots
- Chapter 11.6: A Distance Formula
- Chapter 11.7: Similar Triangles and Trigonometric Functions
- Chapter 11.8: Trigonometry
Affine transformation T.
Tv = Av + v0 = linear transformation plus shift.
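A two-line NumPy illustration (the matrix A and shift v0 below are made-up examples):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])   # example linear part
v0 = np.array([1.0, -1.0])               # example shift

def T(v):
    # affine map: linear transformation plus shift
    return A @ v + v0

print(T(np.array([1.0, 1.0])))           # [3. 2.]
```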
Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1v1 + ... + cdvd. V has many bases; each basis gives unique c's. A vector space has many bases!
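A minimal NumPy sketch of recovering the unique c's, assuming the basis vectors sit in the columns of an example matrix B:

```python
import numpy as np

# Columns of B form an example basis for R^2
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([3.0, 2.0])

# The coefficients c are unique: solve B c = v
c = np.linalg.solve(B, v)
print(c)        # [1. 2.], so v = 1*b1 + 2*b2
print(B @ c)    # recovers v
```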
Cholesky factorization.
A = CC^T = (L√D)(L√D)^T for positive definite A.
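For instance, with NumPy (np.linalg.cholesky returns the lower-triangular factor; the matrix A is an arbitrary positive definite example):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])          # positive definite example

C = np.linalg.cholesky(A)           # lower triangular, A = C C^T
print(C)
print(np.allclose(A, C @ C.T))      # True
```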
Circulant matrix C.
Constant diagonals wrap around as in cyclic shift S. Every circulant is c0 I + c1 S + ... + c(n-1) S^(n-1). Cx = convolution c * x. Eigenvectors in the Fourier matrix F.
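A small NumPy check of both claims, using a made-up first column c and vector x:

```python
import numpy as np

c = np.array([2.0, 1.0, 0.0, 3.0])              # first column of C (example)
n = len(c)

# Column j of the circulant is c shifted cyclically down by j
C = np.column_stack([np.roll(c, j) for j in range(n)])

# C x equals the circular convolution c * x (computable with the FFT)
x = np.array([1.0, 2.0, 3.0, 4.0])
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
print(np.allclose(C @ x, conv))                 # True

# Columns of the Fourier matrix are eigenvectors; eigenvalues = fft(c)
k = 1
v = np.exp(2j * np.pi * k * np.arange(n) / n)
print(np.allclose(C @ v, np.fft.fft(c)[k] * v)) # True
```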
Complete solution x = xp + xn to Ax = b.
(Particular xp) + (xn in nullspace).
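A concrete example with a singular 2 by 2 system; the particular solution and nullspace vector below were read off by hand:

```python
import numpy as np

# Example singular system: the second equation is twice the first
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])

xp = np.array([3.0, 0.0])     # particular solution: A xp = b
xn = np.array([-2.0, 1.0])    # nullspace direction:  A xn = 0

# Every xp + t*xn solves A x = b
for t in (0.0, 1.0, -2.5):
    print(np.allclose(A @ (xp + t * xn), b))   # True each time
```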
Diagonal matrix D.
dij = 0 if i ≠ j. Block-diagonal: zero outside square blocks Dii.
Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^(-1) A S = Λ = eigenvalue matrix.
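A NumPy check on an example matrix with two different eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # eigenvalues 5 and 2 (distinct)

lam, S = np.linalg.eig(A)             # eigenvectors in the columns of S
Lambda = np.linalg.inv(S) @ A @ S     # should equal diag(lam)
print(np.allclose(Lambda, np.diag(lam)))   # True
```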
Dot product = Inner product x^T y = x1y1 + ... + xnyn.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)ij = (row i of A)^T (column j of B).
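In NumPy, on example vectors (np.vdot conjugates its first argument, matching the complex dot product):

```python
import numpy as np

x = np.array([1.0, 2.0, -1.0])
y = np.array([2.0, 0.0, 2.0])

print(x @ y)           # x^T y = 1*2 + 2*0 + (-1)*2 = 0, so x is perpendicular to y

# Complex dot product conjugates the first vector
z = np.array([1 + 1j, 2 + 0j])
w = np.array([2 + 0j, 1j])
print(np.vdot(z, w))   # (1 - 1j)*2 + 2*1j = (2+0j)
```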
Fibonacci numbers.
0, 1, 1, 2, 3, 5, ... satisfy Fn = F(n-1) + F(n-2) = (λ1^n - λ2^n)/(λ1 - λ2). Growth rate λ1 = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [[1, 1], [1, 0]].
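Both facts can be checked numerically; a short NumPy sketch:

```python
import numpy as np

F = np.array([[1, 1],
              [1, 0]])                       # the Fibonacci matrix

# Powers of F generate the sequence: (F^n)[0, 1] equals F_n
print(np.linalg.matrix_power(F, 10)[0, 1])   # 55 = F_10

# Growth rate: the largest eigenvalue is (1 + sqrt(5))/2
print(np.linalg.eigvals(F).max())            # 1.618...
```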
Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n-1)/2 edges between nodes. A tree has only n-1 edges and no closed loops.
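A tiny arithmetic check of the two edge counts, for an example n:

```python
# Edge counts for an example graph on n = 4 nodes
n = 4
print(n * (n - 1) // 2)   # complete graph: 6 edges
print(n - 1)              # tree: 3 edges
```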
Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.
Kronecker product (tensor product) A ⊗ B.
Blocks aij B, eigenvalues λp(A) λq(B).
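A NumPy sketch with two made-up 2 by 2 matrices, checking the eigenvalue products:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = np.array([[4.0, 0.0],
              [1.0, 5.0]])

K = np.kron(A, B)                # blocks a_ij * B, here 4 by 4

# Eigenvalues of A (x) B are all products lambda_p(A) * lambda_q(B)
products = np.sort([p * q for p in np.linalg.eigvals(A)
                          for q in np.linalg.eigvals(B)])
print(np.allclose(np.sort(np.linalg.eigvals(K)), products))   # True
```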
Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).
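In NumPy, on an example vector:

```python
import numpy as np

x = np.array([3.0, 4.0])
print(np.sqrt(x @ x))        # sqrt(x^T x) = 5.0
print(np.linalg.norm(x))     # same length, 5.0
```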
Matrix multiplication AB.
The i, j entry of AB is (row i of A) · (column j of B) = Σk aik bkj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k of A)(row k of B). All these equivalent definitions come from the rule that AB times x equals A times Bx.
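The entry, column, and columns-times-rows views can all be verified on a small made-up example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

AB = A @ B

# Entry view: (AB)_ij = sum_k a_ik b_kj
print(np.isclose(AB[0, 1], sum(A[0, k] * B[k, 1] for k in range(2))))  # True

# Column view: column j of AB = A times column j of B
print(np.allclose(AB[:, 1], A @ B[:, 1]))                              # True

# Columns-times-rows view: AB = sum of (column k of A)(row k of B)
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(2))
print(np.allclose(AB, outer_sum))                                      # True
```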
Multiplier ℓij.
The pivot row j is multiplied by ℓij and subtracted from row i to eliminate the i, j entry: ℓij = (entry to eliminate) / (jth pivot).
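One elimination step on a made-up 2 by 2 matrix (the pivot here is the 2 in row 1):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])

# Eliminate the 2,1 entry: multiplier l21 = (entry to eliminate) / (first pivot)
l21 = A[1, 0] / A[0, 0]        # 6 / 2 = 3
A[1, :] -= l21 * A[0, :]       # subtract 3 times the pivot row
print(A)                       # [[2. 1.] [0. 5.]]
```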
Pivot d.
The diagonal entry (first nonzero) at the time when a row is used in elimination.
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A form a basis for S then P = A (A^T A)^(-1) A^T.
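A NumPy sketch, with an example 3 by 2 matrix A whose columns span S:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                    # columns = basis for S

P = A @ np.linalg.inv(A.T @ A) @ A.T          # projection matrix onto S

b = np.array([6.0, 0.0, 0.0])
p = P @ b                                     # closest point to b in S
e = b - p                                     # error

print(np.allclose(P @ P, P), np.allclose(P, P.T))   # True True
print(np.allclose(A.T @ e, 0))                      # e is perpendicular to S
```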
Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0 1] for rand and standard normal distribution for randn.
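For readers outside MATLAB, a NumPy equivalent using the modern Generator API:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.random((3, 3))           # uniform on [0, 1), like MATLAB's rand(3)
N = rng.standard_normal((3, 3))  # standard normal, like MATLAB's randn(3)
print(U, N, sep="\n")
```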
Rank one matrix A = u v^T ≠ 0.
Column and row spaces = lines cu and cv.
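A short NumPy check on example vectors u and v:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])

A = np.outer(u, v)                      # rank one matrix u v^T
print(np.linalg.matrix_rank(A))         # 1

# Every column of A is a multiple of u (column space = line through u)
print(np.allclose(A[:, 1], v[1] * u))   # True
```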
Trace of A.
Sum of the diagonal entries = sum of the eigenvalues of A. Tr AB = Tr BA.
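All three identities on a pair of made-up matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [5.0, 2.0]])

print(np.trace(A))                                             # 5.0 = 1 + 4
print(np.isclose(np.trace(A), np.sum(np.linalg.eigvals(A))))   # True
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))            # True
```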