Solutions for Chapter 9.1: Solving Quadratic Equations
Full solutions for Discovering Algebra: An Investigative Approach | 2nd Edition
Condition number cond(A) = c(A) = ||A|| ||A^-1|| = σ_max/σ_min.
In Ax = b, the relative change ||δx|| / ||x|| is less than cond(A) times the relative change ||δb|| / ||b||. Condition numbers measure the sensitivity of the output to change in the input.
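The condition number and the perturbation bound above can be checked numerically. This is a small sketch using NumPy (assumed available); the matrix entries and the perturbation δb are arbitrary illustrative values:

```python
import numpy as np

# A mildly ill-conditioned 2x2 matrix (hypothetical example values)
A = np.array([[1.0, 2.0],
              [2.0, 4.001]])

# cond(A) = sigma_max / sigma_min in the 2-norm
sigmas = np.linalg.svd(A, compute_uv=False)
cond_from_svd = sigmas.max() / sigmas.min()
cond_builtin = np.linalg.cond(A)          # same quantity via NumPy

# Sensitivity: perturb b slightly and compare relative changes in x and b
b = np.array([1.0, 1.0])
db = np.array([0.0, 1e-6])
x = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x
rel_x = np.linalg.norm(dx) / np.linalg.norm(x)
rel_b = np.linalg.norm(db) / np.linalg.norm(b)
# The glossary's bound: rel_x <= cond(A) * rel_b
```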
Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det B_j / det A.
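Cramer's Rule translates directly into code. This is a minimal sketch, assuming NumPy is available; the 2x2 system is an arbitrary example:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's Rule: x_j = det(B_j) / det(A),
    where B_j is A with column j replaced by b."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        B_j = A.copy()
        B_j[:, j] = b                 # replace column j by b
        x[j] = np.linalg.det(B_j) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = cramer_solve(A, b)                # matches np.linalg.solve(A, b)
```

Cramer's Rule is a formula, not a practical algorithm: computing n + 1 determinants is far slower than elimination for large n.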
Determinant |A| = det(A).
Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A| |B| and |A^T| = |A|.
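These determinant properties can be verified on small matrices. A sketch assuming NumPy, with arbitrary example entries:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, 6.0]])

det_A = np.linalg.det(A)                 # 1*4 - 2*3 = -2
det_product = np.linalg.det(A @ B)       # equals det(A) * det(B)

# A row exchange reverses the sign of the determinant
A_swapped = A[[1, 0], :]
det_swapped = np.linalg.det(A_swapped)

# A singular matrix (dependent rows) has determinant 0
S = np.array([[1.0, 2.0], [2.0, 4.0]])
det_singular = np.linalg.det(S)
```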
Distributive Law A(B + C) = AB + AC.
Add then multiply, or multiply then add.
Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers l_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
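The factorization A = LU records the elimination multipliers l_ij in L. Below is a minimal sketch of elimination without row exchanges (so it assumes no zero pivots appear), using NumPy and an arbitrary 3x3 example:

```python
import numpy as np

def lu_no_pivot(A):
    """Gaussian elimination to upper triangular U, recording the
    multipliers l_ij in L so that A = LU (assumes no row exchanges)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]    # multiplier l_ij
            U[i, :] -= L[i, j] * U[j, :]   # subtract l_ij times pivot row
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)   # L lower triangular with unit diagonal, A = LU
```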
Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use A^H (the conjugate transpose) for complex A.
Free variable x_i.
Column i has no pivot in elimination. We can give the n - r free variables any values, then Ax = b determines the r pivot variables (if solvable!).
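The free-variable idea can be seen in a small underdetermined system. A sketch assuming NumPy; the system below is a hypothetical example where columns 1 and 3 have pivots and x_2 is free:

```python
import numpy as np

# 2 equations, 3 unknowns: pivots in columns 1 and 3, x_2 is free
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
b = np.array([4.0, 6.0])

# Give the free variable any value, then back-substitute for the pivots
x2 = 5.0                                   # arbitrary choice
x3 = b[1] / A[1, 2]                        # 6 / 3 = 2
x1 = b[0] - A[0, 1] * x2 - A[0, 2] * x3    # 4 - 10 - 2 = -8
x = np.array([x1, x2, x3])
residual = A @ x - b       # zero: every free-variable choice gives a solution
```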
Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.
Independent vectors v_1, ..., v_k.
No combination c_1 v_1 + ... + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
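Independence of the columns of A is equivalent to rank(A) = number of columns. A sketch assuming NumPy, with arbitrary example vectors:

```python
import numpy as np

# Columns of each matrix are the vectors v_1, v_2, v_3 being tested
independent = np.array([[1.0, 0.0, 1.0],
                        [0.0, 1.0, 1.0],
                        [0.0, 0.0, 1.0]])
dependent = np.array([[1.0, 0.0, 1.0],
                      [0.0, 1.0, 1.0],
                      [0.0, 0.0, 0.0]])   # third column = first + second

# Columns are independent exactly when rank equals the number of columns,
# i.e. Ax = 0 has only the solution x = 0.
rank_ind = np.linalg.matrix_rank(independent)   # 3 -> independent
rank_dep = np.linalg.matrix_rank(dependent)     # 2 -> dependent
```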
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.
Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A - λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
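A defective matrix shows AM > GM. This sketch (assuming NumPy) uses the standard 2x2 example where λ = 1 is a double root but the eigenspace is only one-dimensional:

```python
import numpy as np

# Defective matrix: eigenvalue 1 appears twice (AM = 2)
# but has only one independent eigenvector (GM = 1)
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues = np.linalg.eigvals(A)      # [1, 1]: algebraic multiplicity 2

# GM = dimension of the eigenspace = n - rank(A - lambda*I)
gm = 2 - np.linalg.matrix_rank(A - 1.0 * np.eye(2))   # = 1
```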
Similar matrices A and B.
Every B = M^-1 A M has the same eigenvalues as A.
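Eigenvalue preservation under similarity is easy to check numerically. A sketch assuming NumPy, with an arbitrary A and invertible M:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])       # triangular, so eigenvalues are 2 and 3
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # any invertible M

B = np.linalg.inv(M) @ A @ M     # B = M^-1 A M is similar to A

eig_A = np.sort(np.linalg.eigvals(A).real)
eig_B = np.sort(np.linalg.eigvals(B).real)   # same eigenvalues as A
```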
Singular Value Decomposition (SVD).
A = UΣV^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with Av_i = σ_i u_i and singular value σ_i > 0. The last columns are orthonormal bases of the nullspaces.
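The defining relations A = UΣV^T and Av_i = σ_i u_i can be verified directly. A sketch assuming NumPy and an arbitrary full-rank example:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, sigma, Vt = np.linalg.svd(A)   # A = U * diag(sigma) * V^T

# Check A v_i = sigma_i u_i for each singular value
V = Vt.T
av_relation_holds = all(
    np.allclose(A @ V[:, i], sigma[i] * U[:, i]) for i in range(2)
)

# Reconstruct A from the three factors
A_rebuilt = U @ np.diag(sigma) @ Vt
```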
Skew-symmetric matrix K.
The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^{Kt} is an orthogonal matrix.
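The skew-symmetry and pure-imaginary-eigenvalue properties can be checked on a small example. A sketch assuming NumPy; the matrix is an arbitrary 2x2 skew-symmetric choice:

```python
import numpy as np

K = np.array([[0.0, 2.0],
              [-2.0, 0.0]])      # K^T = -K

is_skew = np.allclose(K.T, -K)
eigenvalues = np.linalg.eigvals(K)            # +-2i, pure imaginary
pure_imaginary = np.allclose(eigenvalues.real, 0.0)
```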
Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.
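The spectral factorization A = QΛQ^T can be computed with a symmetric eigensolver. A sketch assuming NumPy and an arbitrary symmetric example:

```python
import numpy as np

# Real symmetric matrix: real eigenvalues, orthonormal eigenvectors
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # eigenvalues 1 and 3

lam, Q = np.linalg.eigh(A)       # eigh is for symmetric/Hermitian matrices

# Q has orthonormal columns and A = Q * diag(lam) * Q^T
orthonormal = np.allclose(Q.T @ Q, np.eye(2))
A_rebuilt = Q @ np.diag(lam) @ Q.T
```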
Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms ||A + B|| ≤ ||A|| + ||B||.
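A quick numerical check of the vector form, assuming NumPy and arbitrary example vectors:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([1.0, -2.0])

lhs = np.linalg.norm(u + v)                     # ||u + v||
rhs = np.linalg.norm(u) + np.linalg.norm(v)     # ||u|| + ||v||
inequality_holds = lhs <= rhs
```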
Vector addition.
v + w = (v_1 + w_1, ..., v_n + w_n) = diagonal of parallelogram.
Vector v in R^n.
Sequence of n real numbers v = (v_1, ..., v_n) = point in R^n.