Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
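A small NumPy sketch of this rule (the 4×4 matrices and the 2×2 partition are made up for illustration):

```python
import numpy as np

# Two 4x4 matrices, each cut into four 2x2 blocks.
A = np.arange(16, dtype=float).reshape(4, 4)
B = 2.0 * np.eye(4)

A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B12 = B[:2, :2], B[:2, 2:]
B21, B22 = B[2:, :2], B[2:, 2:]

# Block multiplication: each block of AB is a sum of block products,
# exactly as in ordinary entry-by-entry multiplication.
AB_block = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])

ok = np.allclose(AB_block, A @ B)
```

The block shapes permit multiplication here because every inner dimension (2) matches.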
Determinant |A| = det(A).
Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A||B| and |A^T| = |A|.
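These four properties can be checked numerically; a NumPy sketch with made-up 2×2 matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[0.0, 1.0], [4.0, 2.0]])

# |AB| = |A||B| and |A^T| = |A|.
prod_rule = np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
transpose_rule = np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# A row exchange reverses the sign of the determinant.
A_swapped = A[[1, 0], :]
sign_rule = np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A))

# A singular matrix (row 2 = 2 * row 1) has determinant 0.
singular = np.array([[1.0, 2.0], [2.0, 4.0]])
singular_rule = np.isclose(np.linalg.det(singular), 0.0)
```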
Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
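Elimination itself produces L and U; a minimal sketch in NumPy (the matrix is made up and chosen so no row exchanges are needed):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

n = A.shape[0]
U = A.copy()
L = np.eye(n)

# Gaussian elimination: record each multiplier l_ij in L,
# subtract l_ij * (row j) from row i of U.
for j in range(n):
    for i in range(j + 1, n):
        L[i, j] = U[i, j] / U[j, j]
        U[i, :] -= L[i, j] * U[j, :]

ok = np.allclose(L @ U, A) and np.allclose(U, np.triu(U))
```

The multipliers stored below the diagonal of L are exactly the row operations that bring U back to A.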
Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and −).
Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^(j−1)b. Numerical methods approximate A^(−1)b by xj with residual b − Axj in this subspace. A good basis for Kj requires only multiplication by A at each step.
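A toy NumPy sketch (the 2×2 matrix is made up, and j = 2 makes K2 all of R^2, so the Krylov approximation is exact here):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 0.0])

# Build the basis b, Ab, ..., A^(j-1) b: only one matvec per step.
j = 2
basis = [b]
for _ in range(j - 1):
    basis.append(A @ basis[-1])
K = np.column_stack(basis)

# Seek x_j = K c in the subspace minimizing the residual b - A x_j.
c, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
x_j = K @ c
residual = b - A @ x_j

# Since K_2(A, b) spans R^2, x_j matches the exact solution.
ok = np.allclose(x_j, np.linalg.solve(A, b))
```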
Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).
Linearly dependent v1, ..., vn.
A combination other than all ci = 0 gives Σ ci vi = 0.
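For instance (made-up vectors in NumPy), if v3 = v1 + 2v2 then the nonzero combination with coefficients (1, 2, −1) gives the zero vector:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2          # built to be dependent on v1, v2

# Rank below the number of vectors signals dependence.
V = np.column_stack([v1, v2, v3])
dependent = np.linalg.matrix_rank(V) < 3

# The nonzero combination 1*v1 + 2*v2 - 1*v3 equals the zero vector.
combo = 1 * v1 + 2 * v2 - 1 * v3
```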
Norm ||A||.
The "ℓ^2 norm" of A is the maximum ratio ||Ax||/||x|| = σmax. Then ||Ax|| ≤ ||A|| ||x||, ||AB|| ≤ ||A|| ||B||, and ||A + B|| ≤ ||A|| + ||B||. The Frobenius norm satisfies ||A||F^2 = Σ Σ aij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |aij|.
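A NumPy sketch checking these norms on a made-up 2×2 matrix:

```python
import numpy as np

A = np.array([[3.0, 0.0], [4.0, 5.0]])
x = np.array([1.0, 2.0])

# l2 norm of a matrix = largest singular value sigma_max.
l2_norm = np.linalg.norm(A, 2)
sigma_max = np.linalg.svd(A, compute_uv=False)[0]

# ||Ax|| <= ||A|| ||x|| (small tolerance for floating point).
bound_ok = np.linalg.norm(A @ x) <= l2_norm * np.linalg.norm(x) + 1e-12

# Frobenius norm squared = sum of all a_ij^2: 9 + 16 + 25 = 50.
fro_sq = np.linalg.norm(A, 'fro') ** 2

# l1 and l-infinity norms: largest column sum and row sum of |a_ij|.
l1 = np.abs(A).sum(axis=0).max()    # columns sum to 7 and 5
linf = np.abs(A).sum(axis=1).max()  # rows sum to 3 and 9
```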
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
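One way to locate the pivot columns numerically (a sketch with a made-up matrix; a column is a pivot column exactly when it raises the rank of the columns before it):

```python
import numpy as np

# Column 3 = column 1 + column 2, so it carries no pivot.
A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 2.0, 3.0]])

pivot_cols = []
r = 0
for j in range(A.shape[1]):
    new_r = np.linalg.matrix_rank(A[:, :j + 1])
    if new_r > r:           # this column is not a combination of earlier ones
        pivot_cols.append(j)
        r = new_r

# The pivot columns span the column space: same rank as all of A.
basis_ok = np.linalg.matrix_rank(A[:, pivot_cols]) == np.linalg.matrix_rank(A)
```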
Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
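Q and H can be read off from the SVD, since A = UΣV^T = (UV^T)(VΣV^T); a NumPy sketch with a made-up matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 3.0]])

U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                   # orthogonal factor
H = Vt.T @ np.diag(s) @ Vt   # symmetric positive semidefinite factor

ok = (np.allclose(Q @ H, A)
      and np.allclose(Q.T @ Q, np.eye(2))       # Q is orthogonal
      and np.all(np.linalg.eigvalsh(H) >= -1e-12))  # H is PSD
```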
Rayleigh quotient q(x) = x^T Ax / x^T x for symmetric A: λmin ≤ q(x) ≤ λmax.
Those extremes are reached at the eigenvectors x for λmin(A) and λmax(A).
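A quick NumPy check on a made-up symmetric matrix (eigenvalues 1 and 3):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)   # ascending eigenvalues

def q(x):
    """Rayleigh quotient x^T A x / x^T x."""
    return (x @ A @ x) / (x @ x)

# q(x) stays between lambda_min and lambda_max for any nonzero x...
x = np.array([1.0, -2.0])
in_range = eigvals[0] - 1e-12 <= q(x) <= eigvals[-1] + 1e-12

# ...and the extremes are reached at the eigenvectors.
at_min = np.isclose(q(eigvecs[:, 0]), eigvals[0])
at_max = np.isclose(q(eigvecs[:, -1]), eigvals[-1])
```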
Row picture of Ax = b.
Each equation gives a plane in R^n; the planes intersect at x.
Schur complement S = D − C A^(−1) B.
Appears in block elimination on [A B; C D].
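A NumPy sketch of that block elimination (the block sizes and entries are made up): subtracting C A^(−1) times the first block row leaves S in the (2,2) block.

```python
import numpy as np

A = np.array([[2.0]])
B = np.array([[1.0, 0.0]])
C = np.array([[4.0], [0.0]])
D = np.array([[3.0, 1.0], [1.0, 3.0]])

M = np.block([[A, B], [C, D]])

# Schur complement S = D - C A^{-1} B.
S = D - C @ np.linalg.inv(A) @ B

# Block elimination: multiply on the left by [[I, 0], [-C A^{-1}, I]].
E = np.block([[np.eye(1), np.zeros((1, 2))],
              [-C @ np.linalg.inv(A), np.eye(2)]])
eliminated = E @ M

# The (2,2) block of the eliminated matrix is exactly S.
ok = np.allclose(eliminated[1:, 1:], S)
```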
Similar matrices A and B.
Every B = M^(−1) A M has the same eigenvalues as A.
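A one-line similarity check in NumPy (A and M are made up; A is triangular with eigenvalues 2 and 3):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
M = np.array([[1.0, 1.0], [0.0, 1.0]])   # any invertible M works

B = np.linalg.inv(M) @ A @ M             # B = M^{-1} A M

# Similar matrices share the same eigenvalues.
same_eigs = np.allclose(np.sort(np.linalg.eigvals(A)),
                        np.sort(np.linalg.eigvals(B)))
```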
Special solutions to As = O.
One free variable is si = 1, other free variables = 0.
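For a made-up matrix already in reduced form (pivots in columns 1 and 3, free variables x2 and x4), the two special solutions can be written down and checked in NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 3.0],
              [0.0, 0.0, 1.0, 4.0]])

# Special solution for each free variable: set it to 1, set the other
# free variable to 0, and back-solve for the pivot variables.
s1 = np.array([-2.0, 1.0, 0.0, 0.0])   # free variable x2 = 1
s2 = np.array([-3.0, 0.0, -4.0, 1.0])  # free variable x4 = 1

ok = np.allclose(A @ s1, 0) and np.allclose(A @ s2, 0)
```

Every solution of As = 0 is a combination of these special solutions.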
Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.
Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.
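Both factorizations, and the matching signs, can be verified on a made-up indefinite 2×2 matrix (NumPy sketch):

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, -3.0]])   # symmetric, indefinite

# A = LDL^T from elimination (no row exchanges needed here).
l21 = A[1, 0] / A[0, 0]
L = np.array([[1.0, 0.0], [l21, 1.0]])
D = np.diag([A[0, 0], A[1, 1] - l21 * A[0, 1]])   # pivots 4 and -4

# A = Q Lambda Q^T from the spectral theorem.
lam, Q = np.linalg.eigh(A)

ldl_ok = np.allclose(L @ D @ L.T, A)
spectral_ok = np.allclose(Q @ np.diag(lam) @ Q.T, A)

# Signs in Lambda match signs in D: one +, one - (law of inertia).
signs_ok = sorted(np.sign(np.diag(D))) == sorted(np.sign(lam))
```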
Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms, ||A + B|| ≤ ||A|| + ||B||.
Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.