- 4.3.1: Name the slope and one point on the line that each point-slope equa...
- 4.3.2: Write an equation in point-slope form for a line, given its slope a...
- 4.3.3: A line passes through the points (2, 1) and (5, 13). a. Find the sl...
- 4.3.4: APPLICATION This table shows a linear relationship between actual t...
- 4.3.5: Play the BOWLING program at least four times. [ See Calculator Note...
- 4.3.6: The graph at right is made up of linear segments a, b, and c. Write...
- 4.3.7: A quadrilateral is a polygon with four sides. Quadrilateral ABCD is...
- 4.3.8: APPLICATION The table shows postal rates for first-class U.S. mail ...
- 4.3.9: APPLICATION The table below shows fat grams and calories for some b...
- 4.3.10: APPLICATION This table shows the amount of trash produced in the Un...
- 4.3.12: Find the slope of the line through the first two points given. Assu...
- 4.3.13: Write the equation represented by this balance. Then solve the equa...
Solutions for Chapter 4.3: Point-Slope Form of a Linear Equation
Full solutions for Discovering Algebra: An Investigative Approach | 2nd Edition
Dot product = Inner product x^T y = x1 y1 + ... + xn yn.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)ij = (row i of A)^T (column j of B).
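These rules can be checked directly; a minimal sketch, assuming numpy is available:

```python
import numpy as np

# x^T y = x1 y1 + ... + xn yn; perpendicular vectors give 0
x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, -1.0, 0.0])
dot = x @ y
assert dot == 0.0  # x and y are perpendicular

# (AB)ij = (row i of A)^T (column j of B)
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
assert np.isclose((A @ B)[0, 1], A[0, :] @ B[:, 1])
```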
Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
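The A = LU factorization can be verified numerically; a sketch assuming scipy is available (note scipy's lu puts the permutation on the other side, A = PLU):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0], [6.0, 8.0]])
P, L, U = lu(A)  # scipy's convention: A = P L U
assert np.allclose(P @ L @ U, A)
# L is lower triangular with unit diagonal (it stores the multipliers);
# U is the upper triangular echelon factor
assert np.allclose(np.tril(L), L) and np.allclose(L.diagonal(), 1.0)
assert np.allclose(np.triu(U), U)
```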
Exponential e^At = I + At + (At)^2/2! + ...
has derivative Ae^At; e^At u(0) solves u' = Au.
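A sketch comparing the truncated power series with scipy's expm, and using e^At u(0) to solve u' = Au (scipy assumed available; A here generates a rotation):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
u0 = np.array([1.0, 0.0])
t = 0.5

# Truncated series I + At + (At)^2/2! + ...
S = np.zeros_like(A)
term = np.eye(2)
for k in range(1, 25):
    S += term
    term = term @ (A * t) / k
assert np.allclose(S, expm(A * t))

# u(t) = e^{At} u(0) solves u' = Au; this A rotates the plane
u = expm(A * t) @ u0
assert np.allclose(u, [np.cos(t), -np.sin(t)])
```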
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicular from Ax = 0, with dimensions r and n - r). Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.
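A numerical illustration of this orthogonality, sketched with scipy.linalg.null_space (assumed available):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.0, 1.0, 1.0]])
N = null_space(A)                 # orthonormal basis for N(A)
r = np.linalg.matrix_rank(A)      # here r = 2, n = 3
assert N.shape[1] == A.shape[1] - r
# Ax = 0 says each row of A is perpendicular to each nullspace vector
assert np.allclose(A @ N, 0)
```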
Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^-1].
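The [A I] → [I A^-1] reduction can be sketched directly; gauss_jordan_inverse is an illustrative helper (partial pivoting added for stability, A assumed invertible):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A | I] until the left block is I; the right block is A^-1."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # partial pivoting: bring the largest remaining pivot up
        p = col + np.argmax(np.abs(M[col:, col]))
        M[[col, p]] = M[[p, col]]
        M[col] /= M[col, col]                 # scale pivot row to 1
        for row in range(n):
            if row != col:                    # clear the rest of the column
                M[row] -= M[row, col] * M[col]
    return M[:, n:]

A = np.array([[2.0, 1.0], [5.0, 3.0]])        # det A = 1
Ainv = gauss_jordan_inverse(A)
assert np.allclose(Ainv, [[3.0, -1.0], [-5.0, 2.0]])
assert np.allclose(A @ Ainv, np.eye(2))
```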
Hankel matrix H.
Constant along each antidiagonal; hij depends on i + j.
Inverse matrix A-I.
Square matrix with A^-1 A = I and A A^-1 = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)ij = Cji / det A.
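The inverse rules for AB and A^T can be confirmed numerically; a minimal sketch assuming numpy:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])   # det A = -1, invertible
B = np.array([[2.0, 0.0], [1.0, 1.0]])   # det B = 2, invertible

# (AB)^-1 = B^-1 A^-1 (note the reversed order)
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))
# (A^T)^-1 = (A^-1)^T
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)
```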
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: potential differences (voltage drops) add to zero around any closed loop.
Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^(j-1) b. Numerical methods approximate A^-1 b by xj with residual b - Axj in this subspace. A good basis for Kj requires only multiplication by A at each step.
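A sketch of building an orthonormal basis for Kj(A, b) by Gram-Schmidt; krylov_basis is an illustrative helper, and each step needs only one multiplication by A:

```python
import numpy as np

def krylov_basis(A, b, j):
    """Orthonormal basis Q for span{b, Ab, ..., A^(j-1) b}."""
    n = len(b)
    Q = np.zeros((n, j))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(1, j):
        v = A @ Q[:, k - 1]                   # one mat-vec with A per step
        v -= Q[:, :k] @ (Q[:, :k].T @ v)      # subtract projections
        Q[:, k] = v / np.linalg.norm(v)
    return Q

A = np.diag([1.0, 2.0, 3.0, 4.0])
b = np.ones(4)
Q = krylov_basis(A, b, 3)
assert np.allclose(Q.T @ Q, np.eye(3))        # orthonormal columns
```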
Linear combination cv + dw or ∑ cj vj.
Vector addition and scalar multiplication.
Network.
A directed graph that has constants c1, ..., cm associated with the edges.
Nullspace N (A)
= All solutions to Ax = O. Dimension n - r = (# columns) - rank.
Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = 1 or -1) based on the number of row exchanges to reach I.
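A quick numerical check of these properties (a sketch, assuming numpy; the chosen order is an even permutation):

```python
import numpy as np

order = [2, 0, 1]            # rows of I in this order
P = np.eye(3)[order]
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

# PA puts the rows of A in the same order
assert np.allclose(P @ A, A[order])
# det P = +1: [2, 0, 1] needs an even number of row exchanges
assert np.isclose(np.linalg.det(P), 1.0)
```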
Singular matrix A.
A square matrix that has no inverse: det(A) = 0.
Skew-symmetric matrix K.
The transpose is -K, since Kij = -Kji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^Kt is an orthogonal matrix.
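These three properties can be verified on a small example; a sketch assuming numpy and scipy:

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[0.0, 3.0], [-3.0, 0.0]])
assert np.allclose(K.T, -K)               # skew-symmetric

# Eigenvalues are pure imaginary (here +3i and -3i)
lam = np.linalg.eigvals(K)
assert np.allclose(lam.real, 0)

# e^{Kt} is orthogonal: Q^T Q = I
Q = expm(K * 0.7)
assert np.allclose(Q.T @ Q, np.eye(2))
```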
Spanning set v1, ..., vm for V.
Combinations of v1, ..., vm fill the space. The columns of A span C(A)!
Standard basis for Rn.
Columns of the n by n identity matrix (written i, j, k in R^3).
Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.