 4.3.1: Name the slope and one point on the line that each point-slope equa...
 4.3.2: Write an equation in point-slope form for a line, given its slope a...
 4.3.3: A line passes through the points (2, 1) and (5, 13). a. Find the sl...
 4.3.4: APPLICATION This table shows a linear relationship between actual t...
 4.3.5: Play the BOWLING program at least four times. [ See Calculator Note...
 4.3.6: The graph at right is made up of linear segments a, b, and c. Write...
 4.3.7: A quadrilateral is a polygon with four sides. Quadrilateral ABCD is...
 4.3.8: APPLICATION The table shows postal rates for first-class U.S. mail ...
 4.3.9: APPLICATION The table below shows fat grams and calories for some b...
 4.3.10: APPLICATION This table shows the amount of trash produced in the Un...
 4.3.12: Find the slope of the line through the first two points given. Assu...
 4.3.13: Write the equation represented by this balance. Then solve the equa...
Solutions for Chapter 4.3: Point-Slope Form of a Linear Equation
Full solutions for Discovering Algebra: An Investigative Approach, 2nd Edition
ISBN: 9781559537636
Chapter 4.3: Point-Slope Form of a Linear Equation includes 12 full step-by-step solutions.
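Problem 4.3.3 above gives the points (2, 1) and (5, 13) and asks for the slope and a point-slope equation. A quick Python sketch of that computation (the helper name `y` is ours, chosen for illustration):

```python
# Slope and point-slope form for the line through (2, 1) and (5, 13),
# as in problem 4.3.3 above.
x1, y1 = 2, 1
x2, y2 = 5, 13

m = (y2 - y1) / (x2 - x1)   # slope = rise / run = 12 / 3

# Point-slope form through (x1, y1): y = y1 + m*(x - x1)
def y(x):
    return y1 + m * (x - x1)

print(m)     # slope of the line
print(y(5))  # should recover the second point's y-value, 13
```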

Dot product = Inner product x^T y = x_1 y_1 + ... + x_n y_n.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)_ij = (row i of A) · (column j of B).
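The real dot-product formula above is easy to check in plain Python (the `dot` helper below is illustrative, not from the text):

```python
# Dot product x^T y = x_1*y_1 + ... + x_n*y_n
def dot(x, y):
    assert len(x) == len(y)
    return sum(xi * yi for xi, yi in zip(x, y))

x = [1, 2, 3]
y = [4, -2, 0]
print(dot(x, y))  # 1*4 + 2*(-2) + 3*0 = 0, so x and y are perpendicular
```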

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓ_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
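A minimal sketch of elimination without row exchanges, recording each multiplier ℓ_ij in L so that A = LU afterward (the function name and the 2 by 2 example are ours, and all pivots are assumed nonzero):

```python
import numpy as np

def lu_no_exchanges(A):
    """Elimination A -> upper triangular U, multipliers l_ij stored in L.
    Assumes no row exchanges are needed (every pivot is nonzero)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]    # multiplier l_ij
            U[i, :] -= L[i, j] * U[j, :]   # row operation: clear entry below pivot
    return L, U

A = np.array([[2.0, 1.0], [6.0, 8.0]])
L, U = lu_no_exchanges(A)
# L holds the multiplier l_21 = 3; L @ U brings U back to A
```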

Exponential e^{At} = I + At + (At)^2/2! + ...
has derivative Ae^{At}; e^{At}u(0) solves u' = Au.
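The series definition above can be truncated numerically. A sketch under the assumption that 30 terms suffice for convergence (function name ours), checked on a diagonal A where e^{At} is just the scalar exponentials of the diagonal entries:

```python
import numpy as np

def expm_series(A, t=1.0, terms=30):
    """Matrix exponential e^{At} = I + At + (At)^2/2! + ... (truncated series)."""
    term = np.eye(A.shape[0])   # current term (At)^k / k!, starting at k = 0
    total = term.copy()
    At = A * t
    for k in range(1, terms):
        term = term @ At / k    # build (At)^k / k! from the previous term
        total += term
    return total

# For a diagonal A, e^{At} is diagonal with entries e^{a_i t}
A = np.diag([1.0, -2.0])
E = expm_series(A, t=0.5)
```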

Factorization
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.

Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicularity comes from Ax = 0; their dimensions are n - r and r). Applied to A^T, the column space C(A) is the orthogonal complement of N(A^T) in R^m.

Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^{-1}].
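A sketch of Gauss-Jordan inversion by row operations on the block matrix [A I], assuming nonzero pivots so no row exchanges are needed (function name and example matrix are ours):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row operations on [A I] until it reads [I A^{-1}].
    Assumes A is square and invertible with nonzero pivots (no exchanges)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # the block matrix [A I]
    for j in range(n):
        M[j] /= M[j, j]                  # scale pivot row so the pivot is 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]   # clear column j above and below the pivot
    return M[:, n:]                      # right half is now A^{-1}

A = np.array([[2.0, 1.0], [1.0, 1.0]])
Ainv = gauss_jordan_inverse(A)
```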

Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.

Inverse matrix A^{-1}.
Square matrix with A^{-1}A = I and AA^{-1} = I. No inverse if det A = 0 (equivalently, rank(A) < n, or Ax = 0 for some nonzero vector x). The inverses of AB and A^T are B^{-1}A^{-1} and (A^{-1})^T. Cofactor formula: (A^{-1})_ij = C_ji / det A.
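The cofactor formula (A^{-1})_ij = C_ji / det A can be checked on a 2 by 2 example (the matrix below is chosen for illustration):

```python
import numpy as np

A = np.array([[4.0, 7.0], [2.0, 6.0]])
detA = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]  # det A = 24 - 14 = 10, nonzero

# Cofactor matrix C for 2x2: C_11 = a22, C_12 = -a21, C_21 = -a12, C_22 = a11
C = np.array([[ A[1, 1], -A[1, 0]],
              [-A[0, 1],  A[0, 0]]])

Ainv = C.T / detA  # the transpose gives (A^{-1})_ij = C_ji / det A
```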

Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^{j-1}b. Numerical methods approximate A^{-1}b by x_j with residual b - Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
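A sketch of the Krylov idea (function name and example are ours): build the basis b, Ab, ..., A^{j-1}b and pick x_j in the subspace minimizing the residual b - Ax_j. For a 2 by 2 matrix, K_2 already fills R^2, so the approximation is exact:

```python
import numpy as np

def krylov_basis(A, b, j):
    """Columns b, Ab, ..., A^{j-1} b spanning the Krylov subspace K_j(A, b)."""
    cols = [b]
    for _ in range(j - 1):
        cols.append(A @ cols[-1])   # each new column needs one multiplication by A
    return np.column_stack(cols)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
K = krylov_basis(A, b, 2)

# Approximate A^{-1} b by x = K c, choosing c to minimize ||b - A K c||
c, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
x = K @ c
```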

Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.

Network.
A directed graph that has constants c_1, ..., c_m associated with the edges.

Nullspace N(A)
= all solutions to Ax = 0. Dimension n - r = (# columns) - rank.

Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = +1 or -1) according to the number of row exchanges needed to reach I.
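Constructing P from a chosen row order makes these properties concrete (the order [2, 0, 1] below is an arbitrary example; it uses two row exchanges, so P is even):

```python
import numpy as np

# A permutation matrix P has the rows of I in a chosen order.
order = [2, 0, 1]          # row order: third, first, second
P = np.eye(3)[order]

A = np.arange(9.0).reshape(3, 3)
PA = P @ A                 # PA puts the rows of A in that same order

# det P is +1 or -1; two exchanges reach I from this P, so det P = +1
detP = np.linalg.det(P)
```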

Singular matrix A.
A square matrix that has no inverse: det(A) = 0.

Skew-symmetric matrix K.
The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^{Kt} is an orthogonal matrix.

Spanning set.
Combinations of v_1, ..., v_m fill the space. The columns of A span C(A)!

Standard basis for Rn.
Columns of the n by n identity matrix (written i, j, k in R^3).

Symmetric factorizations A = LDL^T and A = QΛQ^T.
The signs of the eigenvalues in Λ match the signs of the pivots in D.