8.R.1RQ: a) What is a recurrence relation? b) Find a recurren...
8.R.2RQ: Explain how the Fibonacci numbers are used to solve Fibonacci’s pro...
8.R.3RQ: a) Find a recurrence relation for the number of steps needed to sol...
8.R.4RQ: a) Explain how to find a recurrence relation for the number of bit ...
8.R.5RQ: a) What is dynamic programming and how are recurrence relations use...
8.R.6RQ: Define a linear homogeneous recurrence relation of degree k.
8.R.7RQ: a) Explain how to solve linear homogeneous recurrence relations of ...
8.R.8RQ: a) Explain how to find f(b^k), where k is a positive integer, if f(n)...
8.R.9RQ: a) Derive a divide-and-conquer recurrence relation for the number o...
8.R.10RQ: a) Give a formula for the number of elements in the union of three ...
8.R.11RQ: a) Give a formula for the number of elements in the union of four s...
8.R.12RQ: a) State the principle of inclusion-exclusion. b) Ou...
8.R.13RQ: Explain how the principle of inclusion-exclusion can be used to cou...
8.R.14RQ: a) How can you count the number of ways to assign m jobs to n emplo...
8.R.15RQ: Explain how the inclusion-exclusion principle can be used to count ...
8.R.16RQ: a) Define a derangement. b) Why is counting the numb...
Solutions for Chapter 8.R: Discrete Mathematics and Its Applications, 7th Edition
ISBN: 9780073383095
Chapter 8.R includes 16 full step-by-step solutions for the textbook Discrete Mathematics and Its Applications, edition 7.

Affine transformation
T(v) = Av + v₀ = linear transformation plus shift.

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.

Diagonalization
Λ = S⁻¹AS. Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All Aᵏ = SΛᵏS⁻¹.
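A quick numerical sketch of this factorization (an illustration with NumPy; the 2-by-2 matrix is an arbitrary example, not from the glossary):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])              # eigenvalues 5 and 2, so diagonalizable

eigvals, S = np.linalg.eig(A)           # columns of S = eigenvectors of A
Lam = np.diag(eigvals)                  # eigenvalue matrix Lambda

A_rebuilt = S @ Lam @ np.linalg.inv(S)  # A = S Lambda S^{-1}
A_cubed = S @ np.diag(eigvals ** 3) @ np.linalg.inv(S)  # A^3 = S Lambda^3 S^{-1}
```

Powers of A become trivial once S and Λ are known, since only the diagonal entries are raised to the power k.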

Distributive Law
A(B + C) = AB + AC. Add then multiply, or multiply then add.

Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra eij in the i, j entry (i ≠ j). Then Eij A subtracts eij times row j of A from row i.

Independent vectors v₁, ..., vₖ.
No combination c₁v₁ + ... + cₖvₖ = zero vector unless all cᵢ = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
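One way to test this definition numerically (a sketch using NumPy's rank computation; the matrices are made up for illustration):

```python
import numpy as np

# Independent columns: rank equals the number of columns,
# so the only solution of Ax = 0 is x = 0.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
independent = np.linalg.matrix_rank(A) == A.shape[1]

# Make the columns dependent: append column1 + column2.
B = np.column_stack([A, A[:, 0] + A[:, 1]])
dependent = np.linalg.matrix_rank(B) < B.shape[1]
```

Appending a combination of existing columns never raises the rank, so B's three columns cannot be independent.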

Iterative method.
A sequence of steps intended to approach the desired solution.

Multiplication Ax
= x₁(column 1) + ... + xₙ(column n) = combination of columns.

Norm
‖A‖. The "ℓ² norm" of A is the maximum ratio ‖Ax‖/‖x‖ = σ_max. Then ‖Ax‖ ≤ ‖A‖‖x‖, ‖AB‖ ≤ ‖A‖‖B‖, and ‖A + B‖ ≤ ‖A‖ + ‖B‖. The Frobenius norm is ‖A‖²_F = Σ Σ a²_ij. The ℓ¹ and ℓ∞ norms are the largest column sum and row sum of |a_ij|.
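These four norms can be compared numerically (an illustrative sketch with NumPy; the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 2.0]])

two_norm = np.linalg.norm(A, 2)       # sigma_max, the l2 operator norm
fro_norm = np.linalg.norm(A, 'fro')   # sqrt of the sum of squares of entries
one_norm = np.linalg.norm(A, 1)       # largest column sum of |a_ij|
inf_norm = np.linalg.norm(A, np.inf)  # largest row sum of |a_ij|
```

Here the Frobenius norm is √(1 + 4 + 4) = 3, the column sums are 1 and 4, and the row sums are 3 and 2; the operator norm never exceeds the Frobenius norm.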

Nullspace matrix N.
The columns of N are the n − r special solutions to As = 0.

Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = +1 or −1) based on the number of row exchanges needed to reach I.
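A small check of these facts (a NumPy sketch; the 3-by-3 permutation is chosen arbitrarily):

```python
import numpy as np

order = [2, 0, 1]                 # a cyclic reordering of rows 0, 1, 2
P = np.eye(3)[order]              # rows of I in that order
A = np.arange(9.0).reshape(3, 3)

PA = P @ A                        # row i of PA is row order[i] of A
parity = round(np.linalg.det(P))  # +1 (even) or -1 (odd)
```

A 3-cycle takes two row exchanges to reach I, so this P is even and det P = +1; like every permutation matrix, P is orthogonal (P Pᵀ = I).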

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Pseudoinverse A⁺ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A⁺) = N(Aᵀ). A⁺A and AA⁺ are the projection matrices onto the row space and column space. Rank(A⁺) = rank(A).
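These properties can be verified numerically (a sketch using NumPy's pinv; the rank-1 matrix is just an example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # rank 1, so A has no ordinary inverse

A_plus = np.linalg.pinv(A)       # Moore-Penrose pseudoinverse

P_col = A @ A_plus               # projection onto the column space of A
P_row = A_plus @ A               # projection onto the row space of A
```

Both products are idempotent projections, and A A⁺ A = A even though A itself is singular.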

Rank r(A)
= number of pivots = dimension of column space = dimension of row space.

Singular Value Decomposition
(SVD) A = UΣVᵀ = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(Aᵀ), with Avᵢ = σᵢuᵢ and singular values σᵢ > 0. The last columns are orthonormal bases of the nullspaces of Aᵀ and A.
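A numerical illustration of the factorization (a NumPy sketch with an arbitrary 2-by-2 matrix):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, sigma, Vt = np.linalg.svd(A)   # A = U @ diag(sigma) @ Vt

A_rebuilt = U @ np.diag(sigma) @ Vt

# Each singular pair satisfies A v_i = sigma_i u_i:
v1, u1 = Vt[0], U[:, 0]
```

Note that NumPy returns Vᵀ (rows of Vt are the right singular vectors), while the left singular vectors are the columns of U.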

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.

Symmetric factorizations A = LDLᵀ and A = QΛQᵀ.
Signs in Λ = signs in D.

Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t - k).