 11.3.20E: Construct a table showing the result of each step when insertion so...
 11.3.1E: Suppose a computer takes 1 nanosecond (= 10⁻⁹ second) to execute ea...
 11.3.2E: Suppose an algorithm requires cn2 operations when performed with an...
 11.3.3E: Suppose an algorithm requires cn3 operations when performed with an...
 11.3.4E: Exercises 4–5 explore the fact that for relatively small values of ...
 11.3.5E: Exercises 4–5 explore the fact that for relatively small values of ...
 11.3.6E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.7E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.8E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.9E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.10E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.11E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.12E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.13E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.14E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.15E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.16E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.17E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.18E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.21E: Construct a table showing the result of each step when insertion so...
 11.3.22E: Construct a trace table showing the action of insertion sort on the...
 11.3.24E: How many comparisons between values of a[ j ] and x actually occur ...
 11.3.27E: Consider the recurrence relation that arose in Example 11.3.7: E1 =...
 11.3.28E: Exercises 28–35 refer to selection sort, which is another algorithm...
 11.3.29E: Exercises 28–35 refer to selection sort, which is another algorithm...
 11.3.30E: Exercises 28–35 refer to selection sort, which is another algorithm...
 11.3.35E: Exercises 28–35 refer to selection sort, which is another algorithm...
 11.3.39E: Exercises 36–39 refer to the following algorithm to compute the val...
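
Several of the exercises above (20E–24E) ask for a step-by-step trace table of insertion sort. A minimal Python sketch that records the array after each insertion pass (the function name and trace format here are illustrative, not the textbook's pseudocode):

```python
def insertion_sort_trace(a):
    """Insertion sort that records the array after each insertion step."""
    a = list(a)
    steps = [list(a)]          # row 0 of the trace: the initial array
    for k in range(1, len(a)):
        x = a[k]               # next value to insert
        j = k - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]    # shift larger entries one place right
            j -= 1
        a[j + 1] = x
        steps.append(list(a))  # one row of the trace table per pass
    return steps

for row in insertion_sort_trace([6, 2, 1, 8, 4]):
    print(row)
```

Each printed row corresponds to one line of the table the exercises request: the comparisons of a[j] with x in the inner loop are exactly the comparisons counted in 24E.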
Solutions for Chapter 11.3: Discrete Mathematics with Applications 4th Edition
ISBN: 9780495391326
Chapter 11.3 includes 28 full step-by-step solutions.

Cholesky factorization
A = CᵀC = (L√D)(L√D)ᵀ for positive definite A.
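
As a concrete check of the factorization, here is a minimal pure-Python sketch computing a lower-triangular C with A = C·Cᵀ (the entry above writes the same factorization with the upper-triangular factor, A = CᵀC; the function name is illustrative):

```python
import math

def cholesky(A):
    """Lower-triangular C with A = C·Cᵀ, for positive definite A."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(C[i][k] * C[j][k] for k in range(j))
            if i == j:
                # diagonal entry: square root exists because A is positive definite
                C[i][j] = math.sqrt(A[i][i] - s)
            else:
                C[i][j] = (A[i][j] - s) / C[j][j]
    return C

A = [[4.0, 2.0], [2.0, 3.0]]
C = cholesky(A)   # C = [[2, 0], [1, sqrt(2)]]
```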

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.

Cramer's Rule for Ax = b.
Bⱼ has b replacing column j of A; xⱼ = det(Bⱼ)/det(A).
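
For a 2×2 system the rule can be carried out directly; a small sketch (function names are illustrative):

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cramer2(A, b):
    """Solve Ax = b for 2x2 A via Cramer's rule: x_j = det(B_j) / det(A)."""
    d = det2(A)                               # must be nonzero
    B1 = [[b[0], A[0][1]], [b[1], A[1][1]]]   # b replaces column 1 of A
    B2 = [[A[0][0], b[0]], [A[1][0], b[1]]]   # b replaces column 2 of A
    return [det2(B1) / d, det2(B2) / d]

x = cramer2([[2, 1], [1, 3]], [3, 5])   # x = [0.8, 1.4]
```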

Cyclic shift S.
Permutation with S₂₁ = 1, S₃₂ = 1, …, finally S₁ₙ = 1. Its eigenvalues are the nth roots e^(2πik/n) of 1; eigenvectors are the columns of the Fourier matrix F.
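
The eigenvalue claim can be verified numerically; a sketch for n = 4 (using the convention S₂₁ = … = S₁ₙ = 1, under which the k-th Fourier column has eigenvalue e^(−2πik/n), still an nth root of 1):

```python
import cmath

n, k = 4, 1
# Cyclic shift: S[i][j] = 1 exactly when i = j + 1 (mod n)
S = [[1 if i == (j + 1) % n else 0 for j in range(n)] for i in range(n)]

# k-th column of the Fourier matrix: entries e^(2*pi*i*j*k/n)
f = [cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)]
lam = cmath.exp(-2j * cmath.pi * k / n)   # an nth root of 1

Sf = [sum(S[i][j] * f[j] for j in range(n)) for i in range(n)]
# S f = lam * f, so f is an eigenvector of S with eigenvalue lam
```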

Dimension of vector space
dim(V) = number of vectors in any basis for V.

Dot product = Inner product xᵀy = x₁y₁ + … + xₙyₙ.
Complex dot product is x̄ᵀy. Perpendicular vectors have xᵀy = 0. (AB)ᵢⱼ = (row i of A)·(column j of B).
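
Both identities are one-liners in code; a small sketch of the real case (the complex dot product would conjugate x first):

```python
def dot(x, y):
    """x·y = x1*y1 + ... + xn*yn."""
    return sum(xi * yi for xi, yi in zip(x, y))

def matmul(A, B):
    """(AB)[i][j] = (row i of A) · (column j of B)."""
    return [[dot(row, col) for col in zip(*B)] for row in A]

assert dot([1, 2], [2, -1]) == 0   # perpendicular vectors
assert matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```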

Elimination matrix = Elementary matrix Eᵢⱼ.
The identity matrix with an extra −ℓᵢⱼ in the i, j entry (i ≠ j). Then EᵢⱼA subtracts ℓᵢⱼ times row j of A from row i.
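
A sketch of building Eᵢⱼ and applying it (helper names are illustrative):

```python
def elim_matrix(n, i, j, l):
    """Identity matrix with -l placed in entry (i, j); E·A subtracts l times row j from row i."""
    E = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    E[i][j] = -l
    return E

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2.0, 1.0], [4.0, 5.0]]
E = elim_matrix(2, 1, 0, 4.0 / 2.0)   # multiplier l21 = 4/2 = 2
EA = matmul(E, A)                     # [[2, 1], [0, 3]]: entry (2, 1) eliminated
```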

Fundamental Theorem.
The nullspace N(A) and row space C(Aᵀ) are orthogonal complements in Rⁿ (perpendicular from Ax = 0, with dimensions n − r and r). Applied to Aᵀ, the column space C(A) is the orthogonal complement of N(Aᵀ) in Rᵐ.

Hankel matrix H.
Constant along each antidiagonal; hᵢⱼ depends on i + j.

|A⁻¹| = 1/|A| and |Aᵀ| = |A|.
The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n − 1, volume of box = |det(A)|.
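
The big formula translates directly into code; a pure-Python sketch summing over all n! permutations (exponential cost, for illustration only):

```python
from itertools import permutations

def det(A):
    """Big formula: sum over all n! permutations of sign(perm) * product of chosen entries."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for a in range(n):              # parity of perm: flip sign per inversion
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for i, j in enumerate(perm):    # one entry chosen from each row and column
            prod *= A[i][j]
        total += sign * prod
    return total

assert det([[1, 2], [3, 4]]) == -2      # the two 2x2 terms: +1*4 - 2*3
```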

Multiplier ℓᵢⱼ.
The pivot row j is multiplied by ℓᵢⱼ and subtracted from row i to eliminate the i, j entry: ℓᵢⱼ = (entry to eliminate)/(jth pivot).

Network.
A directed graph that has constants c₁, …, cₘ associated with the edges.

Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P² = P = Pᵀ, its eigenvalues are 1 or 0, and its eigenvectors are in S or S⊥. If the columns of A form a basis for S, then P = A(AᵀA)⁻¹Aᵀ.

Projection p = a(aᵀb/aᵀa) onto the line through a.
P = aaᵀ/aᵀa has rank 1.
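
A quick numeric sketch of the formula, checking that the error b − p is perpendicular to a:

```python
def project_onto_line(a, b):
    """Closest point to b on the line through a: p = a * (a·b / a·a)."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    c = dot(a, b) / dot(a, a)
    return [c * ai for ai in a]

a, b = [1.0, 1.0], [2.0, 0.0]
p = project_onto_line(a, b)            # [1.0, 1.0]
e = [bi - pi for bi, pi in zip(b, p)]  # error [1.0, -1.0], perpendicular to a
```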

Row picture of Ax = b.
Each equation gives a plane in Rn; the planes intersect at x.

Similar matrices A and B.
Every B = M⁻¹AM has the same eigenvalues as A.
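
For 2×2 matrices the eigenvalues are fixed by the trace and determinant, so checking both for A and B = M⁻¹AM confirms the claim; a sketch with illustrative numbers:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[4.0, 1.0], [2.0, 3.0]]
M = [[1.0, 2.0], [0.0, 1.0]]
d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[ M[1][1] / d, -M[0][1] / d],
        [-M[1][0] / d,  M[0][0] / d]]

B = matmul(matmul(Minv, A), M)   # similar to A

trace = lambda X: X[0][0] + X[1][1]
det = lambda X: X[0][0] * X[1][1] - X[0][1] * X[1][0]
# same trace and determinant => same characteristic polynomial => same eigenvalues
```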

Special solutions to As = O.
One free variable is sᵢ = 1, other free variables = 0.

Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
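
The Tr AB = Tr BA identity is easy to spot-check with illustrative matrices (AB and BA are generally different, yet their traces agree):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
assert trace(matmul(A, B)) == trace(matmul(B, A)) == 55
```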

Vector addition.
v + w = (v₁ + w₁, …, vₙ + wₙ) = diagonal of parallelogram.