 11.3.20E: Construct a table showing the result of each step when insertion so...
 11.3.1E: Suppose a computer takes 1 nanosecond (= 10^(-9) second) to execute ea...
 11.3.2E: Suppose an algorithm requires cn^2 operations when performed with an...
 11.3.3E: Suppose an algorithm requires cn^3 operations when performed with an...
 11.3.4E: Exercises 4–5 explore the fact that for relatively small values of ...
 11.3.5E: Exercises 4–5 explore the fact that for relatively small values of ...
 11.3.6E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.7E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.8E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.9E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.10E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.11E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.12E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.13E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.14E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.15E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.16E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.17E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.18E: For each of the algorithm segments in 6–19, assume that n is a posi...
 11.3.21E: Construct a table showing the result of each step when insertion so...
 11.3.22E: Construct a trace table showing the action of insertion sort on the...
 11.3.24E: How many comparisons between values of a[ j ] and x actually occur ...
 11.3.27E: Consider the recurrence relation that arose in Example 11.3.7: E1 =...
 11.3.28E: Exercises 28–35 refer to selection sort, which is another algorithm...
 11.3.29E: Exercises 28–35 refer to selection sort, which is another algorithm...
 11.3.30E: Exercises 28–35 refer to selection sort, which is another algorithm...
 11.3.35E: Exercises 28–35 refer to selection sort, which is another algorithm...
 11.3.39E: Exercises 36–39 refer to the following algorithm to compute the val...
Solutions for Chapter 11.3: Discrete Mathematics with Applications 4th Edition
Full solutions for Discrete Mathematics with Applications, 4th Edition
ISBN: 9780495391326

This textbook survival guide covers the textbook's chapters and their step-by-step solutions, and was created for Discrete Mathematics with Applications, edition 4 (ISBN 9780495391326). Chapter 11.3 includes 28 full step-by-step solutions; more than 56,870 students have viewed them.

Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
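
The definition can be sketched in pure Python (the function name and edge-list format are illustrative, not from the text):

```python
def adjacency_matrix(n, edges, undirected=False):
    """Return the n-by-n matrix A with A[i][j] = 1 when there is an
    edge from node i to node j, and 0 otherwise."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
        if undirected:
            A[j][i] = 1   # edges go both ways, so A equals its transpose
    return A

# Undirected triangle on nodes 0, 1, 2:
A = adjacency_matrix(3, [(0, 1), (1, 2), (0, 2)], undirected=True)
assert A == [list(row) for row in zip(*A)]   # A = A^T for an undirected graph
```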

Big formula for n by n determinants.
det(A) is a sum of n! terms. For each term, multiply one entry from each row and column of A: rows in order 1, ..., n and column order given by a permutation P. Each of the n! P's has a + or - sign.
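
A direct (if slow) transcription of the big formula, assuming a list-of-lists matrix; the sign of each permutation is found by counting inversions:

```python
from itertools import permutations

def perm_sign(p):
    """+1 for an even permutation, -1 for an odd one (count inversions)."""
    inv = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

def det_big_formula(A):
    """Sum of n! signed terms: one entry from each row (rows in order
    1..n) and each column (columns in the order given by p)."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = perm_sign(p)
        for row in range(n):
            term *= A[row][p[row]]
        total += term
    return total
```

With n! terms this is a definition rather than a practical algorithm; elimination computes the same number far faster.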

Companion matrix.
Put c_1, ..., c_n in row n and put n - 1 ones just above the main diagonal. Then det(A - λI) = ±(c_1 + c_2 λ + c_3 λ^2 + ... + c_n λ^(n-1) - λ^n).
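
A small check of the definition in pure Python (the particular polynomial and its roots are chosen for illustration):

```python
def companion(c):
    """Companion matrix: c_1, ..., c_n in row n, and n-1 ones just
    above the main diagonal."""
    n = len(c)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0
    A[n - 1] = [float(ci) for ci in c]
    return A

# For c = (-2, 3), det(A - lam*I) = -(-2 + 3*lam - lam^2) = lam^2 - 3*lam + 2,
# with roots 1 and 2.  Each root lam has eigenvector (1, lam):
A = companion([-2.0, 3.0])
for lam in (1.0, 2.0):
    v = [1.0, lam]
    Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
    assert Av == [lam * vi for vi in v]
```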

Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_ii.

Dimension of vector space
dim(V) = number of vectors in any basis for V.

Iterative method.
A sequence of steps intended to approach the desired solution.
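
One classic example is Jacobi iteration for Ax = b, sketched here in pure Python (convergence assumes a diagonally dominant A; the example system is made up):

```python
def jacobi(A, b, steps=60):
    """Each step solves equation i for x[i] using the previous step's
    values of the other unknowns; the iterates approach the solution."""
    n = len(A)
    x = [0.0] * n
    for _ in range(steps):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# 2x + y = 3, x + 2y = 3 has solution x = y = 1.
x = jacobi([[2.0, 1.0], [1.0, 2.0]], [3.0, 3.0])
```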

Jordan form J = M^(-1) A M.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k where N_k has 1's on the first superdiagonal. Each block has one eigenvalue λ_k and one eigenvector.

Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

Least squares solution x̂.
The vector x̂ that minimizes the error ‖e‖^2 solves A^T A x̂ = A^T b. Then e = b - A x̂ is orthogonal to all columns of A.
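
A sketch for the two-column case (pure Python; the data points are invented): form the normal equations, solve the resulting 2-by-2 system by Cramer's rule, and check that the error is orthogonal to both columns of A.

```python
def normal_equations_fit(A, b):
    """Solve A^T A x = A^T b for a matrix A with two independent columns."""
    m = len(A)
    G = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)] for i in range(2)]
    c = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    d = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    return [(c[0] * G[1][1] - c[1] * G[0][1]) / d,
            (G[0][0] * c[1] - G[1][0] * c[0]) / d]

# Fit a line y = x0 + x1*t through (0,1), (1,2), (2,4) in the least-squares sense.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 4.0]
xh = normal_equations_fit(A, b)
# The error e = b - A xh is orthogonal to both columns of A:
e = [b[k] - (A[k][0] * xh[0] + A[k][1] * xh[1]) for k in range(3)]
assert abs(sum(A[k][0] * e[k] for k in range(3))) < 1e-12
assert abs(sum(A[k][1] * e[k] for k in range(3))) < 1e-12
```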

Left inverse A+.
If A has full column rank n, then A+ = (A^T A)^(-1) A^T has A+ A = I_n.
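
For the two-column case the formula can be checked directly in pure Python (the example matrix is arbitrary):

```python
def left_inverse(A):
    """A+ = (A^T A)^(-1) A^T for a matrix A with two independent columns."""
    m = len(A)
    G = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)] for i in range(2)]
    d = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[G[1][1] / d, -G[0][1] / d], [-G[1][0] / d, G[0][0] / d]]
    return [[sum(Ginv[i][j] * A[k][j] for j in range(2)) for k in range(m)]
            for i in range(2)]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # full column rank 2
Aplus = left_inverse(A)
# A+ A = I_2:
I2 = [[sum(Aplus[i][k] * A[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(I2[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```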

Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If all m_ij > 0, the columns of M^k approach the steady state eigenvector s, with M s = s > 0.
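
The approach to the steady state can be watched by repeated multiplication (a sketch; the 2-by-2 matrix is invented):

```python
def markov_power(M, v, steps=100):
    """Repeatedly apply the Markov matrix M (columns sum to 1) to v;
    M^k v approaches the steady state eigenvector s with M s = s."""
    n = len(M)
    for _ in range(steps):
        v = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    return v

M = [[0.9, 0.2], [0.1, 0.8]]          # all entries > 0, column sums are 1
s = markov_power(M, [1.0, 0.0])
# Solving M s = s with s[0] + s[1] = 1 gives the steady state (2/3, 1/3).
```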

Normal matrix.
If N N^T = N^T N, then N has orthonormal (complex) eigenvectors.

Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers then have |l_ij| ≤ 1. See condition number.
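
A minimal sketch of elimination with partial pivoting in pure Python (the ill-conditioned example is illustrative): in each column the row with the largest pivot is swapped up before eliminating below it.

```python
def solve_partial_pivoting(A, b):
    """Gaussian elimination with partial pivoting, then back substitution.
    Choosing the largest pivot keeps every multiplier |l| <= 1."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))  # largest pivot
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, n):
            l = A[r][col] / A[col][col]        # |l| <= 1 by choice of pivot
            for c in range(col, n):
                A[r][c] -= l * A[col][c]
            b[r] -= l * b[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# A tiny first pivot (1e-12) would wreck the answer without the row swap.
x = solve_partial_pivoting([[1e-12, 1.0], [1.0, 1.0]], [1.0, 2.0])
```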

Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. P A puts the rows of A in the same order. P is even or odd (det P = 1 or -1) based on the number of row exchanges needed to reach I.
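
The row-reordering action can be demonstrated in a few lines (the order and the matrix A are arbitrary examples):

```python
def permutation_matrix(order):
    """Rows of the identity in the given order, so P A puts the rows
    of A in that same order."""
    n = len(order)
    return [[1 if j == order[i] else 0 for j in range(n)] for i in range(n)]

P = permutation_matrix([2, 0, 1])     # rows of I in order 2, 0, 1
A = [[10, 11], [20, 21], [30, 31]]
PA = [[sum(P[i][k] * A[k][j] for k in range(3)) for j in range(2)] for i in range(3)]
assert PA == [[30, 31], [10, 11], [20, 21]]   # rows of A in order 2, 0, 1
```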

Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal for randn.

Special solutions to As = O.
One free variable is s_i = 1; the other free variables = 0.

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^(-1) is also symmetric.

Vandermonde matrix V.
V c = b gives the coefficients of p(x) = c_0 + ... + c_(n-1) x^(n-1) with p(x_i) = b_i. V_ij = (x_i)^(j-1) and det V = product of (x_k - x_i) for k > i.
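
Both claims can be checked on a small example in pure Python (the sample points and polynomial are made up):

```python
def vandermonde(xs):
    """V[i][j] = xs[i]**j, so V c lists the values of the polynomial
    p(x) = c_0 + c_1 x + ... + c_(n-1) x^(n-1) at the points xs."""
    n = len(xs)
    return [[x ** j for j in range(n)] for x in xs]

def det_vandermonde(xs):
    """Product of (xs[k] - xs[i]) over all pairs k > i."""
    d = 1.0
    for k in range(len(xs)):
        for i in range(k):
            d *= xs[k] - xs[i]
    return d

# p(x) = 1 + x^2 (coefficients c = [1, 0, 1]) evaluated at 0, 1, 2:
V = vandermonde([0.0, 1.0, 2.0])
b = [sum(V[i][j] * c for j, c in enumerate([1.0, 0.0, 1.0])) for i in range(3)]
assert b == [1.0, 2.0, 5.0]
assert det_vandermonde([0.0, 1.0, 2.0]) == 2.0   # (1-0)*(2-0)*(2-1)
```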

Vector addition.
v + w = (v_1 + w_1, ..., v_n + w_n) = diagonal of the parallelogram.