11.5.2E: Prim's algorithm to find a minimum spanning tree for the given weig...
 11.5.1E: The roads represented by this graph are all unpaved. The lengths of...
11.5.4E: Prim's algorithm to find a minimum spanning tree for the given weig...
11.5.3E: Prim's algorithm to find a minimum spanning tree for the given weig...
 11.5.5E: Use Kruskal’s algorithm to design the communications network descri...
 11.5.6E: Use Kruskal’s algorithm to find a minimum spanning tree for the wei...
 11.5.7E: Use Kruskal’s algorithm to find a minimum spanning tree for the wei...
 11.5.8E: Use Kruskal’s algorithm to find a minimum spanning tree for the wei...
 11.5.9E: Find a connected weighted simple graph with the fewest edges possib...
 11.5.10E: A minimum spanning forest in a weighted graph is a spanning forest ...
 11.5.11E: Devise an algorithm similar to Prim’s algorithm for constructing a ...
 11.5.13E: Find a maximum spanning tree for the weighted graph in Exercise 2.
 11.5.12E: Devise an algorithm similar to Kruskal’s algorithm for constructing...
 11.5.14E: Find a maximum spanning tree for the weighted graph in Exercise 3.
 11.5.15E: Find a maximum spanning tree for the weighted graph in Exercise 4.
 11.5.20E: Suppose that the computer network connecting the cities in Figure 1...
 11.5.18E: Show that an edge with smallest weight in a connected weighted grap...
 11.5.19E: Show that there is a unique minimum spanning tree in a connected we...
 11.5.16E: Find the second least expensive communications network connecting t...
 11.5.17E: Devise an algorithm for finding the second shortest spanning tree i...
 11.5.22E: Describe an algorithm for finding a spanning tree with minimal weig...
11.5.23E: Express the algorithm devised in Exercise 22 in pseudocode. Sollin’s...
 11.5.24E: Show that the addition of edges at each stage of Sollin’s algorithm...
 11.5.21E: Find a spanning tree with minimal total weight containing the edges...
 11.5.25E: Sollin’s algorithm to produce a minimum spanning tree for the weigh...
 11.5.26E: Express Sollin’s algorithm in pseudocode.
 11.5.28E: Show that the first step of Sollin’s algorithm produces a forest co...
 11.5.27E: Prove that Sollin’s algorithm produces a minimum spanning tree in a...
 11.5.29E: Show that if there are r trees in the forest at some intermediate s...
 11.5.30E: Show that when given as input an undirected graph with n vertices, ...
 11.5.32E: Prove that Kruskal’s algorithm produces minimum spanning trees.
 11.5.33E: Show that if G is a weighted graph with distinct edge weights, then...
11.5.34E: Express the reverse-delete algorithm in pseudocode.
 11.5.31E: Show that Sollin’s algorithm requires at most log n iterations to p...
11.5.35E: Prove that the reverse-delete algorithm always produces a minimum s...
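The exercises above center on the minimum-spanning-tree algorithms of this section (Prim's, Kruskal's, and Sollin's). As a reference point, here is a minimal Kruskal sketch in Python; the edge-list representation and helper names are illustrative, not the textbook's pseudocode.

```python
# Kruskal's algorithm sketch: sort edges by weight and add each edge
# unless it would close a cycle, tracked with a simple union-find.
# Vertex labeling 0..n-1 and the tuple format are assumptions.

def kruskal(n, edges):
    """n vertices labeled 0..n-1; edges = [(weight, u, v), ...]."""
    parent = list(range(n))

    def find(x):                     # find root, compressing the path
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):    # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                 # edge joins two components: keep it
            parent[ru] = rv
            tree.append((w, u, v))
    return tree
```

A spanning tree of an n-vertex connected graph always has n − 1 edges, which is why the loop can simply scan all edges in weight order.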
Solutions for Chapter 11.5: Discrete Mathematics and Its Applications 7th Edition
ISBN: 9780073383095
This textbook survival guide was created for the textbook: Discrete Mathematics and Its Applications, edition: 7. Chapter 11.5 includes 35 full step-by-step solutions. Discrete Mathematics and Its Applications is associated with the ISBN: 9780073383095. Since 35 problems in chapter 11.5 have been answered, more than 224527 students have viewed full step-by-step solutions from this chapter.

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.

Covariance matrix Σ.
When random variables x_i have mean = average value = 0, their covariances σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x − x̄)(x − x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.

Cross product u × v in R^3:
Vector perpendicular to u and v, length ‖u‖‖v‖|sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
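The "determinant" expansion above translates directly into three component formulas; a minimal sketch with plain tuples (the `cross` name is an assumption):

```python
# Cross product in R^3 via cofactor expansion of the symbolic
# determinant [i j k; u1 u2 u3; v1 v2 v3].

def cross(u, v):
    u1, u2, u3 = u
    v1, v2, v3 = v
    return (u2 * v3 - u3 * v2,   # i component
            u3 * v1 - u1 * v3,   # j component
            u1 * v2 - u2 * v1)   # k component
```

For example, cross((1, 0, 0), (0, 1, 0)) gives (0, 0, 1), the unit vector perpendicular to the first two standard basis vectors.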

Exponential e^(At) = I + At + (At)^2/2! + ...
has derivative Ae^(At); e^(At) u(0) solves u' = Au.
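The series above can be summed directly for a small matrix; a sketch in plain Python, where the 30-term cutoff is an assumption that suffices for modestly sized At:

```python
# Truncated series e^(At) = I + At + (At)^2/2! + ... for a small
# square matrix stored as nested lists.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, t, terms=30):
    n = len(A)
    At = [[a * t for a in row] for row in A]
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_mul(term, At)                        # (At)^k ...
        term = [[x / k for x in row] for row in term]   # ... / k!
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result
```

With A = [[0, 1], [−1, 0]] and t = π, the result approximates the rotation matrix through π, i.e. −I.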

Fast Fourier Transform (FFT).
A factorization of the Fourier matrix F_n into ℓ = log2 n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^-1 c can be computed with nℓ/2 multiplications. Revolutionary.
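The log2 n levels with n/2 multiplications each correspond to the recursion in the classic radix-2 algorithm; a minimal sketch (power-of-two length assumed):

```python
import cmath

# Radix-2 FFT sketch: split into even/odd halves, recurse, then
# combine with n/2 twiddle-factor multiplications per level.

def fft(x):
    n = len(x)                      # n must be a power of 2
    if n == 1:
        return x[:]
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):         # n/2 multiplications at this level
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

The constant input [1, 1, 1, 1] transforms to [4, 0, 0, 0]: all the energy sits in the zero-frequency bin.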

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
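The process above can be sketched in a few lines, with matrices stored as lists of columns; the `gram_schmidt` name and representation are assumptions:

```python
import math

# Gram-Schmidt sketch producing Q (orthonormal columns) and upper
# triangular R with A = QR. Columns are plain Python lists; the
# columns of A are assumed independent.

def gram_schmidt(cols):
    Q, R = [], [[0.0] * len(cols) for _ in cols]
    for j, a in enumerate(cols):
        v = a[:]
        for i, q in enumerate(Q):            # subtract projections on q_1..q_{j-1}
            R[i][j] = sum(qk * ak for qk, ak in zip(q, a))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]
        R[j][j] = math.sqrt(sum(vk * vk for vk in v))
        Q.append([vk / R[j][j] for vk in v]) # normalize, so diag(R) > 0
    return Q, R
```

Because column j of A only involves q_1, ..., q_j, the coefficients R[i][j] with i > j stay zero: R comes out upper triangular.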

Independent vectors v1, ..., vk.
No combination c1 v1 + ... + ck vk = zero vector unless all ci = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.

|A^-1| = 1/|A| and |A^T| = |A|.
The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n − 1, volume of box = |det(A)|.

Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady-state eigenvector s with Ms = s > 0.
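The convergence of M^k described above can be checked numerically; the 2×2 chain below is an illustrative example, not taken from the text:

```python
# Repeatedly applying a positive Markov matrix drives any starting
# distribution toward the steady-state eigenvector (Ms = s).

def apply(M, s):
    """One Markov step: the new distribution M s."""
    n = len(s)
    return [sum(M[i][j] * s[j] for j in range(n)) for i in range(n)]

M = [[0.9, 0.2],
     [0.1, 0.8]]          # positive entries, each column sums to 1
s = [1.0, 0.0]            # start entirely in state 0
for _ in range(200):      # iterate toward the steady state
    s = apply(M, s)
```

For this M the steady state solves 0.2·s2 = 0.1·s1 with s1 + s2 = 1, giving s = (2/3, 1/3); the iteration converges at the rate of the second eigenvalue, 0.7.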

Multiplication Ax
= x1 (column 1) + ... + xn (column n) = combination of columns.
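The column picture above translates directly into code; a minimal sketch that accumulates one scaled column at a time (the `mat_vec` name is an assumption):

```python
# Ax computed as x1*(column 1) + ... + xn*(column n).

def mat_vec(A, x):
    rows = len(A)
    result = [0.0] * rows
    for j, xj in enumerate(x):        # one column of A at a time
        for i in range(rows):
            result[i] += xj * A[i][j]  # add xj * (column j)
    return result
```

With A = [[1, 2], [3, 4]] and x = (1, 1), the result is column 1 plus column 2, namely (3, 7).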

Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).

Orthonormal vectors q1, ..., qn.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^-1 and q1, ..., qn is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.

Pascal matrix
P_S = pascal(n) = the symmetric matrix with binomial entries C(i + j − 2, i − 1). P_S = P_L P_U all contain Pascal's triangle with det = 1 (see Pascal in the index).

Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.

Spanning set.
Combinations of v1, ..., vm fill the space. The columns of A span C(A)!

Spectrum of A = the set of eigenvalues {λ1, ..., λn}.
Spectral radius = max of |λ_i|.

Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.

Triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖.
For matrix norms ‖A + B‖ ≤ ‖A‖ + ‖B‖.

Vector v in R^n.
Sequence of n real numbers v = (v1, ..., vn) = point in R^n.