 10.7.1E: Find all possible spanning trees for each of the graphs in 1 and 2.
 10.7.2E: Find all possible spanning trees for each of the graphs in 1 and 2.
 10.7.3E: Find a spanning tree for each of the graphs in 3 and 4.
 10.7.4E: Find a spanning tree for each of the graphs in 3 and 4.
 10.7.5E: Use Kruskal’s algorithm to find a minimum spanning tree for each of...
 10.7.6E: Use Kruskal’s algorithm to find a minimum spanning tree for each of...
 10.7.7E: Use Prim’s algorithm starting with vertex a or v0 to find a minimum...
 10.7.8E: Use Prim’s algorithm starting with vertex a or v0 to find a minimum...
 10.7.9E: For each of the graphs in 9 and 10, find all minimum spanning trees...
 10.7.10E: For each of the graphs in 9 and 10, find all minimum spanning trees...
 10.7.11E: A pipeline is to be built that will link six cities. The cost (in h...
 10.7.12E: Use Dijkstra’s algorithm for the airline route system of Figure 10....
 10.7.13E: Use Dijkstra’s algorithm to find the shortest path from a to z for ...
 10.7.17E: Prove part (2) of Proposition 10.7.1: Any two spanning trees for a ...
 10.7.18E: Given any two distinct vertices of a tree, there exists a unique pa...
 10.7.19E: Prove that if G is a graph with spanning tree T and e is an edge of...
 10.7.20E: Suppose G is a connected graph and T is a circuit-free subgraph of ...
 10.7.21E: a. Suppose T1 and T2 are two different spanning trees for a graph G...
 10.7.22E: Prove that an edge e is contained in every spanning tree for a conn...
 10.7.23E: Consider the spanning trees T1 and T2 in the proof of Theorem 10.7....
 10.7.24E: Suppose that T is a minimum spanning tree for a connected, weighted...
 10.7.25E: Prove that if G is a connected, weighted graph and e is an edge of ...
 10.7.26E: If G is a connected, weighted graph and no two edges of G have the ...
 10.7.27E: Prove that if G is a connected, weighted graph and e is an edge of ...
 10.7.28E: Suppose a disconnected graph is input to Kruskal’s algorithm. What ...
 10.7.29E: Suppose a disconnected graph is input to Prim’s algorithm. What wil...
 10.7.30E: Prove that if a connected, weighted graph G is input to Algorithm 1...
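Several of the exercises above apply Kruskal's algorithm. As a point of reference, here is a minimal Python sketch of the algorithm using a union-find structure; the example graph and its weights are made up for illustration, not taken from the exercises.

```python
# Minimal sketch of Kruskal's algorithm with a union-find structure.
# The graph below (4 vertices, 5 weighted edges) is a hypothetical example.

def kruskal(n, edges):
    """n vertices labeled 0..n-1; edges is a list of (weight, u, v)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):           # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components: no circuit
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

mst = kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)])
total = sum(w for _, _, w in mst)           # total weight of the spanning tree
```

A spanning tree on n vertices always has n − 1 edges, which the sketch reflects: the loop adds an edge only when it joins two different components.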
Solutions for Chapter 10.7: Discrete Mathematics with Applications 4th Edition
Full solutions for Discrete Mathematics with Applications, 4th Edition
ISBN: 9780495391326
Chapter 10.7 includes 27 full step-by-step solutions, and more than 52,796 students have viewed solutions from this chapter.

Column space C(A) =
space of all combinations of the columns of A.

Companion matrix.
Put c1, ..., cn in row n and put n - 1 ones just above the main diagonal. Then det(A - λI) = ±(c1 + c2λ + c3λ^2 + ... + cnλ^(n-1) - λ^n).
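As a quick check of this definition, the sketch below builds a 3×3 companion matrix in NumPy and compares its characteristic polynomial with the stated formula; the coefficients c1, c2, c3 are hypothetical, chosen only for illustration.

```python
import numpy as np

def companion(c):
    """Companion matrix: c1..cn in the last row, ones just above the diagonal."""
    n = len(c)
    A = np.zeros((n, n))
    A[np.arange(n - 1), np.arange(1, n)] = 1.0   # superdiagonal ones
    A[n - 1, :] = c                              # coefficients in row n
    return A

c = [2.0, -5.0, 4.0]           # hypothetical c1, c2, c3
A = companion(c)
# Characteristic polynomial of A, highest power first (monic):
p = np.poly(A)                 # should match λ^3 - 4λ^2 + 5λ - 2
```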

Complete solution x = xp + xn to Ax = b.
(Particular solution xp) + (xn in the nullspace).

Condition number
cond(A) = c(A) = ||A|| ||A^-1|| = σmax/σmin. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to change in the input.
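A small NumPy illustration of the definition, using a nearly singular 2×2 matrix (a made-up example): σmax/σmin is computed from the singular values and compared with numpy.linalg.cond, which uses the same ratio for the 2-norm.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])          # nearly singular, hypothetical example
s = np.linalg.svd(A, compute_uv=False) # singular values, largest first
cond = s[0] / s[-1]                    # sigma_max / sigma_min
matches = np.isclose(cond, np.linalg.cond(A))
```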

Cross product u × v in R^3:
Vector perpendicular to u and v, length ||u|| ||v|| |sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
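The perpendicularity and area claims can be checked numerically; the vectors u and v below are arbitrary examples.

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])      # hypothetical vectors
v = np.array([0.0, 1.0, 3.0])
w = np.cross(u, v)

# w is perpendicular to both u and v:
perp = abs(w @ u) < 1e-12 and abs(w @ v) < 1e-12

# its length equals the parallelogram area  ||u|| ||v|| |sin θ|:
cos_t = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
area = np.linalg.norm(u) * np.linalg.norm(v) * np.sqrt(1 - cos_t**2)
```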

Determinant |A| = det(A).
Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A||B|, |A^T| = |A|, and |A^-1| = 1/|A|.

Dimension of vector space
dim(V) = number of vectors in any basis for V.

Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
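A minimal sketch of elimination producing A = LU, assuming no row exchanges are needed for this particular (hypothetical) matrix:

```python
import numpy as np

def lu_no_exchange(A):
    """Elimination A = LU; assumes no row exchanges are needed."""
    n = len(A)
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # multiplier l_ik
            U[i] -= L[i, k] * U[k]        # subtract l_ik times pivot row k
    return L, U

A = np.array([[2.0, 1.0, 1.0],            # made-up example matrix
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_exchange(A)
```

The multipliers end up below the diagonal of L, and U carries the pivots on its diagonal.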

Iterative method.
A sequence of steps intended to approach the desired solution.

Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).

Linear combination cv + dw or Σ cj vj.
Vector addition and scalar multiplication.

Normal equation A^T A x = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b - Ax) = 0.
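A small least-squares example, fitting a line through three hypothetical data points via the normal equation, and checking that the residual is orthogonal to the columns of A:

```python
import numpy as np

# Fit a line b ≈ c + d*t through three made-up points:
t = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 4.0])
A = np.column_stack([np.ones_like(t), t])    # full column rank

x_hat = np.linalg.solve(A.T @ A, A.T @ b)    # normal equation A^T A x = A^T b

# The residual b - A x_hat is perpendicular to every column of A:
residual_ok = np.allclose(A.T @ (b - A @ x_hat), 0.0)
```

The same x_hat comes out of numpy.linalg.lstsq, which handles rank-deficient A as well.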

Nullspace N(A)
= All solutions to Ax = 0. Dimension n - r = (# columns) - rank.

Orthogonal subspaces.
Every v in V is orthogonal to every w in W.

Plane (or hyperplane) in Rn.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S, then P = A(A^T A)^-1 A^T.
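The stated properties of P can be verified numerically; the basis matrix A below is a made-up example whose two columns span a plane S in R^3.

```python
import numpy as np

A = np.array([[1.0, 0.0],      # columns form a (hypothetical) basis for S
              [1.0, 1.0],
              [0.0, 1.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

idempotent = np.allclose(P @ P, P)           # P^2 = P
symmetric = np.allclose(P, P.T)              # P = P^T
eigs = np.linalg.eigvalsh(P)                 # ascending: should be 0, 1, 1

b = np.array([1.0, 0.0, 0.0])
e = b - P @ b                                # error vector
perp = np.allclose(A.T @ e, 0.0)             # e is perpendicular to S
```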

Similar matrices A and B.
Every B = M^-1 AM has the same eigenvalues as A.
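A quick numerical check, with arbitrary choices of A and invertible M:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])     # hypothetical A with eigenvalues 2 and 3
M = np.array([[1.0, 1.0],
              [1.0, 2.0]])     # any invertible M
B = np.linalg.inv(M) @ A @ M   # B is similar to A

same = np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.linalg.eigvals(B)))
```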

Spanning set.
Combinations of v1, ..., vm fill the space. The columns of A span C(A)!

Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c0 + ... + c(n-1) x^(n-1) with p(xi) = bi. Vij = (xi)^(j-1) and det V = product of (xk - xi) for k > i.
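A small example using numpy.vander (with increasing=True, so that Vij = xi^j in 0-based indexing, matching the definition); the sample points and values are hypothetical.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])           # hypothetical sample points
b = np.array([1.0, 3.0, 9.0])           # values p(x_i) = b_i
V = np.vander(x, increasing=True)       # V[i, j] = x_i ** j
c = np.linalg.solve(V, b)               # coefficients c0, c1, c2 of p

# det V = product of (x_k - x_i) over k > i:
detV = np.prod([x[k] - x[i] for i in range(3) for k in range(i + 1, 3)])
ok_det = np.isclose(np.linalg.det(V), detV)
```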

Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.