- 10.5.1: Construct the vertex matrix for each of the directed graphs illustr...
- 10.5.2: Draw a diagram of the directed graph corresponding to each of the f...
- 10.5.3: Let M be the following vertex matrix of a directed graph: 0111 1000...
- 10.5.4: (a) Compute the matrix product MTM for the vertex matrix M in Examp...
- 10.5.5: By inspection, locate all cliques in each of the directed graphs il...
- 10.5.6: For each of the following vertex matrices, use Theorem 10.5.2 to fi...
- 10.5.7: For the dominance-directed graph illustrated in Figure Ex-7 constru...
- 10.5.8: Five baseball teams play each other one time with the following res...
- 10.5.T1: A graph having n vertices such that every vertex is connected to ev...
- 10.5.T2: Consider a round-robin tournament among n players (labeled a1, a2, ...
Solutions for Chapter 10.5: Graph Theory
Full solutions for Elementary Linear Algebra, Binder Ready Version: Applications Version | 11th Edition
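The exercises above all revolve around the vertex matrix of a directed graph. A minimal NumPy sketch of the common technique, using a hypothetical 4-vertex digraph (not one of the book's figures): powers of the vertex matrix count multi-step connections, which is what the clique and dominance-ranking problems build on.

```python
import numpy as np

# Hypothetical digraph on vertices 1..4 with edges 1->2, 1->3, 2->3, 3->4, 4->1.
# M[i, j] = 1 when there is an edge from vertex i+1 to vertex j+1 (0-based indexing).
M = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])

M2 = np.linalg.matrix_power(M, 2)   # entry (i, j) counts the 2-step connections i+1 -> j+1
print(M2)
print(M + M2)                       # connections made in 1 or 2 steps
print((M + M2).sum(axis=1))         # row sums rank vertices/teams in a dominance-directed graph
```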
Affine transformation Tv = Av + v_0.
= linear transformation plus shift.
Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).
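A small SymPy illustration of this split, with a hypothetical singular system chosen so that Ax = b has infinitely many solutions:

```python
import sympy as sp

# Hypothetical system: the second equation is twice the first, so A is singular.
A = sp.Matrix([[1, 2],
               [2, 4]])
b = sp.Matrix([3, 6])

x_p = sp.Matrix([3, 0])            # one particular solution: A*x_p = b
x_n = A.nullspace()[0]             # basis vector of the nullspace: A*x_n = 0
print(A * (x_p + 5 * x_n) - b)     # zero vector: x_p plus any nullspace vector still solves Ax = b
```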
Cyclic shift S.
Permutation with s_21 = 1, s_32 = 1, ..., finally s_1n = 1. Its eigenvalues are the nth roots e^{2πik/n} of 1; eigenvectors are columns of the Fourier matrix F.
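A quick NumPy check of this entry (small n chosen arbitrarily): the eigenvalues are the nth roots of unity, and Fourier-matrix columns are eigenvectors; the sign of the exponent in the eigenvalue depends on the shift direction.

```python
import numpy as np

n = 6
S = np.roll(np.eye(n), 1, axis=0)        # s_21 = s_32 = ... = s_1n = 1

roots = np.exp(2j * np.pi * np.arange(n) / n)          # the nth roots of unity
print(np.sort_complex(np.linalg.eigvals(S)))
print(np.sort_complex(roots))                          # same set, up to ordering and rounding

F = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)   # Fourier matrix
k = 2
print(np.allclose(S @ F[:, k], np.conj(roots[k]) * F[:, k]))        # column k is an eigenvector
```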
Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.
Ellipse (or ellipsoid) x^T A x = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^{-1} y||^2 = y^T (A A^T)^{-1} y = 1 displayed by eigshow; axis lengths σ_i.)
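A short NumPy check with a hypothetical 2×2 positive definite A: the eigenvectors give the axis directions and 1/√λ gives the semi-axis lengths.

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])              # hypothetical positive definite matrix

lam, Q = np.linalg.eigh(A)              # eigenvalues 1 and 9, orthonormal eigenvectors
print(1.0 / np.sqrt(lam))               # semi-axis lengths 1/sqrt(lambda): 1.0 and 1/3

x = Q[:, 0] / np.sqrt(lam[0])           # point on the ellipse along the first axis
print(x @ A @ x)                        # 1.0, so x satisfies x^T A x = 1
```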
Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.
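For a concrete picture, SciPy can build one directly (the first column and last row below are arbitrary illustrative values):

```python
from scipy.linalg import hankel

H = hankel([1, 2, 3, 4], [4, 5, 6, 7])   # first column, last row (arbitrary values)
print(H)                                 # each antidiagonal is constant: H[i, j] depends only on i + j
```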
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = ∫_0^1 x^{i−1} x^{j−1} dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
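SciPy's hilbert(n) builds this matrix, and a quick check shows how ill-conditioned even a small one is (n = 6 chosen arbitrarily):

```python
import numpy as np
from scipy.linalg import hilbert

H = hilbert(6)                          # H[i, j] = 1/(i + j + 1) with 0-based indices
print(H[1, 2])                          # 0.25 = 1/(2 + 3 - 1) in the 1-based formula
print(np.linalg.eigvalsh(H).min())      # smallest eigenvalue: positive but tiny
print(np.linalg.cond(H))                # condition number around 1.5e7: ill-conditioned
```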
Independent vectors v_1, ..., v_k.
No combination c_1 v_1 + ... + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
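A one-line NumPy version of this test, with hypothetical columns:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])                  # hypothetical vectors v_1, v_2, v_3 as columns
print(np.linalg.matrix_rank(A) == A.shape[1])    # True: Ax = 0 only for x = 0, so the columns are independent
```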
Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^{j−1} b. Numerical methods approximate A^{-1} b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
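A sketch of building such a basis, assuming a random A and b purely for illustration; each new direction costs only one matrix-vector product with A.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))        # hypothetical matrix
b = rng.standard_normal(8)             # hypothetical right-hand side

j = 4
vecs, v = [b], b
for _ in range(j - 1):
    v = A @ v                          # one multiplication by A per step
    vecs.append(v)
K = np.column_stack(vecs)              # columns span K_j(A, b) = span{b, Ab, ..., A^{j-1} b}
Q, _ = np.linalg.qr(K)                 # a good (orthonormal) basis for K_j
print(Q.shape)                         # (8, 4)
```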
Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^{-1} and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
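A NumPy check, using an orthonormal basis produced by QR of a random (hypothetical) square matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # square, so Q^T = Q^{-1}

print(np.allclose(Q.T @ Q, np.eye(4)))             # True: q_i . q_j = 1 if i = j, else 0

v = rng.standard_normal(4)
recon = sum((v @ Q[:, j]) * Q[:, j] for j in range(4))
print(np.allclose(recon, v))                       # True: v = sum (v^T q_j) q_j
```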
Outer product uv^T.
= column times row = rank one matrix.
Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.
Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
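Equivalent checks in NumPy for a small hypothetical symmetric matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                 # hypothetical symmetric matrix

print(np.all(np.linalg.eigvalsh(A) > 0))   # True: positive eigenvalues
np.linalg.cholesky(A)                      # succeeds only when A is positive definite
x = np.array([2.0, -1.0])
print(x @ A @ x > 0)                       # True: x^T A x > 0 for this (and every) x != 0
```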
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S, error e = b − Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S^⊥. If columns of A = basis for S then P = A (A^T A)^{-1} A^T.
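A small NumPy sketch of the formula P = A (A^T A)^{-1} A^T, with a hypothetical basis for S and a hypothetical b:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                          # columns = hypothetical basis for S
P = A @ np.linalg.inv(A.T @ A) @ A.T                # P = A (A^T A)^{-1} A^T

b = np.array([6.0, 0.0, 0.0])                       # hypothetical vector to project
p = P @ b                                           # closest point to b in S
e = b - p                                           # error vector
print(np.allclose(A.T @ e, 0))                      # True: e is perpendicular to S
print(np.allclose(P @ P, P), np.allclose(P, P.T))   # True True: P^2 = P = P^T
```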
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
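SymPy computes this form directly; the matrix below is a hypothetical example with rank 2:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 2],
               [2, 4, 6, 8],
               [3, 6, 8, 10]])      # hypothetical 3x4 matrix
R, pivot_cols = A.rref()
print(R)                            # pivots are 1, with zeros above and below
print(pivot_cols)                   # (0, 2): columns containing the pivots
```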
Saddle point of f(x_1, ..., x_n).
A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂x_i ∂x_j = Hessian matrix) is indefinite.
Similar matrices A and B.
Every B = M^{-1} A M has the same eigenvalues as A.
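A quick numerical confirmation with hypothetical A and M:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # hypothetical matrix with eigenvalues 2 and 3
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # hypothetical invertible M
B = np.linalg.inv(M) @ A @ M          # similar to A

print(np.sort(np.linalg.eigvals(A)))  # [2. 3.]
print(np.sort(np.linalg.eigvals(B)))  # [2. 3.] -- same eigenvalues
```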
Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.
Vandermonde matrix V.
Vc = b gives coefficients of p(x) = c_0 + ... + c_{n−1} x^{n−1} with p(x_i) = b_i. V_ij = (x_i)^{j−1} and det V = product of (x_k − x_i) for k > i.
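A NumPy sketch with hypothetical sample points: solve Vc = b for the interpolating polynomial and check the determinant formula.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])        # hypothetical interpolation points x_i
b = np.array([1.0, 2.0, 5.0, 10.0])       # values of p(x) = 1 + x^2 at those points

V = np.vander(x, increasing=True)         # rows [1, x_i, x_i^2, x_i^3]
c = np.linalg.solve(V, b)
print(c)                                  # approximately [1, 0, 1, 0]: coefficients of 1 + x^2

det_formula = np.prod([x[k] - x[i] for k in range(4) for i in range(k)])
print(det_formula, np.linalg.det(V))      # both 12: det V = product of (x_k - x_i), k > i
```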
Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t − k).
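A minimal sketch, assuming the Haar mother wavelet for w_00 (the text's w_00 may differ); it simply applies the stretch-and-shift formula.

```python
import numpy as np

def w00(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere (a standard choice)."""
    return np.where((0 <= t) & (t < 0.5), 1.0,
                    np.where((0.5 <= t) & (t < 1.0), -1.0, 0.0))

def w(j, k, t):
    return w00(2**j * t - k)        # stretch the time axis by 2^j and shift by k

t = np.linspace(0, 1, 9)
print(w(1, 1, t))                   # nonzero only on [1/2, 1): the compressed, shifted copy
```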