 10.3.1: Find real numbers a, b, and c such that the following are true. a. ...
 10.3.2: Find the adjacency matrices for the following directed graphs. a. v...
 10.3.3: Find directed graphs that have the following adjacency matrices:
 10.3.4: Find adjacency matrices for the following (undirected) graphs. v1 v...
 10.3.5: Find graphs that have the following adjacency matrices
 10.3.6: The following are adjacency matrices for graphs. In each case deter...
 10.3.7: Suppose that for all positive integers i, all the entries in the it...
 10.3.8: Find each of the following products.
 10.3.9: Find each of the following products.
 10.3.10: Let A = [[1, 1, 1], [0, 2, 1]], B = [[2, 0], [1, 3]], and C = [[0, 2, 3], [1, 1, 0]]. For...
 10.3.11: Give an example different from that in the text to show that matrix...
 10.3.12: Let O denote the matrix [[0, 0], [0, 0]]. Find 2 × 2 matrices A and B such...
 10.3.13: Let O denote the matrix [[0, 0], [0, 0]]. Find 2 × 2 matrices A and B such...
 10.3.14: In 14–18 assume the entries of all matrices are real numbers.
 10.3.15: In 14–18 assume the entries of all matrices are real numbers.
 10.3.16: In 14–18 assume the entries of all matrices are real numbers.
 10.3.17: In 14–18 assume the entries of all matrices are real numbers.
 10.3.18: In 14–18 assume the entries of all matrices are real numbers.
 10.3.19: a. Let A = [[1, 1, 2], [1, 0, 1], [2, 1, 0]]. Find A^2 and A^3. b. Let G be the graph wit...
 10.3.20: The following is an adjacency matrix for a graph: v1 v2 v3 v4 v1 01...
 10.3.21: Let A be the adjacency matrix for K3, the complete graph on three ve...
 10.3.22: a. Draw a graph that has [[0, 0, 0, 1, 2], [0, 0, 0, 1, 1], [0, 0, 0, 2, 1], [1, 1, 2, 0, 0], [2, 1, 1, 0, 0]] as its adjac...
 10.3.23: a. Let G be a graph with n vertices, and let v and w be distinct ve...
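Several of the exercises above (10.3.19 through 10.3.23) rest on one fact: the (i, j) entry of A^k counts the walks of length k from vertex i to vertex j. A minimal pure-Python sketch, using K3 (the complete graph on three vertices, as in 10.3.21) as the example:

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Adjacency matrix of K3: every pair of distinct vertices is joined.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

A2 = mat_mul(A, A)   # (i, j) entry counts walks of length 2 from i to j
A3 = mat_mul(A2, A)  # (i, j) entry counts walks of length 3 from i to j
```

For K3 this gives A2[i][i] = 2 (go to either neighbor and come back) and A3[i][i] = 2 (the two orientations of the triangle).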
Solutions for Chapter 10.3: Matrix Representations of Graphs
Full solutions for Discrete Mathematics with Applications  4th Edition
ISBN: 9780495391326
This survival guide for Discrete Mathematics with Applications, 4th Edition (ISBN 9780495391326) was written by Sieva Kozinsky. Chapter 10.3: Matrix Representations of Graphs includes 23 full step-by-step solutions.

Condition number
cond(A) = c(A) = ||A|| ||A^-1|| = sigma_max / sigma_min. In Ax = b, the relative change ||dx|| / ||x|| is less than cond(A) times the relative change ||db|| / ||b||. Condition numbers measure the sensitivity of the output to change in the input.
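For a symmetric positive definite matrix the singular values equal the eigenvalues, so cond(A) = lambda_max / lambda_min. A small sketch for the 2 × 2 case, using the trace and determinant to get the eigenvalues (the function name is illustrative, not from the text):

```python
import math

def cond_sym_2x2(A):
    """Condition number of a symmetric positive definite 2x2 matrix,
    computed as lambda_max / lambda_min via trace and determinant."""
    t = A[0][0] + A[1][1]                       # trace = lambda1 + lambda2
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # det = lambda1 * lambda2
    disc = math.sqrt(t * t - 4 * d)
    return ((t + disc) / 2) / ((t - disc) / 2)

# [[2, 1], [1, 2]] has eigenvalues 3 and 1, so cond = 3
```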

Covariance matrix Sigma.
When random variables x_i have mean = average value = 0, their covariances Sigma_ij are the averages of x_i x_j. With means x-bar_i, the matrix Sigma = mean of (x − x-bar)(x − x-bar)^T is positive (semi)definite; Sigma is diagonal if the x_i are independent.
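The definition above translates directly into code: subtract each variable's mean, then average the products of deviations. A pure-Python sketch (this uses the population average, dividing by n, to match "mean of (x − x-bar)(x − x-bar)^T"):

```python
def covariance_matrix(samples):
    """samples: list of observation vectors (rows = observations).
    Returns the population covariance matrix (divide by n)."""
    n, k = len(samples), len(samples[0])
    means = [sum(row[j] for row in samples) / n for j in range(k)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j])
                 for row in samples) / n
             for j in range(k)] for i in range(k)]

# Second variable is exactly 2 * the first, so cov(0,1) = 2 * var(0)
data = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
C = covariance_matrix(data)
```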

Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det(B_j) / det(A).
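A minimal sketch of Cramer's Rule for a 2 × 2 system (the helper name is illustrative):

```python
def cramer_2x2(A, b):
    """Solve the 2x2 system Ax = b by Cramer's Rule:
    x_j = det(B_j) / det(A), where B_j is A with column j replaced by b."""
    det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
    d = det(A)
    B1 = [[b[0], A[0][1]], [b[1], A[1][1]]]  # b replaces column 1
    B2 = [[A[0][0], b[0]], [A[1][0], b[1]]]  # b replaces column 2
    return det(B1) / d, det(B2) / d

# 2x + y = 5 and x + 3y = 5 have the solution x = 2, y = 1
x, y = cramer_2x2([[2.0, 1.0], [1.0, 3.0]], [5.0, 5.0])
```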

Full column rank r = n.
Independent columns, N(A) = {0}, no free variables.

Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of R^m. Full rank means full column rank or full row rank.

Gram–Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
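A sketch of classical Gram–Schmidt on a list of column vectors: subtract from each new column its components along the earlier q's, then normalize (the function name is illustrative):

```python
import math

def gram_schmidt(cols):
    """Classical Gram-Schmidt: return orthonormal vectors spanning
    the same space as the given independent columns."""
    qs = []
    for a in cols:
        v = a[:]
        for q in qs:
            coef = sum(qi * ai for qi, ai in zip(q, a))   # r = q^T a
            v = [vi - coef * qi for vi, qi in zip(v, q)]  # strip that part
        norm = math.sqrt(sum(vi * vi for vi in v))
        qs.append([vi / norm for vi in v])
    return qs

# (3, 4) normalizes to (0.6, 0.8); the second q is orthogonal to it
q1, q2 = gram_schmidt([[3.0, 4.0], [1.0, 0.0]])
```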

Hankel matrix H.
Constant along each antidiagonal; h_ij depends on i + j.

Hermitian matrix A^H = conj(A)^T = A.
Complex analog a_ji = conj(a_ij) of a symmetric matrix.

Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = integral from 0 to 1 of x^(i−1) x^(j−1) dx. Positive definite but extremely small lambda_min and large condition number: H is ill-conditioned.
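Building the Hilbert matrix is a one-liner; exact fractions make the pattern visible (and show it is constant on antidiagonals, so it is also a Hankel matrix):

```python
from fractions import Fraction

def hilb(n):
    """n x n Hilbert matrix with exact entries H_ij = 1/(i + j - 1),
    using 1-based indices i, j."""
    return [[Fraction(1, i + j - 1) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

# First row of hilb(3) is 1, 1/2, 1/3; the main antidiagonal is constant
H = hilb(3)
```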

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(lambda) = det(A − lambda I) if no eigenvalues are repeated; always m(lambda) divides p(lambda).

Multiplication Ax
= x_1 (column 1) + ... + x_n (column n) = combination of the columns of A.
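The column picture can be coded directly: accumulate x_j times column j instead of taking row-by-row dot products. Both give the same vector; a sketch:

```python
def matvec_by_columns(A, x):
    """Compute Ax as x_1*(column 1) + ... + x_n*(column n)."""
    m, n = len(A), len(x)
    result = [0.0] * m
    for j in range(n):           # add x_j times column j of A
        for i in range(m):
            result[i] += x[j] * A[i][j]
    return result

# With x = (1, 1) the product is just the sum of the two columns
A = [[1.0, 2.0], [3.0, 4.0]]
v = matvec_by_columns(A, [1.0, 1.0])
```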

Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i != j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^-1 and q_1, ..., q_n is an orthonormal basis for R^n: every v = sum of (v^T q_j) q_j.

Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers have |l_ij| <= 1. See condition number.
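A sketch of elimination with partial pivoting: before clearing each column, swap up the row with the largest entry in that column, so every multiplier has absolute value at most 1 (the function name is illustrative):

```python
def eliminate_with_partial_pivoting(A):
    """Row-reduce A to upper triangular form, choosing the largest
    available pivot in each column before eliminating below it."""
    A = [row[:] for row in A]          # work on a copy
    n = len(A)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # biggest pivot
        A[k], A[p] = A[p], A[k]        # swap it into the pivot row
        for i in range(k + 1, n):
            l = A[i][k] / A[k][k]      # |l| <= 1 by choice of pivot
            A[i] = [a - l * b for a, b in zip(A[i], A[k])]
    return A

# The first step swaps rows so that 3.0, not 1.0, is the pivot
U = eliminate_with_partial_pivoting([[1.0, 2.0], [3.0, 4.0]])
```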

Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = 1 or −1) based on the number of row exchanges to reach I.
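Both facts are easy to demonstrate: build P by reordering the rows of I, then check that PA reorders the rows of A the same way. A sketch with illustrative helper names:

```python
def perm_matrix(order):
    """order[k] = which row of I goes into row k of P (0-based)."""
    n = len(order)
    return [[1 if j == order[k] else 0 for j in range(n)]
            for k in range(n)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# P has the rows of I in order 3, 1, 2; PA reorders A's rows the same way
P = perm_matrix([2, 0, 1])
A = [[10, 11], [20, 21], [30, 31]]
PA = mat_mul(P, A)
```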

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.

Right inverse A+.
If A has full row rank m, then A+ = A^T (A A^T)^-1 has A A+ = I_m.
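For a 1 × n matrix the formula is especially transparent, since A A^T is a scalar. A sketch with a hypothetical 1 × 2 example:

```python
# A = [3 4] has full row rank (m = 1), and A A^T = 3^2 + 4^2 = 25.
A = [[3.0, 4.0]]
AAT = A[0][0] ** 2 + A[0][1] ** 2               # the 1x1 matrix A A^T

# Right inverse A+ = A^T (A A^T)^{-1}: an n x m (here 2x1) matrix
A_plus = [[A[0][0] / AAT],
          [A[0][1] / AAT]]

# A times A+ should equal I_1, i.e. the scalar 1
check = A[0][0] * A_plus[0][0] + A[0][1] * A_plus[1][0]
```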

Rotation matrix R.
R = [c −s; s c] rotates the plane by theta and R^-1 = R^T rotates back by −theta. Eigenvalues are e^(i theta) and e^(−i theta), eigenvectors are (1, ∓i). Here c = cos theta, s = sin theta.
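A sketch of the rotation matrix in action: a quarter turn sends (1, 0) to (0, 1), and R^T undoes it.

```python
import math

def rotation(theta):
    """2x2 matrix rotating the plane counterclockwise by theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s],
            [s,  c]]

R = rotation(math.pi / 2)              # quarter turn
x, y = 1.0, 0.0
rx = R[0][0] * x + R[0][1] * y         # rotate the point (1, 0)
ry = R[1][0] * x + R[1][1] * y
# (rx, ry) is (0, 1) up to floating-point roundoff
```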

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x >= 0 are satisfied). The minimum cost is attained at a corner!

Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
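The identity Tr AB = Tr BA holds even when AB and BA differ, which a small sketch makes concrete:

```python
def trace(M):
    """Sum of the diagonal entries of a square matrix."""
    return sum(M[i][i] for i in range(len(M)))

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
AB, BA = mat_mul(A, B), mat_mul(B, A)
# AB != BA here, but trace(AB) == trace(BA)
```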