# Solutions for Chapter 3.3: Discrete Mathematics and Its Applications 7th Edition

## Full solutions for Discrete Mathematics and Its Applications | 7th Edition

ISBN: 9780073383095


This expansive textbook survival guide covers the following chapters and their solutions. It was created for the textbook Discrete Mathematics and Its Applications, 7th edition, associated with ISBN 9780073383095. Since 44 problems in Chapter 3.3 have been answered, more than 354,511 students have viewed full step-by-step solutions from this chapter. Chapter 3.3 includes 44 full step-by-step solutions.

Key Math Terms and definitions covered in this textbook
• Augmented matrix [A b].

Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
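The rank test above is easy to check numerically. A minimal sketch in NumPy (the matrices here are made-up examples, not from the textbook):

```python
import numpy as np

# Solvability check: Ax = b is solvable exactly when rank([A b]) == rank(A).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1: the second row is twice the first
b_good = np.array([[3.0], [6.0]])   # lies in the column space of A
b_bad  = np.array([[3.0], [7.0]])   # does not lie in the column space

def solvable(A, b):
    """True when b is in the column space of A, i.e. rank([A b]) == rank(A)."""
    return np.linalg.matrix_rank(np.hstack([A, b])) == np.linalg.matrix_rank(A)
```

Appending b cannot lower the rank, so the only question is whether it raises it; if it does, b is outside the column space and no x exists.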

• Circulant matrix C.

Constant diagonals wrap around as in the cyclic shift S. Every circulant is c0 I + c1 S + ... + c(n-1) S^(n-1). Cx = convolution c * x. Eigenvectors are in the Fourier matrix F.

• Cofactor Cij.

Remove row i and column j; multiply the resulting determinant by (-1)^(i+j).

• Complete solution x = xp + xn to Ax = b.

(Particular solution xp) + (any xn in the nullspace).

• Diagonalization

Λ = S^(-1) A S. Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^(-1).
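The last identity is what makes diagonalization useful: powers of A cost only powers of the eigenvalues. A small NumPy sketch with an invented 2-by-2 example:

```python
import numpy as np

# Diagonalization: S holds the eigenvectors, Λ the eigenvalues of A.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, S = np.linalg.eig(A)        # columns of S are eigenvectors of A
Lam = np.diag(eigvals)               # Λ = diagonal eigenvalue matrix

# A = S Λ S^(-1), and A^3 = S Λ^3 S^(-1).
A_rebuilt = S @ Lam @ np.linalg.inv(S)
A_cubed   = S @ np.diag(eigvals**3) @ np.linalg.inv(S)
```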

• Dot product = Inner product x^T y = x1 y1 + ... + xn yn.

Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)ij = (row i of A) · (column j of B).
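A quick numerical check of perpendicularity with made-up vectors:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, -1.0, 0.0])

# x^T y = 1*2 + 2*(-1) + 2*0 = 0, so x and y are perpendicular.
dot = x @ y
perpendicular = np.isclose(dot, 0.0)
```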

• Free columns of A.

Columns without pivots; these are combinations of earlier columns.

• Gauss-Jordan method.

Invert A by row operations on [A I] to reach [I A^(-1)].
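The method can be sketched directly: augment A with the identity and row-reduce until the left block is I. This is an illustrative sketch only (it assumes the pivots are nonzero in order, with no row exchanges or pivoting refinements):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A I] to [I A^(-1)]. Illustrative sketch: assumes
    A is invertible with nonzero pivots in natural order."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # the block matrix [A I]
    for i in range(n):
        M[i] = M[i] / M[i, i]                     # scale so the pivot becomes 1
        for j in range(n):
            if j != i:
                M[j] = M[j] - M[j, i] * M[i]      # clear column i in other rows
    return M[:, n:]                               # right block is now A^(-1)

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A_inv = gauss_jordan_inverse(A)
```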

• Hessenberg matrix H.

Triangular matrix with one extra nonzero adjacent diagonal.

• Kronecker product (tensor product) A ⊗ B.

Blocks aij B; eigenvalues λp(A) λq(B).

• Krylov subspace Kj(A, b).

The subspace spanned by b, Ab, ..., A^(j-1) b. Numerical methods approximate A^(-1) b by xj with residual b - A xj in this subspace. A good basis for Kj requires only multiplication by A at each step.
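Building the basis really does need only one matrix-vector product per step, as a short sketch shows (example matrix and vector are invented):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Columns b, Ab, ..., A^(j-1) b spanning the Krylov subspace Kj(A, b).
    Each new column costs one multiplication by A."""
    cols = [b]
    for _ in range(j - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([1.0, 1.0])
K = krylov_basis(A, b, 2)   # columns b and Ab
```

In practice these raw columns become nearly parallel, which is why methods like Arnoldi orthogonalize them; the sketch shows only the subspace itself.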

• Left nullspace N(A^T).

Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.

• Matrix multiplication AB.

The i, j entry of AB is (row i of A) · (column j of B) = Σ aik bkj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
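The "columns times rows" view is the least familiar of the four, so here is a quick numerical check that the sum of outer products matches the ordinary product (matrices invented for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Standard entry rule: (AB)ij = sum over k of aik bkj.
AB = A @ B

# Columns times rows: AB = sum over k of (column k of A)(row k of B).
AB_outer = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
```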

• Projection p = a (a^T b / a^T a) onto the line through a.

P = a a^T / a^T a has rank 1.
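Both the projected vector and the rank-1 projection matrix are one-liners in NumPy (vectors invented for illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 3.0])

# Projection of b onto the line through a: p = a (a^T b / a^T a).
p = a * (a @ b) / (a @ a)

# Projection matrix P = a a^T / a^T a: rank 1, and P^2 = P.
P = np.outer(a, a) / (a @ a)
```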

• Pseudoinverse A+ (Moore-Penrose inverse).

The n by m matrix that "inverts" A from column space back to row space, with N(A+) = N(A^T). A+ A and A A+ are the projection matrices onto the row space and column space. Rank(A+) = rank(A).
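These properties can be verified directly with NumPy's pseudoinverse on a rank-1 rectangular example (matrix invented for illustration):

```python
import numpy as np

# A rank-1, 3-by-2 matrix: no ordinary inverse exists.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
A_plus = np.linalg.pinv(A)     # the 2-by-3 Moore-Penrose pseudoinverse

# A+ A and A A+ project onto the row space and column space of A.
row_proj = A_plus @ A
col_proj = A @ A_plus
```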

• Rotation matrix

R = [c -s; s c] rotates the plane by θ and R^(-1) = R^T rotates back by -θ. Eigenvalues are e^(iθ) and e^(-iθ); eigenvectors are (1, ±i). c, s = cos θ, sin θ.
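A short check that the transpose undoes the rotation and that the eigenvalues land on the unit circle at ±θ (angle chosen arbitrarily):

```python
import numpy as np

theta = np.pi / 3                      # rotate by 60 degrees
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])

# R^(-1) = R^T: rotating forward then back gives the identity.
round_trip = R.T @ R

# Eigenvalues are e^(i θ) and e^(-i θ) = c ± i s.
eigvals = np.linalg.eigvals(R)
```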

• Singular matrix A.

A square matrix that has no inverse: det(A) = 0.

• Solvable system Ax = b.

The right side b is in the column space of A.

• Toeplitz matrix.

Constant down each diagonal = time-invariant (shift-invariant) filter.

• Transpose matrix A^T.

Entries (A^T)ij = Aji. If A is m by n, then A^T is n by m; A^T A is square, symmetric, and positive semidefinite. The transposes of AB and A^(-1) are B^T A^T and (A^T)^(-1).
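The reversal rule (AB)^T = B^T A^T and the semidefiniteness of A^T A are easy to confirm numerically (matrices invented for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])       # A is 2 by 3, so A^T is 3 by 2
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])

# (AB)^T = B^T A^T: the order reverses under transposition.
lhs = (A @ B).T
rhs = B.T @ A.T

# A^T A is square, symmetric, and positive semidefinite.
AtA = A.T @ A
```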