 2.6.1E: Let A = a) What size is A?________________b) What is the third colu...
 2.6.2E: Find A + B. where ________________
 2.6.3E: Find AB if ________________ ________________
 2.6.4E: Find the product AB. where ________________ ________________
 2.6.5E: Find a matrix A such that [Hint: Finding A requires that you solve ...
 2.6.6E: Find a matrix A such that
 2.6.7E: Let A be an m × n matrix and let 0 be the m × n matrix that has all...
 2.6.8E: Show that matrix addition is commutative; that is, show that if A a...
 2.6.9E: Show that matrix addition is associative: that is, show that if A, ...
 2.6.10E: Let A be a 3 × 4 matrix, B be a 4 × 5 matrix, and C be a 4×4 matrix...
 2.6.11E: What do we know about the sizes of the matrices A and B if both of ...
 2.6.12E: In this exercise we show that matrix multiplication is distributive...
 2.6.13E: In this exercise we show that matrix multiplication is associative....
 2.6.14E: The n × n matrix A = [aij] is called a diagonal matrix if aij = 0 whe...
 2.6.15E: Let Find a formula for An , whenever n is a positive integer.
 2.6.16E: Show that (A^t)^t = A.
 2.6.17E: Let A and B be two n × n matrices. Show that a) (A + B)^t = A^t + B^t. ...
 2.6.18E:
 2.6.19E:
 2.6.20E: Let a) Find A^-1. [Hint: Use Exercise 19.]________________b) Find A^3...
 2.6.21E: Let A be an invertible matrix. Show that (A^n)^-1 = (A^-1)^n whenever ...
 2.6.22E: Let A be a matrix. Show that the matrix AAt is symmetric. Hint: Sh...
 2.6.23E: Suppose that A is an n × n matrix where n is a positive integer. Sh...
 2.6.24E: a) Show that the system of simultaneous linear equations in the vari...
 2.6.25E: Use Exercises 18 and 24 to solve the system
 2.6.26E: Let Find ________________ ________________
 2.6.27E: Let Find ________________ ________________
 2.6.28E: Find the Boolean product of A and B. where
 2.6.29E: Let Find ________________ ________________
 2.6.30E: Let A be a zero–one matrix. Show that a) A ∨ A = A. ________________...
 2.6.31E: In this exercise we show that the meet and join operations are comm...
 2.6.32E: In this exercise we show that the meet and join operations are asso...
 2.6.33E: We will establish distributive laws of the meet over the join opera...
 2.6.34E: Let A be an n × n zero–one matrix. Let I be the n × n identity matr...
 2.6.35E: In this exercise we will show that the Boolean product of zero–one ...
Solutions for Chapter 2.6: Discrete Mathematics and Its Applications, 7th Edition
ISBN: 9780073383095

Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).
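This picture can be checked numerically. A minimal NumPy sketch (the 2 × 2 matrix and right side are illustrative choices, not from the text):

```python
import numpy as np

# Illustrative 2x2 system (numbers are an assumption for this example)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

# Solve Ax = b, then verify b is the combination x1*(col 1) + x2*(col 2)
x = np.linalg.solve(A, b)
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.allclose(combo, b)

# b lies in C(A) exactly when appending b does not raise the rank
assert np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)
```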

Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra −ℓij in the i, j entry (i ≠ j). Then Eij A subtracts ℓij times row j of A from row i.
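A small sketch of this in NumPy, with an illustrative 2 × 2 matrix (the numbers are assumptions for the example):

```python
import numpy as np

# E21: identity with -l21 in entry (2,1); E21 @ A subtracts l21 * row 1 from row 2
A = np.array([[2.0, 1.0],
              [6.0, 8.0]])
l21 = A[1, 0] / A[0, 0]           # multiplier = entry to eliminate / pivot = 3
E21 = np.eye(2)
E21[1, 0] = -l21

EA = E21 @ A
assert EA[1, 0] == 0.0            # the (2,1) entry is eliminated
assert np.allclose(EA[1], A[1] - l21 * A[0])
```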

Factorization A = LU.
If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
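Continuing the same kind of small example, one elimination step produces U, and L with the multiplier on its subdiagonal rebuilds A (the matrix is illustrative):

```python
import numpy as np

# One-step elimination on an assumed 2x2 A, then rebuild A = L @ U
A = np.array([[2.0, 1.0],
              [6.0, 8.0]])
l21 = A[1, 0] / A[0, 0]           # multiplier l21 = 3
U = A.copy()
U[1] -= l21 * U[0]                # upper triangular after elimination
L = np.array([[1.0, 0.0],
              [l21, 1.0]])        # lower triangular, unit diagonal
assert np.allclose(L @ U, A)      # L brings U back to A
```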

Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use A^H for complex A.

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column qj of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
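NumPy's Householder-based `numpy.linalg.qr` produces the same A = QR shape, though it does not guarantee the diag(R) > 0 convention. A small sketch with an assumed 3 × 2 matrix:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = np.linalg.qr(A)            # A = QR with orthonormal columns in Q

assert np.allclose(Q.T @ Q, np.eye(2))     # orthonormal columns
assert np.allclose(np.triu(R), R)          # R is upper triangular
assert np.allclose(Q @ R, A)
```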

Independent vectors v1, ..., vk.
No combination c1v1 + ... + ckvk = zero vector unless all ci = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
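A quick rank test makes this concrete; the vectors here are illustrative, and independence is read off from `numpy.linalg.matrix_rank`:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])

# Independent: rank equals the number of columns, so Ax = 0 only for x = 0
A = np.column_stack([v1, v2])
assert np.linalg.matrix_rank(A) == 2

# Dependent: adding v3 = v1 + v2 does not raise the rank
v3 = v1 + v2
B = np.column_stack([v1, v2, v3])
assert np.linalg.matrix_rank(B) == 2      # rank < 3 columns
```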

Iterative method.
A sequence of steps intended to approach the desired solution.

Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

Left nullspace N(A^T).
Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.
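One way to compute a basis for N(A^T) is from the SVD, a technique not mentioned in the entry itself: the left singular vectors beyond the rank span the left nullspace. The matrix below is an assumed rank-1 example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])          # rank 1, so N(A^T) has dimension 3 - 1 = 2

U, s, Vt = np.linalg.svd(A)         # full SVD: U is 3x3
r = int(np.sum(s > 1e-10))          # numerical rank
left_null = U[:, r:]                # columns y satisfy y^T A = 0^T
assert np.allclose(left_null.T @ A, 0)
assert left_null.shape[1] == 3 - r
```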

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A − λI) if no eigenvalues are repeated; always m(λ) divides p(λ).

Multiplier ℓij.
The pivot row j is multiplied by ℓij and subtracted from row i to eliminate the i, j entry: ℓij = (entry to eliminate) / (jth pivot).

Particular solution xp.
Any solution to Ax = b; often xp has free variables = 0.

Plane (or hyperplane) in R^n.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.

Pseudoinverse A^+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+A and AA^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).
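`numpy.linalg.pinv` computes A^+, and the projection and rank properties above can be verified directly. The singular matrix here is an illustrative choice:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])          # singular, rank 1
A_plus = np.linalg.pinv(A)

# A+ A and A A+ are projections (P @ P = P) onto row space and column space
P_row = A_plus @ A
P_col = A @ A_plus
assert np.allclose(P_row @ P_row, P_row)
assert np.allclose(P_col @ P_col, P_col)
assert np.linalg.matrix_rank(A_plus) == np.linalg.matrix_rank(A)
```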

Rotation matrix.
R = [c −s; s c] rotates the plane by θ and R^-1 = R^T rotates back by −θ. Eigenvalues are e^{iθ} and e^{−iθ}, eigenvectors are (1, ±i). c, s = cos θ, sin θ.
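A quick numerical check of these facts, with an assumed angle θ = 0.3:

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])

assert np.allclose(R.T @ R, np.eye(2))       # R^{-1} = R^T

# Eigenvalues are e^{-i theta} and e^{i theta}
evals = sorted(np.linalg.eigvals(R), key=lambda z: z.imag)
assert np.allclose(evals, [np.exp(-1j * theta), np.exp(1j * theta)])
```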

Skew-symmetric matrix K.
The transpose is −K, since Kij = −Kji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^{Kt} is an orthogonal matrix.

Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.

Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
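A Toeplitz matrix can be built directly from its first column and first row (the entries below are illustrative), and the constant-diagonal property is then easy to verify:

```python
import numpy as np

# Entry (i, j) depends only on i - j
first_col = [1.0, 2.0, 3.0]      # values on and below the main diagonal
first_row = [1.0, 4.0, 5.0]      # values on and above the main diagonal
n = 3
T = np.array([[first_col[i - j] if i >= j else first_row[j - i]
               for j in range(n)] for i in range(n)])

# Each diagonal is constant: T[i][j] == T[i+1][j+1]
for i in range(n - 1):
    for j in range(n - 1):
        assert T[i, j] == T[i + 1, j + 1]
```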

Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c0 + ... + c_{n−1} x^{n−1} with p(xi) = bi. Vij = (xi)^{j−1} and det V = product of (xk − xi) for k > i.
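A sketch of both facts with `numpy.vander` (NumPy's 0-indexed columns give V[i, j] = x_i^j; the sample points and values are assumptions for this example):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 5.0, 10.0])               # values of x^2 + 1 at the points

V = np.vander(x, increasing=True)            # V[i, j] = x_i ** j
c = np.linalg.solve(V, b)                    # coefficients c0 + c1 x + c2 x^2
assert np.allclose(c, [1.0, 0.0, 1.0])       # recovers p(x) = 1 + x^2

# det V = product of (x_k - x_i) for k > i
det = np.prod([x[k] - x[i] for i in range(3) for k in range(i + 1, 3)])
assert np.isclose(np.linalg.det(V), det)
```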

Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.