9.2.1E: List the triples in the relation {(a, b, c) | a, b, and c are inte...
9.2.2E: Which 4-tuples are in the relation {(a, b, c, d) | a, b, c, and d ar...
9.2.3E: List the 5-tuples in the relation in Table 8.
9.2.4E: Assuming that no new n-tuples are added, find all the primary keys ...
9.2.5E: Assuming that no new n-tuples are added, find a composite key with ...
9.2.7E: The 3-tuples in a 3-ary relation represent the following attributes...
9.2.9E: The 5-tuples in a 5-ary relation represent these attributes of all ...
9.2.10E: What do you obtain when you apply the selection operator s_C, where ...
9.2.11E: What do you obtain when you apply the selection operator s_C, where ...
9.2.12E: What do you obtain when you apply the selection operator s_C, where ...
9.2.13E: What do you obtain when you apply the selection operator s_C, where ...
9.2.14E: What do you obtain when you apply the projection P_{2,3,5} to the 5-tu...
9.2.15E: Which projection mapping is used to delete the first, second, and f...
9.2.16E: Display the table produced by applying the projection P_{1,2,4} to Tab...
9.2.17E: Display the table produced by applying the projection P_{1,4} to Table 8.
9.2.18E: How many components are there in the n-tuples in the table obtained...
9.2.19E: Construct the table obtained by applying the join operator J_2 to th...
9.2.20E: Show that if C1 and C2 are conditions that elements of the n-ary re...
9.2.21E: Show that if C1 and C2 are conditions that elements of the n-ary re...
9.2.22E: Show that if C is a condition that elements of the n-ary relations ...
9.2.23E: Show that if C is a condition that elements of the n-ary relations ...
9.2.24E: Show that if C is a condition that elements of the n-ary relations ...
9.2.25E: Show that if R and S are both n-ary relations, then
9.2.26E: Give an example to show that if R and S are both n-ary relations, t...
9.2.27E: Give an example to show that if R and S are both n-ary relations, t...
 9.2.28E: a) What are the operations that correspond to the query expressed u...
 9.2.29E: a) What are the operations that correspond to the query expressed u...
9.2.30E: Show that an n-ary relation with a primary key can be thought of as...
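The selection operator s_C, the projection P_{i1,...,im}, and the join J_p that these exercises use can be sketched on relations stored as Python sets of tuples. The relations R and S and the sample rows below are made up for illustration; they are not taken from the exercises themselves:

```python
# n-ary relations as sets of tuples; the three operators of Section 9.2.
# All sample data here is hypothetical, not from the textbook exercises.

def select(relation, condition):
    """Selection s_C: keep the n-tuples that satisfy condition C."""
    return {t for t in relation if condition(t)}

def project(relation, indices):
    """Projection P_{i1,...,im}: keep only the listed components (1-based)."""
    return {tuple(t[i - 1] for i in indices) for t in relation}

def join(r, s, p):
    """Join J_p: combine tuples whose last p fields of r match
    the first p fields of s, without repeating the shared fields."""
    return {a + b[p:] for a in r for b in s if a[-p:] == b[:p]}

R = {("Smith", "CS", 3.9), ("Jones", "Math", 3.5), ("Lee", "CS", 3.2)}
S = {("CS", "Abercrombie Hall"), ("Math", "Beacon Hall")}

cs_students = select(R, lambda t: t[1] == "CS")
names = project(R, [1])
joined = join(project(R, [1, 2]), S, 1)   # (name, dept) joined with (dept, hall)
```

Each operator returns a new relation, so they compose freely, as in the `join(project(...), ...)` call above.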
Solutions for Chapter 9.2: Discrete Mathematics and Its Applications 7th Edition
ISBN: 9780073383095
Chapter 9.2 includes 28 full step-by-step solutions.

Affine transformation
T(v) = Av + v0 = linear transformation plus shift.

Cholesky factorization
A = C^T C = (L sqrt(D))(L sqrt(D))^T for positive definite A.
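A quick numerical sketch of this factorization using numpy (the positive definite matrix A below is illustrative); `np.linalg.cholesky` returns the lower-triangular factor, so C in the definition above corresponds to L^T:

```python
import numpy as np

# Cholesky factorization of a positive definite matrix (illustrative A).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(A)     # lower triangular; A = L @ L.T
```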

Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c0 I + c1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors are in the Fourier matrix F.
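A small numpy sketch (n = 4, coefficients made up) building C from powers of the shift S and checking that Cx is the cyclic convolution c * x, computed here through the FFT since F diagonalizes every circulant:

```python
import numpy as np

# Circulant matrix from powers of the cyclic shift S (illustrative data).
c = np.array([1.0, 2.0, 0.0, 3.0])          # coefficients c0, ..., c_{n-1}
n = len(c)
S = np.roll(np.eye(n), 1, axis=0)           # S @ x cyclically shifts x down
C = sum(c[k] * np.linalg.matrix_power(S, k) for k in range(n))

x = np.array([1.0, 0.0, 0.0, 0.0])
# C x equals the cyclic convolution c * x, computable via the FFT:
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
```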

Column space C (A) =
space of all combinations of the columns of A.

Covariance matrix Σ.
When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x − x̄)(x − x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.
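A numpy sketch with made-up samples (3 variables, 1000 observations), forming the sample covariance exactly as the mean of (x − x̄)(x − x̄)^T and checking that it is symmetric positive semidefinite:

```python
import numpy as np

# Sample covariance matrix from zero-mean data (illustrative random samples).
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 1000))          # rows = variables x_i
X = X - X.mean(axis=1, keepdims=True)       # subtract the means
Sigma = (X @ X.T) / X.shape[1]              # mean of (x - xbar)(x - xbar)^T
```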

Diagonalization
Λ = S^{-1} A S. Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^{-1}.
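A numpy sketch with an illustrative 2×2 matrix that has distinct eigenvalues (so S is invertible), verifying both A = S Λ S^{-1} and the power formula A^k = S Λ^k S^{-1}:

```python
import numpy as np

# Diagonalization of a matrix with distinct eigenvalues (illustrative A).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, S = np.linalg.eig(A)               # columns of S = eigenvectors
Lam = np.diag(eigvals)                      # Lambda = eigenvalue matrix
S_inv = np.linalg.inv(S)
```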

Dot product = Inner product x^T y = x_1 y_1 + ... + x_n y_n.
The complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)_ij = (row i of A)·(column j of B).

Factorization
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers l_ij (and l_ii = 1) brings U back to A.
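A toy elimination sketch (no row exchanges, nonzero pivots assumed, matrix made up) showing the multipliers l_ij filling in L while A is reduced to U:

```python
import numpy as np

# LU factorization by elimination without row exchanges (toy implementation).
def lu_no_pivot(A):
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for j in range(n):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]     # multiplier l_ij (pivot assumed nonzero)
            U[i, :] -= L[i, j] * U[j, :]    # eliminate the entry below the pivot
    return L, U

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])
L, U = lu_no_pivot(A)
```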

Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^{j-1} b. Numerical methods approximate A^{-1} b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
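A numpy sketch (illustrative 3×3 matrix and starting vector) building the Krylov vectors b, Ab, A^2 b with exactly one matrix–vector product per step:

```python
import numpy as np

# Building a Krylov basis b, Ab, ..., A^{j-1} b (illustrative A and b).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 0.0])

j = 3
K = np.empty((3, j))
v = b
for k in range(j):
    K[:, k] = v
    v = A @ v           # the only work per step: one multiplication by A
```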

Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).

Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.

Normal equation A^T A x̂ = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b − Ax̂) = 0.
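A numpy sketch fitting a line to three made-up data points: solve A^T A x̂ = A^T b directly, then confirm the residual is perpendicular to the columns of A and that the answer matches numpy's own least squares solver:

```python
import numpy as np

# Least squares via the normal equation (illustrative line-fitting data).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                  # columns: intercept, slope at t = 0,1,2
b = np.array([6.0, 0.0, 0.0])
xhat = np.linalg.solve(A.T @ A, A.T @ b)    # solves A^T A xhat = A^T b
residual = b - A @ xhat                     # perpendicular to column space of A
```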

Normal matrix.
If N N^T = N^T N, then N has orthonormal (complex) eigenvectors.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S, then P = A(A^T A)^{-1} A^T.
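A numpy sketch (illustrative basis matrix A and vector b) forming P = A(A^T A)^{-1} A^T and checking the defining properties P^2 = P = P^T and e ⟂ S:

```python
import numpy as np

# Projection matrix onto the column space of A (illustrative data).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                  # independent columns = basis for S
P = A @ np.linalg.inv(A.T @ A) @ A.T

b = np.array([6.0, 0.0, 0.0])
p = P @ b                                    # closest point to b in S
e = b - p                                    # error, perpendicular to S
```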

Pseudoinverse A^+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and A A^+ are the projection matrices onto the row space and column space. Rank(A^+) = rank(A).
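A numpy sketch on a made-up rank-1 (hence non-invertible) matrix, checking the Moore-Penrose identities and that A^+ preserves rank:

```python
import numpy as np

# Pseudoinverse of a singular matrix (illustrative rank-1 A).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                  # rank 1, so A has no inverse
Aplus = np.linalg.pinv(A)
```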

Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal distribution for randn.

Rank r(A)
= number of pivots = dimension of column space = dimension of row space.

Row picture of Ax = b.
Each equation gives a plane in R^n; the planes intersect at x.

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Spectral Theorem A = QΛQ^T.
Real symmetric A has real eigenvalues λ and orthonormal eigenvectors q.
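A numpy sketch with an illustrative symmetric 2×2 matrix; `np.linalg.eigh` is the symmetric eigensolver, returning real eigenvalues and an orthonormal Q:

```python
import numpy as np

# Spectral theorem A = Q Lambda Q^T for a symmetric matrix (illustrative A).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, Q = np.linalg.eigh(A)                  # real eigenvalues, orthonormal columns
```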