 1.6.1: According to the assignment rule, what is the precondition in the f...
 1.6.2: According to the assignment rule, what is the precondition in the f...
 1.6.3: According to the assignment rule, what is the precondition in the f...
 1.6.4: According to the assignment rule, what is the precondition in the f...
 1.6.5: Verify the correctness of the following program segment with the pr...
 1.6.6: Verify the correctness of the following program segment with the pr...
 1.6.7: Verify the correctness of the following program segment with the pr...
 1.6.8: Verify the correctness of the following program segment with the pr...
 1.6.9: Verify the correctness of the following program segment to compute ...
 1.6.10: Verify the correctness of the following program segment to compute ...
 1.6.11: Verify the correctness of the following program segment with the pr...
 1.6.12: Verify the correctness of the following program segment with the pr...
 1.6.13: Verify the correctness of the following program segment with the pr...
 1.6.14: Verify the correctness of the following program segment to compute ...
 1.6.15: Verify the correctness of the following program segment to compute ...
 1.6.16: Verify the correctness of the following program segment with the as...
Solutions for Chapter 1.6: Logic Programming
Full solutions for Mathematical Structures for Computer Science, 7th Edition
ISBN: 9781429215107
Since 16 problems in Chapter 1.6: Logic Programming have been answered, more than 9675 students have viewed full step-by-step solutions from this chapter.

Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
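As a small illustration (a sketch assuming Python with NumPy; the node count and edge list are hypothetical), the definition translates directly into code:

```python
import numpy as np

# Hypothetical directed graph on 4 nodes; edges are (from, to) pairs (0-based).
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1  # a_ij = 1 when there is an edge from node i to node j

# For an undirected graph each edge goes both ways, so A equals its transpose.
A_undirected = np.maximum(A, A.T)
```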

Cholesky factorization
A = C^T C = (L√D)(L√D)^T for positive definite A.
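A quick numerical check (a sketch assuming NumPy; the matrix is a made-up positive definite example). Note `np.linalg.cholesky` returns the lower-triangular factor, which plays the role of C^T in the notation above:

```python
import numpy as np

# Hypothetical positive definite matrix.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)  # lower triangular, so L = C^T in the notation above

assert np.allclose(L @ L.T, A)     # A = (C^T)(C^T)^T = C^T C
assert np.allclose(L, np.tril(L))  # the factor really is triangular
```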

Dot product = Inner product x^T y = x_1 y_1 + ... + x_n y_n.
The complex dot product is x̄^T y (conjugate transpose). Perpendicular vectors have x^T y = 0. (AB)_ij = (row i of A) · (column j of B).

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
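The process can be sketched in a few lines (assuming NumPy; the matrix is a hypothetical example, and production code would prefer `np.linalg.qr`, which uses Householder reflections instead):

```python
import numpy as np

def gram_schmidt(A):
    """Classical Gram-Schmidt: A = QR, orthonormal Q, upper-triangular R.

    A sketch for full-column-rank A only.
    """
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component along earlier q_i
            v -= R[i, j] * Q[:, i]        # subtract it off
        R[j, j] = np.linalg.norm(v)       # convention: diag(R) > 0
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = gram_schmidt(A)
```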

Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.

Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = ∫_0^1 x^(i−1) x^(j−1) dx. Positive definite but with extremely small λ_min and large condition number: H is ill-conditioned.

Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries −1 and +1 in columns i and j.

|A^-1| = 1/|A| and |A^T| = |A|.
The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n − 1, and the volume of a box = |det(A)|.
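These identities are easy to sanity-check numerically (a sketch assuming NumPy; the 2 by 2 matrix is hypothetical):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
detA = np.linalg.det(A)  # 2*3 - 1*1 = 5

assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / detA)
assert np.isclose(np.linalg.det(A.T), detA)

# |det(A)| is the area of the parallelogram spanned by the rows of A.
assert np.isclose(abs(detA), 5.0)
```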

Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
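For example (a sketch assuming NumPy; the data points are made up), fitting a line b ≈ c + d t through four points via the normal equations:

```python
import numpy as np

# Hypothetical data: fit a line b ≈ c + d*t.
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 2.2, 2.9, 4.1])
A = np.column_stack([np.ones_like(t), t])

# Normal equations A^T A x̂ = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# The error e = b - A x̂ is orthogonal to every column of A.
e = b - A @ x_hat
```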

Length II x II.
Square root of x^T x (Pythagoras in n dimensions).

Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If all m_ij > 0, the columns of M^k approach the steady-state eigenvector s: Ms = s > 0.
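A small demonstration (a sketch assuming NumPy; the transition probabilities are hypothetical) of powers of M driving any starting distribution to the steady state:

```python
import numpy as np

# Hypothetical column-stochastic Markov matrix (all entries positive).
M = np.array([[0.8, 0.3],
              [0.2, 0.7]])
assert np.allclose(M.sum(axis=0), 1.0)  # each column sums to 1

# Repeated multiplication by M approaches the eigenvector with λ = 1.
x = np.array([1.0, 0.0])  # any starting probability vector
for _ in range(100):
    x = M @ x
s = x  # steady state: Ms = s
```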

Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
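A standard example where the two multiplicities differ (a sketch assuming NumPy; the Jordan-block matrix is the usual textbook case):

```python
import numpy as np

# For this A, λ = 2 is a double root of det(A - λI) = 0, so AM = 2,
# but there is only one independent eigenvector, so GM = 1.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

eigvals = np.linalg.eigvals(A)
AM = int(np.sum(np.isclose(eigvals, 2.0)))  # algebraic multiplicity

# GM = dimension of the nullspace of (A - 2I) = n - rank(A - 2I).
GM = A.shape[0] - np.linalg.matrix_rank(A - 2.0 * np.eye(2))
```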

Norm ||A||.
The "ℓ² norm" of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x||, ||AB|| ≤ ||A|| ||B||, and ||A + B|| ≤ ||A|| + ||B||. The Frobenius norm is ||A||_F² = Σ Σ a_ij². The ℓ¹ and ℓ∞ norms are the largest column and row sums of |a_ij|.
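The four norms above can be compared on one matrix (a sketch assuming NumPy; the matrix is hypothetical, and the `ord` values follow `np.linalg.norm`'s conventions):

```python
import numpy as np

# Hypothetical matrix for comparing the norms.
A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

two_norm = np.linalg.norm(A, 2)        # σ_max, the largest singular value
fro_norm = np.linalg.norm(A, 'fro')    # sqrt of the sum of squares of entries
one_norm = np.linalg.norm(A, 1)        # largest absolute column sum
inf_norm = np.linalg.norm(A, np.inf)   # largest absolute row sum

# ||Ax|| <= ||A|| ||x|| for the ℓ² norm.
x = np.array([1.0, 1.0])
assert np.linalg.norm(A @ x) <= two_norm * np.linalg.norm(x) + 1e-12
```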

Partial pivoting.
In each column, choose the largest available pivot to control roundoff; then all multipliers have |ℓ_ij| ≤ 1. See condition number.

Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.

Projection matrix P onto subspace S.
The projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P² = P = P^T, the eigenvalues are 1 or 0, and the eigenvectors are in S or S⊥. If the columns of A are a basis for S, then P = A(A^T A)^-1 A^T.
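These properties can be verified directly (a sketch assuming NumPy; the basis and the vector b are hypothetical):

```python
import numpy as np

# Columns of A: a hypothetical basis for a 2D subspace S of R^3.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T  # P = A (A^T A)^-1 A^T

b = np.array([1.0, 2.0, 3.0])
p = P @ b   # closest point to b in S
e = b - p   # error, perpendicular to S (hence to every column of A)
```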

Pseudoinverse A^+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+A and AA^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).
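A rank-deficient example makes the point (a sketch assuming NumPy; the rank-1 matrix is hypothetical): A has no inverse, but `np.linalg.pinv` still produces A^+ with the projection properties above.

```python
import numpy as np

# Hypothetical rank-1 matrix: singular, but it has a pseudoinverse.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
A_plus = np.linalg.pinv(A)

# A^+ A and A A^+ project onto the row space and column space of A.
P_row = A_plus @ A
P_col = A @ A_plus
```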

Spectral Theorem A = QΛQ^T.
Real symmetric A has real eigenvalues λ and orthonormal eigenvectors q.

Standard basis for Rn.
Columns of the n by n identity matrix (written i, j, k in R^3).

Tridiagonal matrix T: t_ij = 0 if |i − j| > 1.
T^-1 has rank 1 above and below the diagonal.