- 5.5.1E: Exercises 1–5 contain a while loop and a predicate. In each case sh...
- 5.5.2E: Exercises 1–5 contain a while loop and a predicate. In each case sh...
- 5.5.3E: Exercises 1–5 contain a while loop and a predicate. In each case sh...
- 5.5.4E: Exercises 1–5 contain a while loop and a predicate. In each case sh...
- 5.5.5E: Exercises 1–5 contain a while loop and a predicate. In each case sh...
- 5.5.6E: Exercises 6–9 each contain a while loop annotated with a pre- and a p...
- 5.5.7E: Exercises 6–9 each contain a while loop annotated with a pre- and a p...
- 5.5.8E: Exercises 6–9 each contain a while loop annotated with a pre- and a p...
- 5.5.9E: Exercises 6–9 each contain a while loop annotated with a pre- and a p...
- 5.5.10E: Prove correctness of the while loop of Algorithm 4.8.3 (in exercise...
- 5.5.11E: The following while loop implements a way to multiply two numbers t...
Solutions for Chapter 5.5: Discrete Mathematics with Applications 4th Edition
Back substitution. Upper triangular systems are solved in reverse order x_n to x_1.
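Back substitution on an upper triangular system can be sketched in a few lines of numpy; the matrix and right-hand side below are illustrative values, not from the text.

```python
import numpy as np

def back_substitution(U, b):
    """Solve Ux = b for upper triangular U, in reverse order x_n down to x_1."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):              # last unknown first
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([5.0, 8.0, 4.0])
x = back_substitution(U, b)
assert np.allclose(U @ x, b)                    # solution satisfies the system
```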
Big formula for n by n determinants.
det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A: rows in order 1, ..., n and column order given by a permutation P. Each of the n! P's has a + or − sign.
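The big formula can be checked directly for small n by summing over all permutations; this is a sketch for illustration (it is O(n!), so only practical for tiny matrices), with the sign computed from the permutation's inversion count.

```python
import numpy as np
from itertools import permutations

def big_formula_det(A):
    """det(A) as a signed sum over all n! column permutations."""
    n = A.shape[0]
    total = 0.0
    for p in permutations(range(n)):
        # sign of the permutation: (-1)^(number of inversions)
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = (-1.0) ** inversions
        for row, col in enumerate(p):           # one entry from each row and column
            term *= A[row, col]
        total += term
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(big_formula_det(A), np.linalg.det(A))
```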
Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra −ℓ_ij in the i, j entry (i ≠ j). Then E_ij A subtracts ℓ_ij times row j of A from row i.
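A minimal sketch of the elimination matrix in numpy (example values chosen for illustration):

```python
import numpy as np

def elimination_matrix(n, i, j, l):
    """Identity with -l in entry (i, j): multiplying by it subtracts l*(row j) from row i."""
    E = np.eye(n)
    E[i, j] = -l
    return E

A = np.array([[2.0, 1.0],
              [4.0, 5.0]])
l = A[1, 0] / A[0, 0]                  # multiplier l_21 = 2
E = elimination_matrix(2, 1, 0, l)
EA = E @ A
assert EA[1, 0] == 0.0                 # entry below the pivot is eliminated
assert np.allclose(EA[1], A[1] - l * A[0])
```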
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓ_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
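The factorization with row exchanges can be verified numerically, assuming SciPy is available; note scipy's `lu` returns the convention A = P L U. The matrix below is illustrative.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 1.0, 3.0]])        # needs a row exchange (zero in the 1,1 pivot)
P, L, U = lu(A)                        # scipy convention: A = P @ L @ U
assert np.allclose(A, P @ L @ U)
assert np.allclose(np.tril(L), L)      # L lower triangular, multipliers below diagonal
assert np.allclose(np.triu(U), U)      # U upper triangular
```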
Ellipse (or ellipsoid) x^T Ax = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^-1 y||^2 = y^T (A A^T)^-1 y = 1 displayed by eigshow; axis lengths σ_i.)
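The axis claim can be checked numerically: for a positive definite A, the scaled eigenvectors q_i/√λ_i land exactly on the ellipse x^T Ax = 1. A sketch with an illustrative 2×2 matrix:

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])                  # symmetric positive definite
lam, Q = np.linalg.eigh(A)                  # eigenvalues 1 and 9, orthonormal Q
# The semi-axis endpoints q_i / sqrt(lambda_i) lie on the ellipse x^T A x = 1.
for l, q in zip(lam, Q.T):
    x = q / np.sqrt(l)
    assert np.isclose(x @ A @ x, 1.0)
```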
Fibonacci numbers 0, 1, 1, 2, 3, 5, ... satisfy F_n = F_{n-1} + F_{n-2} = (λ_1^n − λ_2^n)/(λ_1 − λ_2). Growth rate λ_1 = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [1 1; 1 0].
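Both claims are easy to verify in a short numpy sketch:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2             # growth rate lambda_1
psi = (1 - np.sqrt(5)) / 2             # other eigenvalue lambda_2

def fib_closed_form(n):
    """F_n = (lambda_1^n - lambda_2^n) / (lambda_1 - lambda_2), rounded to an integer."""
    return round((phi**n - psi**n) / (phi - psi))

F = np.array([[1, 1],
              [1, 0]])                 # the Fibonacci matrix
assert np.isclose(max(np.linalg.eigvals(F)), phi)   # largest eigenvalue is phi
assert [fib_closed_form(n) for n in range(7)] == [0, 1, 1, 2, 3, 5, 8]
```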
Jordan form J = M^-1 AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k where N_k has 1's on diagonal 1. Each block has one eigenvalue λ_k and one eigenvector.
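A sketch of the one-block case: build A = M J M^-1 from a single 3×3 Jordan block (values chosen for illustration) and confirm it has one repeated eigenvalue but only one independent eigenvector.

```python
import numpy as np

lam = 5.0
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])        # one block: lam*I + N, 1's on diagonal 1
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])        # invertible generalized-eigenvector matrix
A = M @ J @ np.linalg.inv(M)
# Trace is invariant under similarity: the eigenvalue lam repeats 3 times.
assert np.isclose(np.trace(A), 3 * lam)
# Only one independent eigenvector: rank(A - lam*I) = n - 1.
assert np.linalg.matrix_rank(A - lam * np.eye(3)) == 2
# The first column of M is that eigenvector: A m1 = lam m1.
assert np.allclose(A @ M[:, 0], lam * M[:, 0])
```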
Kirchhoff's Laws. Current Law: net current (in minus out) is zero at each node. Voltage Law: potential differences (voltage drops) add to zero around any closed loop.
Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^(j-1) b. Numerical methods approximate A^-1 b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
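A minimal sketch of building the Krylov basis: each new column costs only one matrix-vector product. The small example is chosen so that K_3 already contains A^-1 b, so minimizing the residual over the subspace solves the system exactly.

```python
import numpy as np

def krylov_basis(A, b, j):
    """Columns b, Ab, ..., A^(j-1) b; one multiplication by A per step."""
    cols = [b]
    for _ in range(j - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 0.0])
K = krylov_basis(A, b, 3)
# Pick x_j = K c minimizing the residual ||b - A K c||.
c, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
x = K @ c
assert np.allclose(A @ x, b)           # here K_3 is the whole space, so x is exact
```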
Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
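Both properties can be checked in a short numpy sketch (data values are illustrative):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
# Solve the normal equations A^T A x = A^T b.
x = np.linalg.solve(A.T @ A, A.T @ b)
e = b - A @ x
assert np.allclose(A.T @ e, 0)         # error orthogonal to every column of A
# Agrees with numpy's least-squares solver.
x_np, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_np)
```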
Left inverse A+.
If A has full column rank n, then A^+ = (A^T A)^-1 A^T has A^+ A = I_n.
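A quick numerical check of the formula (the 3×2 matrix is illustrative):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])             # full column rank, n = 2
A_plus = np.linalg.inv(A.T @ A) @ A.T  # left inverse (A^T A)^-1 A^T
assert np.allclose(A_plus @ A, np.eye(2))       # A+ A = I_n
assert np.allclose(A_plus, np.linalg.pinv(A))   # matches the pseudoinverse here
```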
Left nullspace N(A^T).
Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.
Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
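The triangular example is easy to verify (entries above the diagonal are arbitrary illustrative values):

```python
import numpy as np

N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])        # triangular with zero diagonal
assert np.allclose(np.linalg.matrix_power(N, 3), 0)   # N^3 = 0: nilpotent
assert np.allclose(np.linalg.eigvals(N), 0)           # only eigenvalue is 0
```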
Rotation. R = [c −s; s c] rotates the plane by θ and R^-1 = R^T rotates back by −θ. Eigenvalues are e^{iθ} and e^{−iθ}, eigenvectors are (1, ±i). c, s = cos θ, sin θ.
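A numpy sketch confirming both claims for an arbitrary angle (θ = 0.3 is illustrative):

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])
assert np.allclose(np.linalg.inv(R), R.T)       # R^-1 = R^T rotates back
eigenvalues = sorted(np.linalg.eigvals(R), key=lambda z: z.imag)
assert np.allclose(eigenvalues, [np.exp(-1j * theta), np.exp(1j * theta)])
```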
Row space C (AT) = all combinations of rows of A.
Column vectors by convention.
Schwarz inequality |v·w| ≤ ||v|| ||w||. Then |v^T A w|^2 ≤ (v^T A v)(w^T A w) for positive definite A.
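Both inequalities can be sanity-checked on random vectors; the positive definite A is built as B B^T plus a multiple of the identity to guarantee definiteness.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(4)
w = rng.standard_normal(4)
# Cauchy-Schwarz: |v . w| <= ||v|| ||w||
assert abs(v @ w) <= np.linalg.norm(v) * np.linalg.norm(w)
# Weighted version for positive definite A.
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)            # positive definite by construction
assert (v @ A @ w) ** 2 <= (v @ A @ v) * (w @ A @ w)
```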
Singular Value Decomposition
(SVD) A = UΣV^T = (orthogonal)(diagonal)(orthogonal). First r columns of U and V are orthonormal bases of C(A) and C(A^T), Av_i = σ_i u_i with singular value σ_i > 0. Last columns are orthonormal bases of the nullspaces.
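The defining relation Av_i = σ_i u_i can be checked directly with numpy's SVD (matrix values are illustrative; note `svd` returns V^T, so its rows are the v_i):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 0.0]])
U, sigma, Vt = np.linalg.svd(A)
# A v_i = sigma_i u_i for each singular value sigma_i > 0.
for i, s in enumerate(sigma):
    assert np.allclose(A @ Vt[i], s * U[:, i])
# Reassemble A = U Sigma V^T.
Sigma = np.zeros_like(A)
np.fill_diagonal(Sigma, sigma)
assert np.allclose(A, U @ Sigma @ Vt)
```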
Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.
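A sketch checking both factorizations and the sign claim (the law of inertia) on an illustrative indefinite symmetric matrix, assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import ldl, eigh

A = np.array([[4.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, 3.0]])        # symmetric, indefinite
L, D, perm = ldl(A)                    # A = L D L^T (D block diagonal)
assert np.allclose(A, L @ D @ L.T)
lam, Q = eigh(A)                       # A = Q Lambda Q^T
assert np.allclose(A, Q @ np.diag(lam) @ Q.T)
# Signs of the eigenvalues in Lambda match the signs of the pivots in D.
assert sorted(np.sign(lam).tolist()) == sorted(np.sign(np.linalg.eigvalsh(D)).tolist())
```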
Toeplitz matrix. Constant down each diagonal = time-invariant (shift-invariant) filter.
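The filter interpretation can be sketched with scipy's `toeplitz` helper: a lower triangular Toeplitz matrix built from an impulse response h acts on a signal exactly like convolution. Both h and x below are illustrative values.

```python
import numpy as np
from scipy.linalg import toeplitz

h = np.array([1.0, 0.5, 0.25])         # impulse response of a causal filter
x = np.array([1.0, 2.0, 3.0])          # input signal
T = toeplitz(h, [h[0], 0.0, 0.0])      # lower triangular, constant down each diagonal
# Every diagonal is constant: T[i, j] == T[i-1, j-1].
for i in range(1, 3):
    for j in range(1, 3):
        assert T[i, j] == T[i - 1, j - 1]
# Applying T is convolution with h (truncated to the signal length).
assert np.allclose(T @ x, np.convolve(h, x)[:3])
```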
Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c_0 + ... + c_{n-1} x^{n-1} with p(x_i) = b_i. V_ij = (x_i)^{j-1} and det V = product of (x_k − x_i) for k > i.
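Both the interpolation property and the determinant product can be verified with numpy's `vander` (points and values below are illustrative; `increasing=True` matches the convention V_ij = x_i^{j-1}):

```python
import numpy as np
from itertools import combinations

x_pts = np.array([1.0, 2.0, 4.0])
V = np.vander(x_pts, increasing=True)      # V[i, j] = x_i ** j
b = np.array([3.0, 6.0, 18.0])
c = np.linalg.solve(V, b)                  # coefficients of p(x) = c0 + c1 x + c2 x^2
for xi, bi in zip(x_pts, b):
    assert np.isclose(np.polyval(c[::-1], xi), bi)   # p(x_i) = b_i
# det V = product of (x_k - x_i) over k > i.
det_formula = np.prod([x_pts[k] - x_pts[i] for i, k in combinations(range(3), 2)])
assert np.isclose(np.linalg.det(V), det_formula)
```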