- 21.21.1: What is the smallest positive real number?
- 21.21.2: Prove by the techniques of this section that 1 + 2 + 3 + ··· + n = 1/2 ...
- 21.21.3: Prove by the techniques of this section that n < 2^n for all n ∈ N.
- 21.21.4: Prove by the techniques of this section that n n n for all positive...
- 21.21.5: Prove by the techniques of this section that (2n choose n) ≤ 4^n for all natur...
- 21.21.6: Recall Proposition 13.2 that for all positive integers n we have 1 ...
- 21.21.7: The inequality F_n > 1.6^n is true once n is big enough. Do some calc...
- 21.21.8: Calculate the sum of the first n Fibonacci numbers for n = 0, 1, 2, ...
- 21.21.9: Criticize the following statement and proof: Statement. All natur...
- 21.21.10: In Section 17 we discussed that Pascal's triangle and the triangle o...
- 21.21.11: Prove the generalized Addition Principle by use of the Well-Orderin...
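Exercises 21.21.7 and 21.21.8 both ask for exploratory calculation. A quick sketch (pure Python; the indexing F_0 = 0, F_1 = 1 is an assumption suggested by exercise 21.21.8):

```python
# Exploratory calculations for exercises 21.21.7 and 21.21.8,
# assuming the indexing F_0 = 0, F_1 = 1.

def fib_list(n):
    """Return [F_0, F_1, ..., F_n]."""
    F = [0, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    return F[:n + 1]

F = fib_list(100)

# 21.21.7: find the first n with F_n > 1.6^n; it holds from then on,
# since F_n grows like phi^n with phi = 1.618... > 1.6.
first = next(n for n in range(1, 101) if F[n] > 1.6 ** n)
print("F_n > 1.6^n from n =", first)

# 21.21.8: the sums 0, 1, 2, 4, 7, 12, ... suggest the pattern
# F_0 + F_1 + ... + F_n = F_{n+2} - 1.
for n in range(10):
    assert sum(F[:n + 1]) == F[n + 2] - 1
```

The calculation shows why exercise 21.21.7 says "once n is big enough": the inequality fails for a long initial stretch before holding permanently.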
Solutions for Chapter 21: Smallest Counterexample
Full solutions for Mathematics: A Discrete Introduction | 3rd Edition
Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
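A minimal NumPy sketch of this rank test (the rank-1 example matrix and right-hand sides are assumptions, chosen for illustration):

```python
import numpy as np

# Solvability sketch: Ax = b has a solution exactly when appending b
# does not raise the rank, i.e. rank([A b]) = rank(A).
A = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank-1 example matrix
b_good = np.array([[3.0], [6.0]])        # in the column space of A
b_bad = np.array([[3.0], [7.0]])         # not in the column space

def solvable(A, b):
    return np.linalg.matrix_rank(np.hstack([A, b])) == np.linalg.matrix_rank(A)

print(solvable(A, b_good), solvable(A, b_bad))
```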
Basis for V.
Independent vectors v_1, ..., v_d whose linear combinations give each vector in V as v = c_1 v_1 + ··· + c_d v_d. V has many bases; each basis gives unique c's.
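A sketch of the unique-coefficients claim (the 2×2 example basis is an assumption): for a basis matrix B with the basis vectors in its columns, the coordinates c of any v solve B c = v uniquely.

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])     # columns v_1, v_2 form a basis of R^2
v = np.array([3.0, 2.0])

c = np.linalg.solve(B, v)      # the unique c with c_1*v_1 + c_2*v_2 = v
assert np.allclose(B @ c, v)
print(c)
```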
Companion matrix.
Put c_1, ..., c_n in row n and put n − 1 ones just above the main diagonal. Then det(A − λI) = ±(c_1 + c_2 λ + c_3 λ^2 + ··· + c_n λ^(n−1) − λ^n).
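A check of this characteristic polynomial for n = 3 (the constants c_1, c_2, c_3 are assumed values): the eigenvalues of the companion matrix should be the roots of λ^3 − c_3 λ^2 − c_2 λ − c_1.

```python
import numpy as np

c1, c2, c3 = 2.0, -1.0, 3.0
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [c1,  c2,  c3]])   # c's in row n, ones above the diagonal

eigs = np.sort_complex(np.linalg.eigvals(A))
roots = np.sort_complex(np.roots([1.0, -c3, -c2, -c1]))
assert np.allclose(eigs, roots)
```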
Complex conjugate z̄.
z̄ = a − ib for any complex number z = a + ib. Then z z̄ = |z|^2.
Covariance matrix Σ.
When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x − x̄)(x − x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.
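A sketch of the positive-semidefinite claim on simulated data (the sample size and dimension are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))        # 1000 samples of 3 variables

# Sample covariance as the mean of (x - xbar)(x - xbar)^T:
xbar = X.mean(axis=0)
Sigma = (X - xbar).T @ (X - xbar) / len(X)

# Positive semidefinite: all eigenvalues >= 0 (up to roundoff).
assert np.all(np.linalg.eigvalsh(Sigma) >= -1e-12)
```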
Cyclic shift S.
Permutation with S_21 = 1, S_32 = 1, ..., finally S_1n = 1. Its eigenvalues are the n-th roots e^(2πik/n) of 1; eigenvectors are the columns of the Fourier matrix F.
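A numerical check of both claims for n = 8 (an assumed size): the eigenvalues of the cyclic shift are the 8th roots of unity, and each Fourier column is an eigenvector.

```python
import numpy as np

n = 8
S = np.roll(np.eye(n), 1, axis=0)   # S[1,0] = S[2,1] = ... = S[0,n-1] = 1

# Eigenvalues are the n-th roots of unity:
eigs = np.sort_complex(np.linalg.eigvals(S))
roots = np.sort_complex(np.exp(2j * np.pi * np.arange(n) / n))
assert np.allclose(eigs, roots)

# Column k of the Fourier matrix F_jk = exp(2*pi*i*j*k/n) is an eigenvector:
F = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
k = 3
assert np.allclose(S @ F[:, k], np.exp(-2j * np.pi * k / n) * F[:, k])
```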
Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^(−1) A S = Λ = eigenvalue matrix.
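A minimal diagonalization sketch (the 2×2 matrix is an assumed example with two different eigenvalues, so diagonalization is automatic):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])         # eigenvalues 5 and 2: diagonalizable

lam, S = np.linalg.eig(A)          # eigenvectors in the columns of S
Lambda = np.linalg.inv(S) @ A @ S  # S^{-1} A S = Lambda
assert np.allclose(Lambda, np.diag(lam))
```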
Fast Fourier Transform (FFT).
A factorization of the Fourier matrix F_n into ℓ = log_2 n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^(−1) c can be computed with nℓ/2 multiplications. Revolutionary.
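The even/odd splitting behind this factorization can be sketched recursively, with n/2 twiddle-factor multiplications per level. (One caveat: NumPy's `fft` uses the e^(−2πijk/n) sign convention, while the Fourier matrix entry below uses the + sign, so the comparison here follows NumPy's convention.)

```python
import numpy as np

def fft(x):
    """Radix-2 Cooley-Tukey sketch; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    # n/2 twiddle multiplications at each of the log2(n) levels:
    w = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + w * odd, even - w * odd])

x = np.random.default_rng(1).standard_normal(16)
assert np.allclose(fft(x.astype(complex)), np.fft.fft(x))
```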
Fourier matrix F.
Entries F_jk = e^(2πijk/n) give orthogonal columns, F̄^T F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform y_j = Σ c_k e^(2πijk/n).
Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicularity from Ax = 0), with dimensions r and n − r. Applied to A^T: the column space C(A) is the orthogonal complement of N(A^T) in R^m.
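A sketch of the R^n half of the theorem (the rank-1 example matrix is an assumption; the SVD supplies orthonormal bases for the row space and nullspace):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank r = 1, n = 3

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))
row_space = Vt[:r].T                 # orthonormal basis of C(A^T)
null_space = Vt[r:].T                # orthonormal basis of N(A)

# Orthogonal complements: every row-space vector is perpendicular to
# every nullspace vector, and the dimensions add to n.
assert np.allclose(row_space.T @ null_space, 0)
assert row_space.shape[1] + null_space.shape[1] == A.shape[1]
```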
Iterative method.
A sequence of steps intended to approach the desired solution.
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: potential differences (voltage drops) add to zero around any closed loop.
Left nullspace N (AT).
Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.
Lucas numbers L_n.
L_n = 2, 1, 3, 4, ... satisfy L_n = L_(n−1) + L_(n−2) = λ_1^n + λ_2^n, with λ_1, λ_2 = (1 ± √5)/2 from the Fibonacci matrix [[1, 1], [1, 0]]. Compare L_0 = 2 with F_0 = 0.
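A quick check that the recurrence and the eigenvalue formula agree (the number of terms computed is arbitrary):

```python
import numpy as np

# Lucas numbers from the recurrence, starting L_0 = 2, L_1 = 1:
L = [2, 1]
for _ in range(18):
    L.append(L[-1] + L[-2])

# Compare with lam1^n + lam2^n for the Fibonacci-matrix eigenvalues:
lam1, lam2 = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2
for n in range(20):
    assert np.isclose(lam1 ** n + lam2 ** n, L[n])
```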
Network.
A directed graph that has constants c_1, ..., c_m associated with the edges.
Rank r(A).
= number of pivots = dimension of column space = dimension of row space.
Singular Value Decomposition (SVD).
A = UΣV^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with A v_i = σ_i u_i and singular values σ_i > 0. The last columns are orthonormal bases of the nullspaces.
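A sketch of both SVD properties on a small assumed example: A v_i = σ_i u_i for each singular triple, and the three factors reassemble A.

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))

# A v_i = sigma_i u_i for the first r singular triples:
for i in range(r):
    assert np.allclose(A @ Vt[i], s[i] * U[:, i])

# Reassemble A = U Sigma V^T:
Sigma = np.zeros_like(A)
np.fill_diagonal(Sigma, s)
assert np.allclose(U @ Sigma @ Vt, A)
```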
Spectral Theorem A = QAQT.
Real symmetric A has real λ's and orthonormal q's.
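A minimal check of the Spectral Theorem on an assumed 2×2 symmetric example, using `eigh` (which returns real eigenvalues and orthonormal eigenvectors for symmetric input):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # real symmetric

lam, Q = np.linalg.eigh(A)           # real lambdas, orthonormal q's
assert np.allclose(Q.T @ Q, np.eye(2))         # orthonormal columns
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)  # A = Q Lambda Q^T
```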
Symmetric factorizations A = LDLT and A = QAQT.
Signs in Λ = signs in D.
Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t − k).
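A sketch of the stretch-and-shift construction; the Haar wavelet is an assumed choice of mother wavelet w_00, not specified by the entry above:

```python
import numpy as np

def w00(t):
    """Haar mother wavelet (an assumed choice of w_00)."""
    return np.where((0 <= t) & (t < 0.5), 1.0,
                    np.where((0.5 <= t) & (t < 1.0), -1.0, 0.0))

def w(j, k, t):
    """Stretched and shifted wavelet w_jk(t) = w00(2^j t - k)."""
    return w00(2.0 ** j * t - k)

# w(1, 1, .) compresses the support to half width and shifts it to [1/2, 1):
t = np.linspace(0, 1, 9)
print(w(1, 1, t))
```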