- Chapter 1: Speaking Mathematically
- Chapter 2: The Logic of Compound Statements
- Chapter 3: The Logic of Quantified Statements
- Chapter 4: Elementary Number Theory and Methods of Proof
- Chapter 5: Sequences, Mathematical Induction, and Recursion
- Chapter 6: Set Theory
- Chapter 7: Functions
- Chapter 8: Relations
- Chapter 9: Counting and Probability
- Chapter 10: Graphs and Trees
Discrete Mathematics: Introduction to Mathematical Reasoning 1st Edition - Solutions by Chapter
Affine transformation Tv = Av + v_0 = linear transformation plus shift.
Cayley-Hamilton Theorem. p(λ) = det(A − λI) has p(A) = zero matrix.
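The theorem can be checked numerically for a small sample matrix (a minimal sketch; the 2×2 matrix below is an illustrative choice and NumPy is assumed):

```python
import numpy as np

# Illustrative 2x2 symmetric matrix (not from the text).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# For a 2x2 matrix, p(lambda) = det(A - lambda*I) = lambda^2 - tr(A)*lambda + det(A).
tr, det = np.trace(A), np.linalg.det(A)

# Cayley-Hamilton: substituting A for lambda gives the zero matrix.
p_of_A = A @ A - tr * A + det * np.eye(2)
print(np.allclose(p_of_A, 0))  # True
```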
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2) x^T A x − x^T b over growing Krylov subspaces.
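A minimal sketch of the iteration, assuming NumPy; the 2×2 symmetric positive definite system is an illustrative choice, not production code:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Minimal CG sketch for symmetric positive definite A (illustrative only)."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x            # residual = negative gradient of (1/2)x^T A x - x^T b
    p = r.copy()             # first search direction
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p    # next direction, A-conjugate to earlier ones
        rs_old = rs_new
    return x

# Illustrative SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```

In exact arithmetic CG reaches the solution of an n×n system in at most n steps, one per Krylov subspace.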
Dot product = Inner product x^T y = x_1 y_1 + … + x_n y_n.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)_ij = (row i of A) · (column j of B).
Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use Ā^T for complex A.
Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
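A classical Gram-Schmidt sketch of A = QR, assuming NumPy and independent columns (the 3×2 matrix is illustrative):

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt A = QR sketch (assumes independent columns)."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component of column j along earlier q_i
            v -= R[i, j] * Q[:, i]        # subtract it off
        R[j, j] = np.linalg.norm(v)       # convention: diag(R) > 0
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(2)))  # True True
```

Because each q_j only subtracts components along earlier columns, R comes out upper triangular, matching the definition above.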
Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.
Least squares solution x̂.
The vector x̂ that minimizes the error ‖e‖^2 solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
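A small numerical sketch of the normal equations, assuming NumPy; the overdetermined system is illustrative data:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns, A of full column rank.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])

# Normal equations A^T A x_hat = A^T b give the least-squares solution.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# The error e = b - A x_hat is orthogonal to every column of A.
e = b - A @ x_hat
print(np.allclose(A.T @ e, 0))  # True
```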
Left inverse A^+.
If A has full column rank n, then A^+ = (A^T A)^{-1} A^T has A^+ A = I_n.
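A quick check of the left-inverse formula, assuming NumPy (the 3×2 matrix is an illustrative full-column-rank example):

```python
import numpy as np

# A has full column rank n = 2, so A^+ = (A^T A)^{-1} A^T exists.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A_plus = np.linalg.inv(A.T @ A) @ A.T

# A^+ is a left inverse: A^+ A = I_n (A A^+ is only a projection when m > n).
print(np.allclose(A_plus @ A, np.eye(2)))  # True
```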
Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.
Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the (i, j) entry: ℓ_ij = (entry to eliminate) / (jth pivot).
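One elimination step can be sketched as follows, assuming NumPy (the 2×2 matrix is illustrative):

```python
import numpy as np

# Eliminate the (i, j) = (1, 0) entry of an illustrative 2x2 matrix.
A = np.array([[2.0, 1.0],
              [4.0, 5.0]])

l_ij = A[1, 0] / A[0, 0]      # multiplier = (entry to eliminate) / (jth pivot) = 4/2 = 2
A[1, :] -= l_ij * A[0, :]     # subtract l_ij times pivot row j from row i

print(A[1, 0] == 0.0)  # True: the (1, 0) entry is eliminated
```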
Orthogonal subspaces.
Every v in V is orthogonal to every w in W.
Orthonormal vectors q_1, …, q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^{-1} and q_1, …, q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
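The expansion v = Σ (v^T q_j) q_j can be verified with a small sketch, assuming NumPy (the rotated basis of R^2 is an illustrative choice):

```python
import numpy as np

# Orthonormal basis of R^2: the standard basis rotated by 0.3 radians.
q1 = np.array([np.cos(0.3), np.sin(0.3)])
q2 = np.array([-np.sin(0.3), np.cos(0.3)])
Q = np.column_stack([q1, q2])
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: orthonormal columns, Q^T Q = I

# Any v expands as v = sum_j (v^T q_j) q_j.
v = np.array([2.0, -1.0])
recon = (v @ q1) * q1 + (v @ q2) * q2
print(np.allclose(recon, v))  # True
```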
Outer product uv^T.
Column times row = rank one matrix.
Particular solution x_p.
Any solution to Ax = b; often x_p has free variables = 0.
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
Schur complement S = D − C A^{-1} B.
Appears in block elimination on the block matrix [A B; C D].
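Block elimination can be sketched numerically, assuming NumPy (the block sizes and entries below are illustrative):

```python
import numpy as np

# Block matrix [[A, B], [C, D]] with invertible A (illustrative entries).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 3.0]])
D = np.array([[5.0]])

# Schur complement S = D - C A^{-1} B.
S = D - C @ np.linalg.inv(A) @ B

# Block elimination: subtracting C A^{-1} times the top block row
# leaves S in the lower-right block.
M = np.block([[A, B], [C, D]])
E = np.block([[np.eye(2), np.zeros((2, 1))],
              [-C @ np.linalg.inv(A), np.eye(1)]])
print(np.allclose((E @ M)[2:, 2:], S))  # True
```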
Singular Value Decomposition (SVD).
A = U Σ V^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with A v_i = σ_i u_i and singular value σ_i > 0. The last columns are orthonormal bases of the nullspaces.
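The relation A v_i = σ_i u_i can be checked with NumPy's SVD routine (the rank-one matrix below is an illustrative choice):

```python
import numpy as np

# Illustrative rank-one 2x3 matrix: C(A) is one-dimensional.
A = np.outer([1.0, 2.0], [3.0, 0.0, 4.0])

U, s, Vt = np.linalg.svd(A)     # A = U @ diag(s) @ Vt (padded to shape)
r = int(np.sum(s > 1e-12))      # numerical rank = number of positive singular values
print(r)  # 1

# A v_i = sigma_i u_i for the first r singular triples.
for i in range(r):
    print(np.allclose(A @ Vt[i], s[i] * U[:, i]))  # True
```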
Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.