Chapter 1: Fundamentals
Chapter 1: Joy
Chapter 2: Collections
Chapter 2: Speaking (and Writing) of Mathematics
Chapter 3: Counting and Relations
Chapter 3: Definition
Chapter 4: More Proof
Chapter 4: Theorem
Chapter 5: Functions
Chapter 5: Proof
Chapter 6: Probability
Chapter 6: Counterexample
Chapter 7: Number Theory
Chapter 7: Boolean Algebra
Chapter 8: Algebra
Chapter 8: Lists
Chapter 9: Graphs
Chapter 9: Factorial
Chapter 10: Partially Ordered Sets
Chapter 10: Sets I: Introduction, Subsets
Chapter 11: Quantifiers
Chapter 12: Sets II: Operations
Chapter 13: Combinatorial Proof: Two Examples
Chapter 14: Relations
Chapter 15: Equivalence Relations
Chapter 16: Partitions
Chapter 17: Binomial Coefficients
Chapter 18: Counting Multisets
Chapter 19: Inclusion-Exclusion
Chapter 20: Contradiction
Chapter 21: Smallest Counterexample
Chapter 22: Induction
Chapter 23: Recurrence Relations
Chapter 24: Functions
Chapter 25: The Pigeonhole Principle
Chapter 26: Composition
Chapter 27: Permutations
Chapter 28: Symmetry
Chapter 29: Assorted Notation
Chapter 30: Sample Space
Chapter 31: Events
Chapter 32: Conditional Probability and Independence
Chapter 33: Random Variables
Chapter 34: Expectation
Chapter 35: Dividing
Chapter 36: Greatest Common Divisor
Chapter 37: Modular Arithmetic
Chapter 38: The Chinese Remainder Theorem
Chapter 39: Factoring
Chapter 40: Groups
Chapter 41: Group Isomorphism
Chapter 42: Subgroups
Chapter 43: Fermat's Little Theorem
Chapter 44: Public Key Cryptography I: Introduction
Chapter 45: Public Key Cryptography II: Rabin's Method
Chapter 46: Public Key Cryptography III: RSA
Chapter 47: Fundamentals of Graph Theory
Chapter 48: Subgraphs
Chapter 49: Connection
Chapter 50: Trees
Chapter 51: Eulerian Graphs
Chapter 52: Coloring
Chapter 53: Planar Graphs
Chapter 54: Fundamentals of Partially Ordered Sets
Chapter 55: Max and Min
Chapter 56: Linear Orders
Chapter 57: Linear Extensions
Chapter 58: Dimension
Chapter 59: Lattices
Mathematics: A Discrete Introduction, 3rd Edition - Solutions by Chapter
Full solutions for Mathematics: A Discrete Introduction, 3rd Edition
ISBN: 9780840049421
Full step-by-step solutions to problems from all 69 chapters of Mathematics: A Discrete Introduction, 3rd edition (ISBN 9780840049421) were answered by , our top Math solution expert, on 03/15/18, 06:06 PM; more than 14,506 students have viewed the full step-by-step answers.

Change of basis matrix M.
The old basis vectors v_j are combinations Σ_i m_ij w_i of the new basis vectors w_i. The coordinates of c_1 v_1 + ... + c_n v_n = d_1 w_1 + ... + d_n w_n are related by d = Mc. (For n = 2: v_1 = m_11 w_1 + m_21 w_2, v_2 = m_12 w_1 + m_22 w_2.)
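The rule d = Mc can be checked numerically. A minimal sketch for n = 2, using made-up entries for M and c (all values here are hypothetical, chosen only for illustration):

```python
# Toy 2x2 change of basis: columns of M hold the w-coordinates of v1, v2.
M = [[1, 2],
     [3, 4]]

def mat_vec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

c = [5, 6]              # coordinates in the old basis: 5*v1 + 6*v2
d = mat_vec(M, c)       # coordinates of the same vector in the new basis

# Check directly: 5*v1 + 6*v2, expressed in w-coordinates, equals d.
v1 = [M[0][0], M[1][0]]     # w-coordinates of v1 (first column of M)
v2 = [M[0][1], M[1][1]]     # w-coordinates of v2 (second column of M)
direct = [5 * v1[k] + 6 * v2[k] for k in range(2)]
assert direct == d
```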

Full column rank r = n.
Independent columns, N(A) = {0}, no free variables.

Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicularity comes from Ax = 0), with dimensions n - r and r. Applied to A^T: the column space C(A) is the orthogonal complement of N(A^T) in R^m.
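The orthogonality of nullspace and row space can be seen on a small hypothetical example (this matrix is not from the text): any solution of Ax = 0 has zero dot product with every row of A, hence with every combination of rows.

```python
# Toy check: a nullspace vector of A is orthogonal to every row of A.
A = [[1, 2, 3],
     [4, 5, 6]]          # rank 2, so the nullspace in R^3 has dimension 1

x = [1, -2, 1]           # solves Ax = 0: 1-4+3 = 0 and 4-10+6 = 0

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

assert all(dot(row, x) == 0 for row in A)        # x lies in N(A)

# Hence x is orthogonal to any combination of rows, i.e. to C(A^T).
combo = [2 * A[0][k] - 7 * A[1][k] for k in range(3)]
assert dot(combo, x) == 0
```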

GaussJordan method.
Invert A by row operations on [A I] to reach [I A^-1].
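A compact sketch of the method, using exact rational arithmetic so the reduced half comes out as clean integers (the sample matrix is hypothetical):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Invert A by row-reducing [A | I] to [I | A^-1]; raises if singular."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact rational arithmetic.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a nonzero pivot in this column (swap rows if needed).
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row to make the pivot 1, then clear the column.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]       # right half is A^-1

A = [[2, 1], [5, 3]]                    # det = 1
Ainv = gauss_jordan_inverse(A)
assert Ainv == [[3, -1], [-5, 2]]       # the classic 2x2 inverse
```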

Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n - 1)/2 edges between nodes. A tree has only n - 1 edges and no closed loops.
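The two edge counts can be verified directly; a small sketch with n = 6 (an arbitrary choice) listing the complete graph's edges and one particular tree, a path:

```python
from itertools import combinations

def complete_graph_edges(n):
    """All n(n-1)/2 edges of the complete graph on nodes 0..n-1."""
    return list(combinations(range(n), 2))

n = 6
K_n = complete_graph_edges(n)
assert len(K_n) == n * (n - 1) // 2     # 15 edges for n = 6

# A tree on n nodes has exactly n - 1 edges and no closed loops;
# the path 0-1-2-3-4-5 is one such tree.
path = [(i, i + 1) for i in range(n - 1)]
assert len(path) == n - 1
```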

Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.

Inverse matrix AI.
Square matrix with A^-1 A = I and A A^-1 = I. No inverse if det A = 0 (equivalently, rank(A) < n, or Ax = 0 for some nonzero vector x). The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)_ij = C_ji / det A.
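The cofactor formula, including the transpose (C_ji, not C_ij), can be exercised on a hypothetical 2x2 matrix:

```python
from fractions import Fraction

A = [[3, 1],
     [4, 2]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # det = 2, so A is invertible
assert det != 0

# Cofactors C[i][j] of a 2x2 matrix; then (A^-1)_ij = C_ji / det A.
C = [[A[1][1], -A[1][0]],
     [-A[0][1], A[0][0]]]
Ainv = [[Fraction(C[j][i], det) for j in range(2)] for i in range(2)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Verify both A^-1 A = I and A A^-1 = I.
I = [[1, 0], [0, 1]]
assert matmul(Ainv, A) == I and matmul(A, Ainv) == I
```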

Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A - λI) if no eigenvalues are repeated; m(λ) always divides p(λ).
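For a 2x2 matrix with distinct eigenvalues, m = p, and p(A) = 0 is the Cayley-Hamilton theorem: A^2 - trace(A)·A + det(A)·I = 0. A sketch on a hypothetical matrix:

```python
# 2x2 characteristic polynomial: p(lambda) = lambda^2 - tr*lambda + det.
# Cayley-Hamilton says p(A) = 0; with distinct eigenvalues, m = p.
A = [[2, 1],
     [0, 3]]              # triangular: eigenvalues 2 and 3, distinct
tr = A[0][0] + A[1][1]                            # 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]       # 6

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)
# p(A) = A^2 - tr*A + det*I, computed entrywise.
pA = [[A2[i][j] - tr * A[i][j] + det * (i == j) for j in range(2)]
      for i in range(2)]
assert pA == [[0, 0], [0, 0]]

# No degree-1 polynomial kills A (A is not a multiple of I),
# so the minimal polynomial has degree 2 here, i.e. m = p.
assert A[0][1] != 0 or A[0][0] != A[1][1]
```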

Pascal matrix
P_S = pascal(n) = the symmetric matrix with binomial entries C(i+j-2, i-1). P_S = P_L P_U; all contain Pascal's triangle, with det = 1 (see Pascal in the index).
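The factorization P_S = P_L P_U (with P_U = P_L^T, both built from Pascal's triangle) can be checked directly for a small n; a sketch with n = 4:

```python
from math import comb

def pascal_symmetric(n):
    """Symmetric Pascal matrix: entry (i, j) = C(i+j-2, i-1), 1-based i, j."""
    return [[comb(i + j - 2, i - 1) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def pascal_lower(n):
    """Lower triangular Pascal matrix: entry (i, j) = C(i-1, j-1)."""
    return [[comb(i - 1, j - 1) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 4
PS, PL = pascal_symmetric(n), pascal_lower(n)
PU = [list(col) for col in zip(*PL)]      # PU = PL transposed
assert matmul(PL, PU) == PS               # PS = PL * PU

# PL is unit lower triangular, so det PL = det PU = 1, hence det PS = 1.
assert all(PL[i][i] == 1 for i in range(n))
```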

Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.

Rank one matrix A = uv^T ≠ 0.
Column and row spaces = lines cu and cv.
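A quick sketch of the outer product (the vectors u and v are made up): every row of uv^T is a multiple of v, every column a multiple of u, and every 2x2 minor vanishes.

```python
# Rank-one matrix as an outer product u v^T.
u = [1, 2, 3]
v = [4, 5]
A = [[ui * vj for vj in v] for ui in u]       # 3x2 matrix u v^T

# Row i is u[i] * v; column j is v[j] * u.
assert all(A[i] == [u[i] * vj for vj in v] for i in range(3))
assert all([A[i][j] for i in range(3)] == [v[j] * ui for ui in u]
           for j in range(2))

# Every 2x2 minor vanishes, so rank(A) = 1.
assert A[0][0] * A[1][1] - A[0][1] * A[1][0] == 0
```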

Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.

Schur complement S = D - C A^-1 B.
Appears in block elimination on [A B; C D].
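With 1x1 blocks the Schur complement reduces to a scalar, which makes the elimination easy to check; a toy sketch (block values are hypothetical) verifying det M = det A · det S:

```python
from fractions import Fraction

# Block matrix [[A, B], [C, D]] with 1x1 blocks (scalars).
A, B, C, D = 2, 3, 4, 7
S = D - C * Fraction(1, A) * B        # Schur complement: 7 - 4*(1/2)*3 = 1

# Eliminating the (2,1) block leaves S in the (2,2) position,
# so det M = det A * det S.
M = [[A, B],
     [C, D]]
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert detM == A * S
```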

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Spectrum of A = the set of eigenvalues {λ_1, ..., λ_n}.
Spectral radius = max of |λ_i|.
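For a 2x2 matrix the spectrum comes straight from the quadratic formula applied to the characteristic polynomial; a sketch on a hypothetical matrix with real, distinct eigenvalues:

```python
from math import sqrt

# Eigenvalues of a 2x2 matrix from lambda^2 - tr*lambda + det = 0.
A = [[4, 1],
     [2, 3]]
tr = A[0][0] + A[1][1]                         # 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # 10
disc = tr * tr - 4 * det                       # 9 > 0: two real eigenvalues

lam1 = (tr + sqrt(disc)) / 2
lam2 = (tr - sqrt(disc)) / 2
spectrum = {lam1, lam2}
spectral_radius = max(abs(l) for l in spectrum)
assert spectrum == {5.0, 2.0} and spectral_radius == 5.0
```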

Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
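Both properties, constant diagonals and shift invariance, show up in a small sketch (the filter coefficients below are arbitrary): shifting the input signal by one step shifts the output by one step.

```python
# A Toeplitz matrix: entry (i, j) depends only on i - j.
first_col = [1, 2, 0, 0]      # entries below and on the main diagonal
first_row = [1, 3, 0, 0]      # entries on and above it (same (0,0) entry)
n = 4
T = [[first_col[i - j] if i >= j else first_row[j - i] for j in range(n)]
     for i in range(n)]

# Constant down each diagonal: T[i][j] == T[i+1][j+1].
assert all(T[i][j] == T[i + 1][j + 1]
           for i in range(n - 1) for j in range(n - 1))

def apply(T, x):
    """Multiply T by a signal x: applies the filter."""
    return [sum(T[i][j] * x[j] for j in range(n)) for i in range(n)]

# Shift invariance: filtering a delayed input delays the output.
x = [1, 0, 0, 0]
shifted = [0, 1, 0, 0]
y, ys = apply(T, x), apply(T, shifted)
assert ys[1:] == y[:-1]       # the response just moved one step later
```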

Tridiagonal matrix T: t_ij = 0 if |i - j| > 1.
T^-1 has rank 1 above and below the diagonal.
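The rank-1 structure can be seen in the -1, 2, -1 tridiagonal matrix K, whose inverse has the well-known closed form (K^-1)_ij = min(i,j)(n+1-max(i,j))/(n+1) with 1-based indices. A sketch for n = 4, checking the inverse and that every 2x2 minor strictly above the diagonal vanishes:

```python
from fractions import Fraction
from itertools import combinations

n = 4
# The -1, 2, -1 tridiagonal matrix K.
K = [[2 if i == j else -1 if abs(i - j) == 1 else 0 for j in range(n)]
     for i in range(n)]
# Its known inverse: (K^-1)_ij = min(i,j)*(n+1-max(i,j))/(n+1), 1-based.
Kinv = [[Fraction(min(i, j) * (n + 1 - max(i, j)), n + 1)
         for j in range(1, n + 1)] for i in range(1, n + 1)]

# Sanity check: K * Kinv = I.
I = [[sum(K[i][k] * Kinv[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]
assert I == [[int(i == j) for j in range(n)] for i in range(n)]

# "Rank 1 above the diagonal": every 2x2 minor of Kinv lying strictly
# above the main diagonal vanishes (below follows by symmetry).
for r1, r2 in combinations(range(n), 2):
    for c1, c2 in combinations(range(n), 2):
        if r2 < c1:       # the whole 2x2 block is strictly above the diagonal
            minor = (Kinv[r1][c1] * Kinv[r2][c2]
                     - Kinv[r1][c2] * Kinv[r2][c1])
            assert minor == 0
```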

Vector v in Rn.
Sequence of n real numbers v = (v_1, ..., v_n) = point in R^n.