 6.4.1: Is the following code a prefix code? Why or why not? Character m b ...
 6.4.2: Using the code of Exercise 1, decode the string 01101.
 6.4.3: Given the codes Character a e i o u Encoding scheme 00 01 10 110 11...
 6.4.4: Given the codes Character b h q w % Encoding scheme 1000 1001 0 11 ...
 6.4.5: Given the codes Character a p w ( ) Encoding scheme 001 1010 110 11...
 6.4.6: Given the nonprefix codes Character 1 3 5 7 9 Encoding scheme 1 111...
 6.4.7: Write the Huffman codes for a, b, c, and d in the binary tree shown
 6.4.8: Write the Huffman codes for r, s, t, u in the binary tree shown.
 6.4.9: a. Construct the Huffman tree for the following characters and freq...
 6.4.10: a. Construct the Huffman tree for the following characters and freq...
 6.4.11: a. Construct the Huffman tree for the following characters and freq...
 6.4.12: a. Construct the Huffman tree for the following characters and freq...
 6.4.13: Construct the Huffman tree and find the Huffman codes for the follo...
 6.4.14: Construct the Huffman tree and find the Huffman codes for the follo...
 6.4.15: JPEG can achieve various compression levels; the higher the compres...
 6.4.16: Explain why JPEG encoding results in less compression for grayscal...
 6.4.17: Someone does a global substitution on the text file of Exercise 11,...
 6.4.18: Consider the following paragraph. However, in my thoughts I could n...
 6.4.19: Recall the problem posed at the beginning of this chapter. You work...
 6.4.20: In the justification that the Huffman algorithm produces an optimal...
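The exercises above all revolve around one construction: repeatedly merge the two lowest-frequency subtrees until a single tree remains, then read each character's code off the branch labels. A minimal sketch in Python, using a min-heap; the character frequencies below are hypothetical and not taken from any exercise in this chapter:

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build Huffman codes for a {char: frequency} map (greedy min-heap merge)."""
    tie = count()  # tie-breaker so the heap never has to compare dicts
    heap = [(f, next(tie), {ch: ""}) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two lowest-frequency subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in left.items()}   # left branch = 0
        merged.update({ch: "1" + code for ch, code in right.items()})  # right = 1
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

# Hypothetical frequencies (not from any exercise above):
codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 5})
```

Because each character sits at a leaf, no code is a prefix of another, which is exactly the prefix-code property the first exercises ask about; the exact 0/1 labels depend on tie-breaking, but the code lengths do not.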
Solutions for Chapter 6.4: Huffman Codes
Full solutions for Mathematical Structures for Computer Science, 7th Edition
ISBN: 9781429215107
Chapter 6.4: Huffman Codes includes 20 full step-by-step solutions; more than 9615 students have viewed solutions from this chapter. This textbook survival guide covers the book's chapters and their solutions.

Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
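The definition translates directly into code; a small sketch with plain nested lists (the node count and edge list below are made up for illustration):

```python
def adjacency_matrix(n, edges, undirected=True):
    """a[i][j] = 1 when there is an edge from node i to node j, else 0."""
    a = [[0] * n for _ in range(n)]
    for i, j in edges:
        a[i][j] = 1
        if undirected:
            a[j][i] = 1  # edges go both ways, so A equals its transpose
    return a

# Hypothetical 3-node path graph 0 - 1 - 2:
A = adjacency_matrix(3, [(0, 1), (1, 2)])
```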

Affine transformation
T(v) = Av + v_0 = linear transformation plus shift.

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.

Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^{-1}AS = Λ = eigenvalue matrix.

Dimension of vector space
dim(V) = number of vectors in any basis for V.

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
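A sketch of classical Gram-Schmidt as described above, producing orthonormal columns q_j and an upper-triangular R with positive diagonal (columns are plain Python lists here; in floating point, modified Gram-Schmidt or Householder QR is preferred for stability):

```python
import math

def gram_schmidt(cols):
    """Orthonormalize independent columns of A; return (Q, R) with A = QR."""
    n = len(cols)
    q, r = [], [[0.0] * n for _ in range(n)]
    for j, a in enumerate(cols):
        v = list(a)
        for i in range(j):
            r[i][j] = sum(qi * ak for qi, ak in zip(q[i], a))  # component along q_i
            v = [vk - r[i][j] * qk for vk, qk in zip(v, q[i])]  # subtract projection
        r[j][j] = math.sqrt(sum(vk * vk for vk in v))           # convention: diag(R) > 0
        q.append([vk / r[j][j] for vk in v])
    return q, r

# Two independent columns in R^3 (arbitrary example):
Q, R = gram_schmidt([[1, 1, 0], [1, 0, 1]])
```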

Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n - 1)/2 edges between nodes. A tree has only n - 1 edges and no closed loops.

Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.

Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and -).

Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.

Lucas numbers
L_n = 2, 1, 3, 4, 7, ... satisfy L_n = L_{n-1} + L_{n-2} = λ_1^n + λ_2^n, with λ_1, λ_2 = (1 ± √5)/2 from the Fibonacci matrix [[1, 1], [1, 0]]. Compare L_0 = 2 with F_0 = 0.
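The recurrence and the closed form λ_1^n + λ_2^n can be checked against each other numerically; a quick sketch:

```python
import math

def lucas(n):
    """L_0 = 2, L_1 = 1, L_n = L_{n-1} + L_{n-2}."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Closed form: L_n = l1**n + l2**n with l1, l2 = (1 ± sqrt(5))/2
l1 = (1 + math.sqrt(5)) / 2
l2 = (1 - math.sqrt(5)) / 2
```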

Normal equation A^T A x = A^T b.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b - Ax) = 0.
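For the common case of fitting a straight line b ≈ c + d·t, A has columns [1, t] and the normal equations reduce to a 2-by-2 system that can be solved directly; a sketch (the function name and data are illustrative):

```python
def least_squares_line(ts, bs):
    """Fit b ≈ c + d*t by solving the 2x2 normal equations A^T A x = A^T b."""
    m = len(ts)
    # With columns [1, t] in A, the entries of A^T A and A^T b are these sums:
    s_t, s_tt = sum(ts), sum(t * t for t in ts)
    s_b, s_tb = sum(bs), sum(t * b for t, b in zip(ts, bs))
    det = m * s_tt - s_t * s_t          # nonzero when the t's are not all equal
    c = (s_tt * s_b - s_t * s_tb) / det  # Cramer's rule on the 2x2 system
    d = (m * s_tb - s_t * s_b) / det
    return c, d

# Points that happen to lie exactly on b = 1 + 2t:
c, d = least_squares_line([0, 1, 2], [1, 3, 5])
```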

Particular solution x p.
Any solution to Ax = b; often x_p has free variables = 0.

Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.

Pseudoinverse A^+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and A A^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).

Rank r(A)
= number of pivots = dimension of column space = dimension of row space.

Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.

Schur complement S = D - CA^{-1}B.
Appears in block elimination on [[A, B], [C, D]].

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr(AB) = Tr(BA).
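The identity Tr(AB) = Tr(BA) is easy to verify numerically; a small sketch with plain nested lists (the example matrices are arbitrary):

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def trace(M):
    """Sum of diagonal entries."""
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -2]]
```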