- 9.2.1: Let f be a homomorphism from a group [G, ·] to a group [H, +]. Pro...
- 9.2.2: Let f be a homomorphism from the group [Z, +] to the group [Z, +] giv...
- 9.2.3: The function f defined by f(x) = x ·₈ 2 is a homomorphism from [Z,...
- 9.2.4: The function f defined by f(x) = x ·₈ 4 is a homomorphism from [Z1...
- 9.2.5: A function f: Z × Z → Z is defined by f(x, y) = x + y. a. Prove that ...
- 9.2.6: Let F be the set of all functions f: R → R. For f, g ∈ F, let f + g...
- 9.2.7: Let S = {0, 4, 8}. Then [S, +₁₂] is a subgroup of [Z₁₂, +₁₂]. Fin...
- 9.2.8: Let S = {i, (2, 3)}. Then [S, ∘] is a subgroup of the symmetric grou...
- 9.2.9: Consider the canonical parity-check matrix H = [1 1 1 0 1 1 1 0 1 ...
- 9.2.10: Consider the canonical parity-check matrix H = [1 1 0 1 0 1 1 1 0 ...
- 9.2.11: Give an example of a canonical parity-check matrix that will genera...
- 9.2.12: Give a canonical parity-check matrix for a single-error correcting ...
- 9.2.13: Let H be an n × r binary matrix mapping [Z₂ⁿ, +₂] to [Z₂ʳ, +₂] by t...
- 9.2.14: Which of the following are perfect codes? Which are single-error co...
- 9.2.15: Complete the coset leader/syndrome table of Example 28.
- 9.2.16: Use your table from Exercise 15 and the matrix H from Example 28 to...
- 9.2.17: For Exercises 17 and 18, use a canonical parity-check matrix for th...
- 9.2.18: For Exercises 17 and 18, use a canonical parity-check matrix for th...
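Several of the exercises above (9.2.9, 9.2.10, 9.2.15, 9.2.16) revolve around computing syndromes from a canonical parity-check matrix. A minimal sketch, using a hypothetical 6 × 3 matrix H (the exercises' own matrices are truncated above) whose last three rows form the identity block required of the canonical form, shows the mechanics: a received word has syndrome zero exactly when it is a code word.

```python
# Hypothetical canonical parity-check matrix: H maps words in (Z_2)^6
# to syndromes in (Z_2)^3 via x*H mod 2; the bottom 3x3 block is I_3.
H = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]

def syndrome(x, H):
    """Compute x*H over Z_2: a zero syndrome means x is a code word."""
    r = len(H[0])
    return tuple(sum(x[i] * H[i][j] for i in range(len(x))) % 2
                 for j in range(r))

word = [1, 1, 0, 1, 0, 1]          # a valid code word for this H
print(syndrome(word, H))           # (0, 0, 0)
word[2] ^= 1                       # flip one bit
print(syndrome(word, H))           # (1, 0, 1) = row 2 of H: locates the error
```

Flipping bit i changes the syndrome by row i of H, which is why a single-error correcting code needs all rows of H distinct and nonzero.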
Solutions for Chapter 9.2: Coding Theory
Full solutions for Mathematical Structures for Computer Science | 7th Edition
Characteristic equation det(A − λI) = 0.
The n roots are the eigenvalues of A.
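As a quick illustration of this entry, for a 2 × 2 matrix the characteristic equation is just a quadratic, so the eigenvalues come straight from the quadratic formula (the example matrix below is chosen arbitrarily):

```python
import math

# For A = [[a, b], [c, d]], det(A - lam*I) = 0 expands to
# lam^2 - (a + d)*lam + (a*d - b*c) = 0.
a, b, c, d = 2.0, 1.0, 1.0, 2.0
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr**2 - 4 * det)       # assumes real eigenvalues
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(eigs)  # [1.0, 3.0]
```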
Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).
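A tiny numerical check of this entry, with a deliberately singular matrix chosen for illustration: adding any multiple of a nullspace vector to a particular solution still solves Ax = b.

```python
# A is singular (row 2 = 2 * row 1), so solutions form a whole line.
A = [[1, 2], [2, 4]]
b = [3, 6]
xp = [3, 0]        # particular solution: A @ xp = b
xn = [-2, 1]       # nullspace direction: A @ xn = 0

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

t = 5              # any scalar works
x = [xp[j] + t * xn[j] for j in range(2)]
print(matvec(A, x))  # [3, 6] -- still equals b
```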
Complex conjugate z̄ = a − ib for any complex number z = a + ib. Then z z̄ = |z|².
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing ½xᵀAx − xᵀb over growing Krylov subspaces.
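A bare-bones conjugate gradient iteration, sketched in Python (an illustrative version, not the book's exact step list), applied to a small positive definite system:

```python
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def matvec(A, x): return [dot(row, x) for row in A]

def conjugate_gradient(A, b, steps=25):
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual b - A x (x starts at zero)
    p = r[:]                 # first search direction
    for _ in range(steps):
        Ap = matvec(A, p)
        alpha = dot(r, r) / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r_new = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r_new, r_new) < 1e-20:
            break
        beta = dot(r_new, r_new) / dot(r, r)   # keep directions A-conjugate
        p = [rn + beta * pi for rn, pi in zip(r_new, p)]
        r = r_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]    # positive definite example
b = [1.0, 2.0]
print(conjugate_gradient(A, b)) # close to [1/11, 7/11]
```

In exact arithmetic the n-th iterate already minimizes over the full space, so this 2 × 2 system converges in two steps.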
Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det(B_j) / det(A).
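The rule is easy to see on a 2 × 2 system (example values chosen for illustration):

```python
# Cramer's rule: replace column j of A by b, then x_j = det(B_j) / det(A).
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2, 1], [1, 3]]
b = [5, 10]
B1 = [[b[0], A[0][1]], [b[1], A[1][1]]]   # b replaces column 1
B2 = [[A[0][0], b[0]], [A[1][0], b[1]]]   # b replaces column 2
x = [det2(B1) / det2(A), det2(B2) / det2(A)]
print(x)  # [1.0, 3.0]: check 2*1 + 1*3 = 5 and 1*1 + 3*3 = 10
```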
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
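One way to see the ill-conditioning is how fast the determinant collapses toward zero as n grows. A sketch using exact fractions (cofactor expansion is fine at these tiny sizes):

```python
from fractions import Fraction

def hilbert(n):
    """Exact hilb(n): entry (i, j) is 1/(i + j + 1) with 0-based indices."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det(M):
    """Laplace expansion along the first row (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det(hilbert(3)))          # 1/2160
print(float(det(hilbert(5))))   # about 3.7e-12 -- nearly singular already
```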
Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.
Indefinite matrix.
A symmetric matrix with eigenvalues of both signs (+ and −).
Independent vectors v_1, ..., v_k.
No combination c_1 v_1 + ... + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
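Independence can be tested by elimination: the vectors are independent exactly when the rank equals the number of vectors. A small sketch (rank of the rows equals rank of the columns, so the vectors are stored as rows here):

```python
def rank(rows):
    """Rank via Gauss-Jordan elimination on a copy of the rows."""
    rows = [row[:] for row in rows]
    r = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue                       # no pivot in this column
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

v1, v2, v3 = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 2.0]
print(rank([v1, v2, v3]))  # 2, not 3: v3 = v1 + v2, so they are dependent
```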
Jordan form J = M⁻¹AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k, where N_k has 1's on diagonal 1. Each block has one eigenvalue λ_k and one eigenvector.
Left inverse A+.
If A has full column rank n, then A⁺ = (AᵀA)⁻¹Aᵀ has A⁺A = I_n.
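The formula can be checked directly on a tall matrix with two columns, where (AᵀA)⁻¹ is just a 2 × 2 inverse (example matrix chosen for illustration):

```python
def transpose(M): return [list(col) for col in zip(*M)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):
    """Inverse of a 2x2 matrix by the adjugate formula."""
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3x2, full column rank
At = transpose(A)
A_plus = matmul(inv2(matmul(At, A)), At)   # left inverse (A^T A)^{-1} A^T
print(matmul(A_plus, A))                   # I_2, up to rounding
```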
Norm ‖A‖.
The "ℓ² norm" of A is the maximum ratio ‖Ax‖/‖x‖ = σ_max. Then ‖Ax‖ ≤ ‖A‖ ‖x‖ and ‖AB‖ ≤ ‖A‖ ‖B‖ and ‖A + B‖ ≤ ‖A‖ + ‖B‖. Frobenius norm: ‖A‖_F² = Σ Σ a_ij². The ℓ¹ and ℓ∞ norms are the largest column and row sums of |a_ij|.
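The ℓ¹ and ℓ∞ norms are the easiest to compute by hand, as the entry says (example matrix chosen arbitrarily):

```python
# l1 norm = largest absolute column sum; l-infinity norm = largest row sum.
A = [[1.0, -7.0], [2.0, 3.0]]
norm_1 = max(sum(abs(A[i][j]) for i in range(len(A))) for j in range(len(A[0])))
norm_inf = max(sum(abs(x) for x in row) for row in A)
print(norm_1, norm_inf)  # 10.0 8.0
```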
Plane (or hyperplane) in Rⁿ.
Vectors x with aᵀx = 0. The plane is perpendicular to a ≠ 0.
Pseudoinverse A+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A⁺) = N(Aᵀ). A⁺A and AA⁺ are the projection matrices onto the row space and column space. Rank(A⁺) = rank(A).
Rotation matrix.
R = [c −s; s c] rotates the plane by θ, and R⁻¹ = Rᵀ rotates back by −θ. Eigenvalues are e^(iθ) and e^(−iθ); eigenvectors are (1, ±i). c, s = cos θ, sin θ.
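A quick numerical check that Rᵀ really undoes R (θ = π/3 chosen arbitrarily):

```python
import math

theta = math.pi / 3
c, s = math.cos(theta), math.sin(theta)
R = [[c, -s], [s, c]]           # rotates by theta
Rt = [[c, s], [-s, c]]          # R^T = R^{-1}: rotates by -theta

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

v = [1.0, 0.0]
back = matvec(Rt, matvec(R, v))  # rotate forward, then back
print(back)                      # approximately [1.0, 0.0]
```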
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
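The identity Tr AB = Tr BA is easy to verify numerically (matrices chosen arbitrarily):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(M):
    return M[0][0] + M[1][1]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
# AB and BA are different matrices, but their traces agree.
print(trace(matmul(A, B)), trace(matmul(B, A)))  # 55 55
```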
Vector v in Rⁿ.
Sequence of n real numbers v = (v_1, ..., v_n) = point in Rⁿ.
Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t − k).
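The stretch-and-shift rule can be sketched concretely; the Haar mother wavelet below is a hypothetical choice of w_00, since the glossary entry does not fix one:

```python
def w00(t):
    """Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    if 0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1:
        return -1.0
    return 0.0

def w(j, k):
    """w_jk(t) = w_00(2^j * t - k): compressed by 2^j, shifted by k/2^j."""
    return lambda t: w00(2**j * t - k)

w12 = w(1, 2)   # supported on [1.0, 1.5)
print(w12(1.1), w12(1.3), w12(0.2))  # 1.0 -1.0 0.0
```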