Contemporary Abstract Algebra 8th Edition - Solutions by Chapter
Cofactor Cij.
Remove row i and column j; multiply the determinant by (-1)^(i+j).
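The cofactor recipe can be sketched numerically. This is a rough illustration with an assumed 3x3 matrix and a hypothetical `cofactor` helper, not code from the text:

```python
# Hypothetical example: cofactor Cij of a 3x3 matrix A (values assumed).
A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 2]]

def cofactor(A, i, j):
    # Delete row i and column j (0-indexed), take the determinant of the
    # 2x2 minor, and multiply by (-1)^(i+j).
    minor = [[A[r][c] for c in range(3) if c != j]
             for r in range(3) if r != i]
    det_minor = minor[0][0] * minor[1][1] - minor[0][1] * minor[1][0]
    return (-1) ** (i + j) * det_minor

print(cofactor(A, 0, 0))  # minor [[3,1],[1,2]] has det 5, sign +1 -> 5
```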
Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.
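A small check of the commuting property, with example matrices assumed (both are combinations of I and [[0,1],[1,0]], so they share the eigenvectors (1,1) and (1,-1)):

```python
# Hypothetical example: A and B commute, AB = BA (2x2 values assumed).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 2]]
B = [[3, 1], [1, 3]]
print(matmul(A, B) == matmul(B, A))  # True
# A(1,1) = 3*(1,1) and B(1,1) = 4*(1,1): a shared eigenvector.
```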
Cramer's Rule for Ax = b.
Bj has b replacing column j of A; xj = det Bj / det A.
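Cramer's Rule for a 2x2 system can be sketched as follows (system values and the `det2` helper are assumed for illustration):

```python
# Hypothetical 2x2 example of Cramer's Rule for Ax = b.
A = [[2, 1], [1, 3]]
b = [5, 10]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# B_j replaces column j of A by b.
B1 = [[b[0], A[0][1]], [b[1], A[1][1]]]
B2 = [[A[0][0], b[0]], [A[1][0], b[1]]]
x1 = det2(B1) / det2(A)   # det B1 = 5,  det A = 5  -> x1 = 1
x2 = det2(B2) / det2(A)   # det B2 = 15, det A = 5  -> x2 = 3
print(x1, x2)
```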
Determinant |A| = det(A).
Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A||B| and |A^-1| = 1/|A| and |A^T| = |A|. The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n - 1, volume of box = |det(A)|.
Diagonal matrix D.
dij = 0 if i ≠ j. Block-diagonal: zero outside square blocks Dii.
Ellipse (or ellipsoid) x^T Ax = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^-1 y||^2 = y^T (A A^T)^-1 y = 1 displayed by eigshow; axis lengths σi.)
Factorization A = LU.
If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
Fourier matrix F.
Entries Fjk = e^(2πijk/n) give orthogonal columns: conj(F)^T F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform yj = Σ ck e^(2πijk/n).
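The orthogonality of the Fourier matrix columns can be checked numerically. A minimal sketch with an assumed small size n = 4, using Python's cmath:

```python
import cmath

# F_jk = e^(2*pi*i*j*k/n); columns are orthogonal, so conj(F)^T F = n*I.
n = 4
F = [[cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)]
     for j in range(n)]

# (conj(F)^T F)_jk should be n on the diagonal and 0 off it.
G = [[sum(F[m][j].conjugate() * F[m][k] for m in range(n)) for k in range(n)]
     for j in range(n)]
print(all(abs(G[j][k] - (n if j == k else 0)) < 1e-9
          for j in range(n) for k in range(n)))  # True
```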
Markov matrix M.
All mij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If mij > 0, the columns of M^k approach the steady state eigenvector Ms = s > 0.
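The approach to the steady state can be sketched by repeated multiplication (the 2x2 Markov matrix here is an assumed example):

```python
# Hypothetical Markov matrix: columns sum to 1, all entries > 0.
M = [[0.8, 0.3],
     [0.2, 0.7]]
s = [1.0, 0.0]  # any starting probability vector
for _ in range(100):
    s = [M[0][0] * s[0] + M[0][1] * s[1],
         M[1][0] * s[0] + M[1][1] * s[1]]
print(s)  # approaches the steady state (0.6, 0.4), where Ms = s
```

The other eigenvalue is 0.5, so the error shrinks by half at every step.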
Multiplication Ax
= x1(column 1) + ... + xn(column n) = combination of columns.
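The column picture of Ax can be checked with a small assumed example:

```python
# Ax computed as x1*(column 1) + x2*(column 2), values assumed.
A = [[1, 2],
     [3, 4]]
x = [10, 1]
by_columns = [x[0] * A[0][0] + x[1] * A[0][1],
              x[0] * A[1][0] + x[1] * A[1][1]]
print(by_columns)  # [12, 34]: 10*(1,3) + 1*(2,4)
```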
Nullspace N(A)
= All solutions to Ax = 0. Dimension n - r = (# columns) - rank.
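A minimal sketch of a nullspace vector, with an assumed rank-1 matrix so that n - r = 2 - 1 = 1:

```python
# Hypothetical rank-1 example: second row = 2 * first row, so r = 1.
A = [[1, 2],
     [2, 4]]
x = [2, -1]  # spans N(A)
Ax = [A[0][0] * x[0] + A[0][1] * x[1],
      A[1][0] * x[0] + A[1][1] * x[1]]
print(Ax)  # [0, 0]
```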
Every v in V is orthogonal to every w in W.
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S, error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If columns of A = basis for S then P = A(A^T A)^-1 A^T.
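The formula P = A(A^T A)^-1 A^T reduces to P = a a^T / (a^T a) when A has a single column a. A minimal sketch with an assumed line and point:

```python
# Projection onto the line through a = (1, 2); values assumed.
a = [1, 2]
aTa = a[0] * a[0] + a[1] * a[1]  # a^T a = 5
P = [[a[i] * a[j] / aTa for j in range(2)] for i in range(2)]

b = [3, 1]
p = [P[0][0] * b[0] + P[0][1] * b[1],
     P[1][0] * b[0] + P[1][1] * b[1]]
e = [b[0] - p[0], b[1] - p[1]]
print(p)                          # approximately (1, 2): closest point in S
print(e[0] * a[0] + e[1] * a[1])  # approximately 0: error perpendicular to a
```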
Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0 1] for rand and standard normal distribution for randn.
Rotation matrix R = [c -s; s c] rotates the plane by θ and R^-1 = R^T rotates back by -θ. Eigenvalues are e^(iθ) and e^(-iθ), eigenvectors are (1, ±i). c, s = cos θ, sin θ.
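The rotate-and-rotate-back property can be sketched as follows (the angle is an assumed example value):

```python
import math

# Rotate v by theta, then apply R^T (= R^{-1}) to come back.
theta = 0.7
c, s = math.cos(theta), math.sin(theta)
R = [[c, -s],
     [s,  c]]
v = [1.0, 0.0]
Rv = [R[0][0] * v[0] + R[0][1] * v[1],
      R[1][0] * v[0] + R[1][1] * v[1]]
back = [R[0][0] * Rv[0] + R[1][0] * Rv[1],   # rows of R^T are columns of R
        R[0][1] * Rv[0] + R[1][1] * Rv[1]]
print(back)  # approximately [1.0, 0.0]
```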
Singular matrix A.
A square matrix that has no inverse: det(A) = 0.
Skew-symmetric matrix K.
The transpose is -K, since Kij = -Kji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^(Kt) is an orthogonal matrix.
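A sketch of the last claim for the 2x2 case, using the closed form e^(Kt) = rotation by at when K = [[0, -a], [a, 0]] (example values assumed, the closed form replaces the series):

```python
import math

a, t = 2.0, 0.5
K = [[0.0, -a], [a, 0.0]]
print(all(K[i][j] == -K[j][i] for i in range(2) for j in range(2)))  # True

c, s = math.cos(a * t), math.sin(a * t)
E = [[c, -s], [s, c]]  # e^(Kt) for this K: rotation by a*t
# E^T E = I confirms that e^(Kt) is orthogonal.
ETE = [[sum(E[m][i] * E[m][j] for m in range(2)) for j in range(2)]
       for i in range(2)]
print(all(abs(ETE[i][j] - (1 if i == j else 0)) < 1e-12
          for i in range(2) for j in range(2)))  # True
```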
Symmetric matrix A.
The transpose is A^T = A, and aij = aji. A^-1 is also symmetric.
Vandermonde matrix V.
Vc = b gives coefficients of p(x) = c0 + ... + c(n-1) x^(n-1) with p(xi) = bi. Vij = (xi)^(j-1) and det V = product of (xk - xi) for k > i.
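The determinant formula can be checked on a small case. A sketch with three assumed points and a hypothetical `det3` helper:

```python
# 3x3 Vandermonde: V_ij = x_i^(j-1); det V = product of (x_k - x_i), k > i.
xs = [1, 2, 4]
V = [[x ** j for j in range(3)] for x in xs]

def det3(M):
    # Cofactor expansion along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

product = (xs[1] - xs[0]) * (xs[2] - xs[0]) * (xs[2] - xs[1])
print(det3(V), product)  # both 6
```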