 26.26.1: We list several pairs of functions f and g. For each pair, please d...
 26.26.2: Consider functions f and g. Prove that f = g (as sets) if and only ...
 26.26.3: Let A and B be sets. Prove that A = B if and only if id_A = id_B.
 26.26.4: What is the difference between the identity function defined on a s...
 26.26.5: Complete the proof of Proposition 26.8.
 26.26.6: Prove Proposition 26.9.
 26.26.7: Suppose A and B are sets, and f and g are functions with f : A → B ...
 26.26.8: Suppose f : A → B is a bijection. Explain why the following are inc...
 26.26.9: Suppose A, B, and C are sets and f : A → B and g : B → C. Prove the...
 26.26.10: Find a pair of functions f and g, from set A to itself, such that f...
 26.26.11: Let A be a set and f a function with f : A → A. a. Suppose f is one...
 26.26.12: Suppose f : A → A and g : A → A are both bijections. a. Prove or di...
 26.26.13: Let A be a set and let f : A → A. Then f ∘ f is also a function from ...
 26.26.14: For each of the following sequences, find a formula for the nth it...
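The exercises above turn on composing functions and checking when a composition is the identity. A minimal sketch (the finite sets and dictionaries are illustrative assumptions, not taken from the solutions):

```python
# Sketch (not from the text): composing functions and checking that a
# bijection's inverse undoes it, as in the chapter's exercises.

def compose(g, f):
    """Return g after f, i.e. (g o f)(x) = g(f(x))."""
    return lambda x: g(f(x))

# A bijection f : {1,2,3} -> {'a','b','c'}, given as a dict.
f = {1: 'a', 2: 'b', 3: 'c'}
f_inv = {v: k for k, v in f.items()}  # its inverse

gof = compose(f_inv.get, f.get)       # (f^-1 o f) should be the identity on A
print([gof(x) for x in (1, 2, 3)])    # [1, 2, 3]
```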
Solutions for Chapter 26: Composition
Full solutions for Mathematics: A Discrete Introduction  3rd Edition
ISBN: 9780840049421

Back substitution.
Upper triangular systems are solved in reverse order, x_n back to x_1.
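A minimal back-substitution sketch (the 3x3 system is an assumed example): solve Ux = b for upper triangular U, working from the last equation back to the first.

```python
# Back substitution for an upper triangular system Ux = b.

def back_substitute(U, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):            # last equation first: x_n back to x_1
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]           # pivot U[i][i] must be nonzero
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 2.0],
     [0.0, 0.0, 4.0]]
b = [7.0, 7.0, 8.0]
print(back_substitute(U, b))   # [2.0, 1.0, 2.0], and indeed Ux = b for that x
```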

Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
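A small sketch of block multiplication (the 4x4 matrices are assumed examples): cutting both factors into 2x2 blocks, block (0,0) of AB is A00*B00 + A01*B10, matching the ordinary product.

```python
# Block multiplication: the same cuts in the rows of B as in the columns of A.

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def block(M, i, j):           # 2x2 block (i, j) of a 4x4 matrix
    return [row[2*j:2*j+2] for row in M[2*i:2*i+2]]

def addm(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[1, 0, 2, 0], [0, 1, 0, 2], [3, 0, 4, 0], [0, 3, 0, 4]]

# Block (0,0) of AB equals A00*B00 + A01*B10.
blk = addm(matmul(block(A, 0, 0), block(B, 0, 0)),
           matmul(block(A, 0, 1), block(B, 1, 0)))
assert blk == block(matmul(A, B), 0, 0)
```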

Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2) x^T A x - x^T b over growing Krylov subspaces.
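A bare-bones conjugate gradient sketch in pure Python (the 2x2 example matrix and tolerance are assumptions; a real solver would use a library). Each step minimizes exactly along the current search direction, and directions stay A-conjugate.

```python
# Conjugate gradient for symmetric positive definite Ax = b.

def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    x = [0.0] * n
    r = b[:]               # residual b - Ax with x = 0
    p = r[:]               # first search direction
    rr = dot(r, r)
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rr / dot(p, Ap)                 # exact minimizer along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rr_new = dot(r, r)
        if rr_new < tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]   # keep directions conjugate
        rr = rr_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]       # symmetric positive definite
b = [1.0, 2.0]
x = cg(A, b)
# x is approximately [1/11, 7/11], the exact solution of Ax = b
```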

Cyclic shift S.
Permutation with S_21 = 1, S_32 = 1, ..., finally S_1n = 1. Its eigenvalues are the nth roots e^(2πik/n) of 1; eigenvectors are columns of the Fourier matrix F.
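A numerical sketch of this fact (the 4x4 size and the choice k = 1 are assumptions): build the cyclic shift, take a Fourier-matrix column, and check S f = λ f for the root λ = e^(2πik/n).

```python
# The cyclic shift has the nth roots of 1 as eigenvalues, with
# Fourier-matrix columns as eigenvectors.
import cmath

n = 4
S = [[1 if (i - j) % n == 1 else 0 for j in range(n)] for i in range(n)]
# S[1][0] = S_21 = 1, S[2][1] = S_32 = 1, ..., S[0][n-1] = S_1n = 1

k = 1
w = cmath.exp(2j * cmath.pi * k / n)   # eigenvalue e^(2*pi*i*k/n)
f = [w ** (-j) for j in range(n)]      # a column of the Fourier matrix
Sf = [sum(S[i][j] * f[j] for j in range(n)) for i in range(n)]
ok = all(abs(Sf[i] - w * f[i]) < 1e-12 for i in range(n))
print(ok)   # True: S f = lambda f
```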

Diagonalization Λ = S^-1 A S.
Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^-1.

Ellipse (or ellipsoid) x^T A x = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/sqrt(λ). (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^-1 y||^2 = y^T (A A^T)^-1 y = 1 displayed by eigshow; axis lengths σ_i.)

Exponential e^(At) = I + At + (At)^2/2! + ...
has derivative A e^(At); e^(At) u(0) solves u' = Au.
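A series sketch for e^(At) (the example matrix is an assumption): for the nilpotent A = [[0,1],[0,0]] every power past A^1 is zero, so the series stops early and e^(At) = I + At exactly.

```python
# Matrix exponential by its power series, summed term by term.
from math import factorial

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, t, terms=10):
    I = [[1.0, 0.0], [0.0, 1.0]]
    At = [[t * a for a in row] for row in A]
    term, total = I, [row[:] for row in I]
    for k in range(1, terms):
        term = mat_mul(term, At)                         # (At)^k
        total = [[total[i][j] + term[i][j] / factorial(k)
                  for j in range(2)] for i in range(2)]
    return total

A = [[0.0, 1.0], [0.0, 0.0]]       # nilpotent: A^2 = 0
E = expm_series(A, t=2.0)
print(E)   # [[1.0, 2.0], [0.0, 1.0]] -- equals I + At since A^2 = 0
```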

Factorization
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.
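An elimination-to-LU sketch (the 3x3 example, which needs no row exchanges, is an assumption): store each multiplier ℓ_ij in L while subtracting ℓ_ij times row j from row i, then check that L times U recovers A.

```python
# LU factorization by Gaussian elimination, recording multipliers in L.

def lu(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):                        # eliminate below pivot U[j][j]
        for i in range(j + 1, n):
            L[i][j] = U[i][j] / U[j][j]       # multiplier l_ij
            U[i] = [U[i][k] - L[i][j] * U[j][k] for k in range(n)]
    return L, U

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
L, U = lu(A)
LU = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
assert LU == A            # L brings U back to A
```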

Fast Fourier Transform (FFT).
A factorization of the Fourier matrix F_n into ℓ = log_2 n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^-1 c can be computed with nℓ/2 multiplications. Revolutionary.
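A recursive radix-2 sketch of the idea (pure Python, for illustration only): splitting into even- and odd-indexed entries halves the work at each of the log_2 n levels.

```python
# Radix-2 Cooley-Tukey FFT; n must be a power of 2.
import cmath

def fft(x):
    n = len(x)
    if n == 1:
        return x[:]
    even, odd = fft(x[0::2]), fft(x[1::2])    # two half-size transforms
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])

x = [1.0, 2.0, 3.0, 4.0]
X = fft(x)
# X[0] is the sum of the inputs: 10
```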

Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.

Minimal polynomial of A.
The lowest-degree polynomial with m(A) = zero matrix. This is p(λ) = det(A - λI) if no eigenvalues are repeated; always m(λ) divides p(λ).

Nullspace N(A).
All solutions to Ax = 0. Dimension n - r = (# columns) - rank.

Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = L D L^T with diag(D) > 0.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S, then P = A (A^T A)^-1 A^T.
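A rank-one sketch of these properties (the column a and vector b are assumptions): for a single column a, P = a a^T / (a^T a); check that P^2 = P and that the error b - Pb is orthogonal to a.

```python
# Projection onto the line through a single column a.

a = [1.0, 2.0, 2.0]
aa = sum(ai * ai for ai in a)                      # a^T a = 9
P = [[ai * aj / aa for aj in a] for ai in a]       # rank-one projection matrix

b = [3.0, 0.0, 0.0]
p = [sum(P[i][j] * b[j] for j in range(3)) for i in range(3)]   # p = Pb
e = [bi - pi for bi, pi in zip(b, p)]              # error e = b - Pb
print(sum(ei * ai for ei, ai in zip(e, a)))        # ~0: e is perpendicular to a

P2 = [[sum(P[i][k] * P[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
assert all(abs(P2[i][j] - P[i][j]) < 1e-12 for i in range(3) for j in range(3))
```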

Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.

Right inverse A+.
If A has full row rank m, then A^+ = A^T (A A^T)^-1 has A A^+ = I_m.

Semidefinite matrix A.
(Positive) semidefinite: all x^T A x ≥ 0, all λ ≥ 0; A = any R^T R.

Special solutions to As = 0.
One free variable is s_i = 1, other free variables = 0.