
Solutions for Chapter 5.4: Strong Mathematical Induction and the Well-Ordering Principle for the Integers


Full solutions for Discrete Mathematics with Applications | 4th Edition

Textbook: Discrete Mathematics with Applications
Edition: 4
Author: Susanna S. Epp
ISBN: 9780495391326

This textbook survival guide was created for the textbook Discrete Mathematics with Applications, edition 4, written by Susanna S. Epp and associated with the ISBN 9780495391326. All 32 problems in chapter 5.4: Strong Mathematical Induction and the Well-Ordering Principle for the Integers have been answered, and more than 45,481 students have viewed the full step-by-step solutions from this chapter. This expansive textbook survival guide covers the following chapters and their solutions; chapter 5.4 includes 32 full step-by-step solutions.

Key Math Terms and definitions covered in this textbook
  • Condition number

    cond(A) = c(A) = ‖A‖·‖A⁻¹‖ = σmax/σmin. In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to changes in the input.
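
A quick numerical check of this, sketched with NumPy (the nearly singular 2×2 matrix below is a made-up example; `np.linalg.cond` uses the ℓ² norm by default):

```python
import numpy as np

# Hypothetical, nearly singular example matrix
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

# cond(A) = ||A|| * ||A^-1|| = sigma_max / sigma_min  (2-norm by default)
c = np.linalg.cond(A)
sigma = np.linalg.svd(A, compute_uv=False)   # singular values, descending
assert np.isclose(c, sigma[0] / sigma[-1])

print(c)  # about 4e4: a small relative change in b can be amplified that much in x
```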

  • Conjugate Gradient Method.

    A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing ½xᵀAx − xᵀb over growing Krylov subspaces.
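
The recurrence is short enough to sketch in full. This is a minimal, textbook-style CG loop in NumPy, not a production solver; the system below is hypothetical, and A is assumed symmetric positive definite:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Minimize (1/2) x^T A x - x^T b for symmetric positive definite A.
    Iterate x_k lies in the growing Krylov subspace span{b, Ab, ..., A^(k-1) b}."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x                        # residual = negative gradient
    p = r.copy()                         # first search direction
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)       # exact line search along p
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r) # keeps new direction A-conjugate to old ones
        p = r_new + beta * p
        r = r_new
    return x

# Usage on a hypothetical SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
assert np.allclose(A @ x, b)
```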

  • Cross product u × v in R³:

    Vector perpendicular to u and v, with length ‖u‖‖v‖|sin θ| = area of the parallelogram on u and v; u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
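
A small NumPy check of both properties, using made-up vectors:

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])

w = np.cross(u, v)                     # perpendicular to both u and v
assert np.isclose(w @ u, 0) and np.isclose(w @ v, 0)

# ||u x v|| = ||u|| ||v|| |sin(theta)| = area of the parallelogram on u and v
print(np.linalg.norm(w))               # 1.0 for this example
```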

  • Diagonal matrix D.

    dij = 0 if i ≠ j. Block-diagonal: zero outside square blocks Dii.

  • Free variable xi.

    Column i has no pivot in elimination. We can give the n − r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

  • Incidence matrix of a directed graph.

    The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries −1 and +1 in columns i and j.
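
A sketch of the construction for a small, hypothetical directed graph (the node and edge numbering below are my own choices):

```python
import numpy as np

# Hypothetical directed graph on 3 nodes with edges 0->1, 1->2, 0->2
edges = [(0, 1), (1, 2), (0, 2)]
n_nodes = 3

# m by n edge-node incidence matrix: -1 at the start node, +1 at the end node
A = np.zeros((len(edges), n_nodes))
for row, (i, j) in enumerate(edges):
    A[row, i] = -1
    A[row, j] = 1

print(A)
# Each row sums to zero (-1 + 1 per edge), so the all-ones vector is in the nullspace
assert np.allclose(A @ np.ones(n_nodes), 0)
```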

  • Indefinite matrix.

    A symmetric matrix with eigenvalues of both signs (+ and - ).
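
For instance, checking the sign pattern of the eigenvalues with NumPy (the matrix is a standard indefinite example):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])             # symmetric

eigvals = np.linalg.eigvalsh(A)        # eigvalsh is for symmetric matrices
print(eigvals)                         # [-1.  1.]: both signs, so A is indefinite
assert eigvals.min() < 0 < eigvals.max()
```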

  • Krylov subspace Kj(A, b).

    The subspace spanned by b, Ab, ..., A^(j−1)b. Numerical methods approximate A⁻¹b by xj with residual b − Axj in this subspace. A good basis for Kj requires only multiplication by A at each step.
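
A bare-bones sketch of building such a basis by repeated matrix-vector products; in practice the columns would be orthogonalized (e.g. by Arnoldi or Lanczos), and the matrix here is hypothetical:

```python
import numpy as np

def krylov_basis(A, b, j):
    """Columns b, Ab, ..., A^(j-1) b spanning K_j(A, b).
    Only one matrix-vector product with A is needed per step."""
    cols = [b]
    for _ in range(j - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 0.0])
K = krylov_basis(A, b, 2)
print(K)   # K_2(A, b) already spans R^2 here, so x = A^{-1} b lies in it
```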

  • Linear transformation T.

    Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.
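
A numerical spot-check of the linearity condition for T(v) = Av, with made-up vectors and scalars:

```python
import numpy as np

# Matrix multiplication v -> A v is the model linear transformation
A = np.array([[1.0, 2.0], [3.0, 4.0]])
T = lambda v: A @ v

v = np.array([1.0, -1.0])
w = np.array([0.5, 2.0])
c, d = 3.0, -2.0

# Linearity: T(cv + dw) = c T(v) + d T(w)
assert np.allclose(T(c * v + d * w), c * T(v) + d * T(w))
```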

  • Norm

    ‖A‖. The ℓ² norm of A is the maximum ratio ‖Ax‖/‖x‖ = σmax. Then ‖Ax‖ ≤ ‖A‖‖x‖, ‖AB‖ ≤ ‖A‖‖B‖, and ‖A + B‖ ≤ ‖A‖ + ‖B‖. The Frobenius norm satisfies ‖A‖F² = ΣΣ aij². The ℓ¹ and ℓ∞ norms are the largest column and row sums of |aij|.
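
All four norms are available through `np.linalg.norm`; a sketch with a hypothetical matrix:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

two = np.linalg.norm(A, 2)         # l2 norm = sigma_max
fro = np.linalg.norm(A, 'fro')     # Frobenius: sqrt of the sum of all aij^2
one = np.linalg.norm(A, 1)         # largest absolute column sum
inf = np.linalg.norm(A, np.inf)    # largest absolute row sum

# The l2 norm equals the largest singular value
assert np.isclose(two, np.linalg.svd(A, compute_uv=False)[0])
print(two, fro, one, inf)
```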

  • Nullspace matrix N.

    The columns of N are the n − r special solutions to As = 0.
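
SciPy can produce such an N directly; note that `scipy.linalg.null_space` returns an orthonormal basis rather than the special solutions themselves, but its columns span the same nullspace. The 2×3 matrix below is a made-up rank-2 example:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical 2x3 matrix of rank r = 2, so n - r = 1 nullspace vector
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])

N = null_space(A)          # columns span the nullspace: A @ N = 0
print(N.shape)             # (3, 1): one column for the one free variable
assert np.allclose(A @ N, 0)
```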

  • Pascal matrix

    PS = pascal(n) = the symmetric matrix with binomial entries C(i + j − 2, i − 1). PS = PL·PU; all of them contain Pascal's triangle and have det = 1 (see Pascal in the index).
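
SciPy builds all three Pascal matrices; a quick check of the factorization and the determinant:

```python
import numpy as np
from scipy.linalg import pascal

PS = pascal(4)                   # symmetric: entries C(i+j-2, i-1)
PL = pascal(4, kind='lower')     # lower-triangular Pascal matrix
PU = pascal(4, kind='upper')     # upper-triangular Pascal matrix

assert np.allclose(PS, PL @ PU)  # PS = PL PU
print(round(np.linalg.det(PS.astype(float))))   # det = 1
```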

  • Pivot columns of A.

    Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
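
SymPy's `rref` reports the pivot columns directly. In the hypothetical matrix below, column 3 is the sum of columns 1 and 2, so it carries no pivot:

```python
import sympy as sp

# Made-up matrix: third column = first column + second column
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

R, pivot_cols = A.rref()     # reduced row echelon form + pivot column indices
print(pivot_cols)            # (0, 1): the first two columns hold the pivots

# Those columns of the ORIGINAL A form a basis for the column space
basis = A[:, list(pivot_cols)]
print(basis)
```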

  • Pivot.

    The diagonal entry (first nonzero) at the time when a row is used in elimination.

  • Projection matrix P onto subspace S.

    The projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P² = P = Pᵀ, its eigenvalues are 1 or 0, and its eigenvectors lie in S or S⊥. If the columns of A form a basis for S, then P = A(AᵀA)⁻¹Aᵀ.
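
A NumPy sketch of the formula and its characteristic properties, projecting onto a made-up two-dimensional subspace of R³:

```python
import numpy as np

# Columns of A = basis for the subspace S (a hypothetical plane in R^3)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

P = A @ np.linalg.inv(A.T @ A) @ A.T    # P = A (A^T A)^{-1} A^T

# Projection properties: P^2 = P = P^T, eigenvalues 0 or 1
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
print(np.round(np.linalg.eigvalsh(P), 10))   # [0. 1. 1.]

b = np.array([1.0, 2.0, 3.0])
p = P @ b                   # closest point to b in S
e = b - p                   # error is perpendicular to S
assert np.allclose(A.T @ e, 0)
```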

  • Saddle point of f(x1, ..., xn).

    A point where the first derivatives of f are zero and the second-derivative matrix (∂²f/∂xi∂xj = Hessian matrix) is indefinite.
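
As a concrete sketch: for f(x, y) = x² − y², the gradient vanishes at the origin, and the Hessian there is diag(2, −2), which has eigenvalues of both signs:

```python
import numpy as np

# Hessian of f(x, y) = x^2 - y^2 at the critical point (0, 0)
H = np.array([[2.0,  0.0],
              [0.0, -2.0]])

eigs = np.linalg.eigvalsh(H)
print(eigs)                          # [-2.  2.]: indefinite, so (0, 0) is a saddle
assert eigs.min() < 0 < eigs.max()
```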

  • Skew-symmetric matrix K.

    The transpose is −K, since Kij = −Kji. The eigenvalues are pure imaginary, the eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.
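
A check of all three facts with NumPy and SciPy, using the basic 2×2 rotation generator:

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[0.0, -1.0],
              [1.0,  0.0]])

assert np.allclose(K.T, -K)          # skew-symmetric
print(np.linalg.eigvals(K))          # [0.+1.j, 0.-1.j]: pure imaginary

Q = expm(K * 0.7)                    # e^(Kt) is orthogonal for any t
assert np.allclose(Q.T @ Q, np.eye(2))
```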

  • Sum V + W of subspaces.

    Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.

  • Triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖.

    For matrix norms, ‖A + B‖ ≤ ‖A‖ + ‖B‖.

  • Tridiagonal matrix T: tij = 0 if |i − j| > 1.

    T⁻¹ has rank 1 above and below the diagonal.
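
A sketch of this rank-1 structure using the classic −1, 2, −1 second-difference matrix; the closed form for u and v below is specific to this example:

```python
import numpy as np

# Tridiagonal -1, 2, -1 matrix: t_ij = 0 when |i - j| > 1
n = 4
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Tinv = np.linalg.inv(T)

# On and above the diagonal, T^{-1} agrees with a rank-1 matrix u v^T
# (below the diagonal, the same holds by symmetry)
u = np.arange(1, n + 1)             # u_i = i             (1-indexed)
v = (n - np.arange(n)) / (n + 1)    # v_j = (n - j)/(n+1) (0-indexed)
R1 = np.outer(u, v)                 # rank-1 matrix
assert np.allclose(np.triu(Tinv), np.triu(R1))
print(np.round(Tinv, 3))
```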
