 9.1.1: A measurable characteristic is called a(n) _____.
9.1.2: In a(n) _____ _____ _____ every member of the population has an e...
 9.1.3: True or False A discrete variable can assume any real value between...
 9.1.4: If, in a sample, a segment of the population is overrepresented or ...
9.1.5: In 5–16, identify the variable in each experiment and determine whet...
9.1.6: In 5–16, identify the variable in each experiment and determine whet...
9.1.7: In 5–16, identify the variable in each experiment and determine whet...
9.1.8: In 5–16, identify the variable in each experiment and determine whet...
9.1.9: In 5–16, identify the variable in each experiment and determine whet...
9.1.10: In 5–16, identify the variable in each experiment and determine whet...
9.1.11: In 5–16, identify the variable in each experiment and determine whet...
9.1.12: In 5–16, identify the variable in each experiment and determine whet...
9.1.13: In 5–16, identify the variable in each experiment and determine whet...
9.1.14: In 5–16, identify the variable in each experiment and determine whet...
9.1.15: In 5–16, identify the variable in each experiment and determine whet...
9.1.16: In 5–16, identify the variable in each experiment and determine whet...
9.1.17: In 17–22, list some possible ways to choose random samples for each ...
9.1.18: In 17–22, list some possible ways to choose random samples for each ...
9.1.19: In 17–22, list some possible ways to choose random samples for each ...
9.1.20: In 17–22, list some possible ways to choose random samples for each ...
9.1.21: In 17–22, list some possible ways to choose random samples for each ...
9.1.22: In 17–22, list some possible ways to choose random samples for each ...
Solutions for Chapter 9.1: Introduction to Statistics: Data and Sampling
Full solutions for Finite Mathematics, Binder Ready Version: An Applied Approach  11th Edition
ISBN: 9780470876398

Back substitution.
Upper triangular systems are solved in reverse order, xₙ back to x₁.
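A minimal pure-Python sketch of this reverse-order solve (the 2×2 system below is my own illustrative example, not from the text):

```python
def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, working from x_n back to x_1."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))  # known terms
        x[i] = (b[i] - s) / U[i][i]                       # solve for x_i
    return x

# [[2, 1], [0, 3]] x = [5, 6]: last row gives x2 = 2, then x1 = (5 - 2)/2
print(back_substitute([[2.0, 1.0], [0.0, 3.0]], [5.0, 6.0]))  # [1.5, 2.0]
```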

Condition number
cond(A) = c(A) = ‖A‖ ‖A⁻¹‖ = σ_max/σ_min. In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to changes in the input.
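The bound can be checked numerically. The diagonal matrix below is a hypothetical example chosen so cond(A) is easy to read off (for a diagonal matrix the singular values are the absolute diagonal entries):

```python
import math

# A = diag(1, eps): sigma_max = 1, sigma_min = eps, so cond(A) = 1/eps.
eps = 1e-3
cond = 1.0 / eps

b  = [1.0, 1.0]
db = [0.0, 1e-6]                       # small perturbation of b
x  = [b[0] / 1.0, b[1] / eps]          # solve Ax = b (diagonal solve)
dx = [db[0] / 1.0, db[1] / eps]        # solve A(dx) = db

norm = lambda v: math.sqrt(sum(t * t for t in v))
rel_in  = norm(db) / norm(b)           # relative change in the input
rel_out = norm(dx) / norm(x)           # relative change in the output
print(rel_out <= cond * rel_in)        # True: the bound holds
```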

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A − λI) = 0.
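For a 2×2 matrix, det(A − λI) = 0 is a quadratic in λ, so the eigenvalues can be written out directly; a small sketch (assuming real eigenvalues, with an example matrix of my own):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic equation
    lam^2 - (a + d) lam + (ad - bc) = 0 (assumes real roots)."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

print(eig2(2.0, 1.0, 1.0, 2.0))  # (3.0, 1.0)
```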

Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓᵢⱼ in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
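A sketch of elimination without row exchanges, recording the multipliers in L so that A = LU (the example matrix is illustrative and assumes nonzero pivots):

```python
def lu_no_pivot(A):
    """Elimination A = LU with no row exchanges: the multiplier used to
    zero out entry (i, j) below the pivot is stored as L[i][j]."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            L[i][j] = U[i][j] / U[j][j]          # multiplier l_ij
            for k in range(j, n):
                U[i][k] -= L[i][j] * U[j][k]     # row_i -= l_ij * row_j
    return L, U

L, U = lu_no_pivot([[2.0, 1.0], [6.0, 8.0]])
print(L)  # [[1.0, 0.0], [3.0, 1.0]]
print(U)  # [[2.0, 1.0], [0.0, 5.0]]
```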

Ellipse (or ellipsoid) xᵀAx = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ‖x‖ = 1 the vectors y = Ax lie on the ellipse ‖A⁻¹y‖² = yᵀ(AAᵀ)⁻¹y = 1 displayed by eigshow; axis lengths σᵢ.)

Factorization
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓᵢⱼ (and ℓᵢᵢ = 1) brings U back to A.

Fibonacci numbers
0, 1, 1, 2, 3, 5, ... satisfy Fₙ = Fₙ₋₁ + Fₙ₋₂ = (λ₁ⁿ − λ₂ⁿ)/(λ₁ − λ₂). Growth rate λ₁ = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [[1, 1], [1, 0]].
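The closed form can be verified numerically; a short sketch (rounding compensates for floating-point error in the powers of the eigenvalues):

```python
import math

def fib_binet(n):
    """F_n = (l1^n - l2^n) / (l1 - l2), with l1, l2 the eigenvalues
    (1 +/- sqrt(5)) / 2 of the Fibonacci matrix [[1, 1], [1, 0]]."""
    s5 = math.sqrt(5.0)
    l1, l2 = (1 + s5) / 2, (1 - s5) / 2
    return round((l1 ** n - l2 ** n) / (l1 - l2))

print([fib_binet(n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```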

Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A⁻¹].
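A minimal sketch of this procedure (no pivoting, so it assumes the pivots stay nonzero; the diagonal example matrix keeps the arithmetic exact):

```python
def gauss_jordan_inverse(A):
    """Row-reduce the augmented block [A | I] to [I | A^{-1}]."""
    n = len(A)
    M = [A[i][:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i in range(n)]
    for j in range(n):
        p = M[j][j]
        M[j] = [v / p for v in M[j]]                   # scale pivot row to 1
        for i in range(n):
            if i != j:
                f = M[i][j]
                M[i] = [a - f * b for a, b in zip(M[i], M[j])]  # clear column
    return [row[n:] for row in M]                      # right block is A^{-1}

print(gauss_jordan_inverse([[2.0, 0.0], [0.0, 4.0]]))
# [[0.5, 0.0], [0.0, 0.25]]
```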

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column qⱼ of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
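A sketch of classical Gram-Schmidt on a short list of column vectors (the input vectors are my own illustrative example):

```python
import math

def gram_schmidt(cols):
    """Classical Gram-Schmidt: Q gets orthonormal columns, R is upper
    triangular with positive diagonal, and A = QR column by column."""
    Q, R = [], [[0.0] * len(cols) for _ in cols]
    for j, a in enumerate(cols):
        v = a[:]
        for i, q in enumerate(Q):
            R[i][j] = sum(qk * ak for qk, ak in zip(q, a))   # projection
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]  # subtract it
        R[j][j] = math.sqrt(sum(vk * vk for vk in v))        # positive diag
        Q.append([vk / R[j][j] for vk in v])
    return Q, R

Q, R = gram_schmidt([[3.0, 4.0], [2.0, 1.0]])
print(sum(a * b for a, b in zip(Q[0], Q[1])))  # ~0: columns are orthogonal
```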

Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.

Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.

Normal matrix.
If NNᵀ = NᵀN, then N has orthonormal (complex) eigenvectors.

Orthonormal vectors q₁, ..., qₙ.
Dot products are qᵢᵀqⱼ = 0 if i ≠ j and qᵢᵀqᵢ = 1. The matrix Q with these orthonormal columns has QᵀQ = I. If m = n then Qᵀ = Q⁻¹ and q₁, ..., qₙ is an orthonormal basis for Rⁿ: every v = Σⱼ (vᵀqⱼ)qⱼ.

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Reflection matrix (Householder) Q = I − 2uuᵀ.
Unit vector u is reflected to Qu = −u. All x in the mirror plane uᵀx = 0 have Qx = x. Notice Qᵀ = Q⁻¹ = Q.
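Because Q = I − 2uuᵀ, the product Qx can be formed without building the matrix: Qx = x − 2(uᵀx)u. A small sketch with an illustrative mirror:

```python
def reflect(u, x):
    """Apply the Householder reflection Q = I - 2 u u^T (u a unit vector)
    to x without forming Q: Qx = x - 2 (u^T x) u."""
    c = 2 * sum(ui * xi for ui, xi in zip(u, x))   # 2 (u^T x)
    return [xi - c * ui for ui, xi in zip(u, x)]

u = [1.0, 0.0]                 # mirror is the plane x1 = 0
print(reflect(u, u))           # [-1.0, 0.0]: u goes to -u
print(reflect(u, [0.0, 5.0]))  # [0.0, 5.0]: in-mirror vector is unchanged
```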

Right inverse A+.
If A has full row rank m, then A⁺ = Aᵀ(AAᵀ)⁻¹ has AA⁺ = Iₘ.
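For a 1×n matrix A, AAᵀ is a scalar, so A⁺ can be written out directly; a sketch with a hypothetical 1×2 example:

```python
def right_inverse_1x2(a, b):
    """Right inverse of the 1x2 matrix A = [a, b] (full row rank 1):
    A+ = A^T (A A^T)^{-1}, where A A^T is the scalar a^2 + b^2."""
    s = a * a + b * b
    return [a / s, b / s]          # the 2x1 column A^T / (A A^T)

ap = right_inverse_1x2(3.0, 4.0)
print(3.0 * ap[0] + 4.0 * ap[1])   # A A+ = I_1, i.e. ~1.0
```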

Skew-symmetric matrix K.
The transpose is −K, since Kᵢⱼ = −Kⱼᵢ. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.
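For the 2×2 case K = [[0, −1], [1, 0]], the exponential e^(Kt) is the plane rotation by angle t, which is orthogonal; a sketch checking one column of QᵀQ = I:

```python
import math

def exp_skew(t):
    """e^(Kt) for K = [[0, -1], [1, 0]]: rotation by angle t,
    an orthogonal matrix."""
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

Q = exp_skew(0.7)
# (Q^T Q)[0][0] = cos^2 t + sin^2 t
print(Q[0][0] ** 2 + Q[1][0] ** 2)  # ~1.0: the column is a unit vector
```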

Standard basis for Rn.
Columns of the n by n identity matrix (written i, j, k in R³).

Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.