 9.5.1: The range, variance, and standard deviation measure the _____ of th...
 9.5.2: True or False A disadvantage of using the range to measure dispersi...
 9.5.3: The Empirical Rule states that if the distribution of the data is b...
 9.5.4: When calculating the sample standard deviation for n items, we divi...
 9.5.5: True or False The standard deviation is preferred to the variance b...
 9.5.6: True or False Chebychev's theorem is used to estimate the probabilit...
 9.5.7: In 7–12, compute the standard deviation for each set of sample data....
 9.5.8: In 7–12, compute the standard deviation for each set of sample data....
 9.5.9: In 7–12, compute the standard deviation for each set of sample data....
 9.5.10: In 7–12, compute the standard deviation for each set of sample data....
 9.5.11: In 7–12, compute the standard deviation for each set of sample data....
 9.5.12: In 7–12, compute the standard deviation for each set of sample data....
 9.5.13: In 13 and 14, calculate the mean and the standard deviation of the ...
 9.5.14: In 13 and 14, calculate the mean and the standard deviation of the ...
 9.5.15: Lightbulb Life A sample of 6 lightbulbs was chosen and their lifeti...
 9.5.16: Aptitude Scores A sample of 25 applicants for admission to Midweste...
 9.5.17: Baseball Players During 2010 spring training, the ages of the 40 me...
 9.5.18: Baseball Players During 2010 spring training, the ages of the 40 me...
 9.5.19: Mother's Age The following data give the number of births in the Uni...
 9.5.20: Charge Accounts A department store takes a sample of its customer c...
 9.5.21: Earthquakes The data below list the number of earthquakes recorded ...
 9.5.22: Earthquakes The data below list the number of earthquakes recorded ...
 9.5.23: Licensed Drivers in Tennessee Refer to the data in 13, Exercise 9.3...
 9.5.24: Licensed Drivers in Hawaii Refer to the data provided in 14, Exerci...
 9.5.25: Undergraduate Tuition 2006–2007 Refer to the data provided in 15, Ex...
 9.5.26: Birth Rates Refer to the data provided in 18, Exercise 9.3. (a) Are...
 9.5.27: The Empirical Rule One measure of intelligence is the Stanford–Bine...
 9.5.28: The Empirical Rule SAT Math scores have a bell-shaped distribution ...
 9.5.29: The Empirical Rule The weight, in grams, of the pair of kidneys in ...
 9.5.30: The Empirical Rule The distribution of the length of bolts has a ...
 9.5.31: Chebychev's Theorem Suppose that an experiment with numerical outcom...
 9.5.32: Cost of Meat A survey reveals that the mean price for a pound of be...
 9.5.33: Quality Control A watch company determines that each box of 500 wat...
 9.5.34: Sales The average sale at a department store is $51.25, with a stan...
 9.5.35: Annual Births The table gives the number of live births in the Unit...
 9.5.36: Fishing The number of salmon caught in each of two rivers over the ...
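The computational core of Exercises 7–12 is the sample standard deviation: divide the sum of squared deviations by n − 1 (not n, per Exercise 9.5.4), then take the square root. A minimal sketch in Python (the function name `sample_std` and the data values are my own illustration, not from the text):

```python
import math

def sample_std(data):
    """Sample standard deviation: squared deviations divided by n - 1."""
    n = len(data)
    mean = sum(data) / n
    variance = sum((x - mean) ** 2 for x in data) / (n - 1)
    return math.sqrt(variance)

# Illustrative data set with mean 5; sum of squared deviations is 32
data = [2, 4, 4, 4, 5, 5, 7, 9]
print(sample_std(data))  # about 2.138, i.e. sqrt(32 / 7)
```

Dividing by n − 1 rather than n makes the sample variance an unbiased estimate of the population variance.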
Solutions for Chapter 9.5: Measures of Dispersion
Full solutions for Finite Mathematics, Binder Ready Version: An Applied Approach, 11th Edition
ISBN: 9780470876398

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.

Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det(B_j) / det(A).
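As a quick illustration of Cramer's rule for a 2 by 2 system (the helper names `det2` and `cramer2` are my own):

```python
def det2(M):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cramer2(A, b):
    """Solve a 2x2 system Ax = b via Cramer's rule: x_j = det(B_j) / det(A)."""
    d = det2(A)
    if d == 0:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    B1 = [[b[0], A[0][1]], [b[1], A[1][1]]]  # b replaces column 1 of A
    B2 = [[A[0][0], b[0]], [A[1][0], b[1]]]  # b replaces column 2 of A
    return [det2(B1) / d, det2(B2) / d]

# 2x + y = 5 and x + 3y = 10 have the solution x = 1, y = 3
print(cramer2([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```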

Dimension of vector space
dim(V) = number of vectors in any basis for V.

Free columns of A.
Columns without pivots; these are combinations of earlier columns.

Free variable x_i.
Column i has no pivot in elimination. We can give the n − r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Inverse matrix A^-1.
Square matrix with A^-1 A = I and A A^-1 = I. No inverse if det A = 0, rank(A) < n, or Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)_ij = C_ji / det A.

Jordan form J = M^-1 A M.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). Each block J_k is λ_k I_k + N_k, where N_k has 1's on the superdiagonal. Each block has one eigenvalue λ_k and one eigenvector.

Kronecker product (tensor product) A ⊗ B.
Blocks are a_ij B; eigenvalues are λ_p(A) λ_q(B).
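The block structure a_ij B can be sketched directly for plain lists-of-rows matrices (the helper name `kron` is my own):

```python
def kron(A, B):
    """Kronecker product A (x) B: block (i, j) of the result is a_ij * B."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

A = [[1, 2],
     [0, 1]]
B = [[0, 1],
     [1, 0]]
# Top-left block is 1*B, top-right block is 2*B, and so on
print(kron(A, B))  # [[0, 1, 0, 2], [1, 0, 2, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
```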

Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
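For the simple case of fitting a line y = c + m x, the normal equations A^T A x̂ = A^T b form a 2 by 2 system that can be solved in closed form. A sketch (the function name `lstsq_line` is my own):

```python
def lstsq_line(xs, ys):
    """Fit y = c + m*x by solving the normal equations A^T A xhat = A^T b,
    where A has columns [1, x] and b is the vector of y-values."""
    n = len(xs)
    # Entries of the 2x2 matrix A^T A and the vector A^T b
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    c = (sy * sxx - sx * sxy) / det
    m = (n * sxy - sx * sy) / det
    return c, m

c, m = lstsq_line([0, 1, 2], [1, 3, 5])  # collinear data: exact fit y = 1 + 2x
print(c, m)  # 1.0 2.0
```

With collinear data the error e is zero; otherwise e = b − A x̂ is perpendicular to both columns of A.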

Nilpotent matrix N.
Some power of N is the zero matrix: N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
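A concrete check with a strictly triangular 3 by 3 matrix, whose cube is zero (the helper name `matmul` is my own):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Triangular with zero diagonal, so N is nilpotent: N^3 = 0
N = [[0, 1, 2],
     [0, 0, 3],
     [0, 0, 0]]
N2 = matmul(N, N)
N3 = matmul(N2, N)
print(N3)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```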

Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^-1 and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
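A small numerical check of these dot-product identities and of the expansion v = Σ (v^T q_j) q_j, using two orthonormal vectors in R^2 (the vectors and helper `dot` are my own example):

```python
import math

s = 1 / math.sqrt(2)
q1 = [s, s]    # orthonormal pair in R^2
q2 = [s, -s]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# q_i^T q_j = 0 for i != j and q_i^T q_i = 1 (up to rounding)
print(round(dot(q1, q2), 12), round(dot(q1, q1), 12))  # 0.0 1.0

# The expansion v = sum over j of (v^T q_j) q_j recovers v
v = [3.0, 4.0]
recon = [dot(v, q1) * q1[i] + dot(v, q2) * q2[i] for i in range(2)]
print([round(x, 12) for x in recon])  # [3.0, 4.0]
```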

Particular solution x_p.
Any solution to Ax = b; often x_p has free variables = 0.

Pascal matrix P_S.
P_S = pascal(n) = the symmetric matrix with binomial entries (i + j − 2 choose i − 1). P_S = P_L P_U; all contain Pascal's triangle with det = 1 (see Pascal in the index).

Projection matrix P onto subspace S.
The projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, and eigenvectors are in S or S^⊥. If the columns of A are a basis for S, then P = A (A^T A)^-1 A^T.
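For a one-column A = a, the formula P = A (A^T A)^-1 A^T reduces to P = a a^T / (a^T a), so Pb = a (a^T b) / (a^T a). A sketch (the helper name `project_onto_column` and the vectors are my own):

```python
def project_onto_column(a, b):
    """Project b onto the line through a: p = a * (a^T b) / (a^T a).
    This is the rank-one case of P = A (A^T A)^-1 A^T."""
    coeff = sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)
    return [coeff * ai for ai in a]

a = [1, 1, 1]
b = [1, 2, 6]
p = project_onto_column(a, b)
print(p)  # [3.0, 3.0, 3.0], the closest point to b on the line through a

# The error e = b - p is perpendicular to a
e = [bi - pi for bi, pi in zip(b, p)]
print(sum(ei * ai for ei, ai in zip(e, a)))  # 0.0
```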

Pseudoinverse A^+ (Moore–Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and A A^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).

Singular matrix A.
A square matrix that has no inverse: det(A) = 0.

Solvable system Ax = b.
The right side b is in the column space of A.

Spectrum of A = the set of eigenvalues {λ_1, ..., λ_n}.
Spectral radius = max of |λ_i|.

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Transpose matrix A^T.
Entries (A^T)_ij = A_ji. A^T is n by m, and A^T A is square, symmetric, and positive semidefinite. The transposes of AB and A^-1 are B^T A^T and (A^T)^-1.