- 13-5.1: What questions are you trying to answer?
- 13-5.2: What nonparametric test would you use to find the answer?
- 13-5.3: What are the hypotheses?
- 13-5.4: Select a significance level and run the test. What is the H value?
- 13-5.5: What is your conclusion?
- 13-5.6: What is the corresponding parametric test?
- 13-5.7: What assumptions would you need to make to conduct this test?
- 13-5.1: Calories in Cereals Samples of four different cereals show the foll...
- 13-5.2: Self-Esteem and Birth Order A test to measure self-esteem is given ...
- 13-5.3: Lawnmower Costs A researcher wishes to compare the prices of three ...
- 13-5.4: Sodium Content of Microwave Dinners Three brands of microwave dinne...
- 13-5.5: Carbohydrates in Foods A nutritionist wishes to compare the number ...
- 13-5.6: Job Offers for Chemical Engineers A recent study recorded the numbe...
- 13-5.7: Expenditures for Pupils The expenditures in dollars per pupil for s...
- 13-5.8: Printer Costs An electronics store manager wishes to compare the co...
- 13-5.9: Number of Crimes per Week In a large city, the number of crimes per...
- 13-5.10: Amounts of Caffeine in Beverages The amounts of caffeine in a regul...
- 13-5.11: Maximum Speeds of Animals A human is said to be able to reach a max...
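Questions 13-5.1 through 13-5.7 above ask for an H value, which points to the Kruskal-Wallis H test for comparing several independent samples. A minimal sketch using `scipy.stats.kruskal`; the calorie samples below are illustrative stand-ins, not the textbook's data:

```python
from scipy.stats import kruskal

# Hypothetical calorie samples for four cereal brands
# (illustrative data, not the values from the textbook exercise).
brand_a = [112, 120, 135, 118, 125]
brand_b = [110, 118, 131, 108, 117]
brand_c = [150, 160, 155, 148, 162]
brand_d = [121, 130, 127, 119, 126]

# H0: the four populations have identical distributions.
# H1: at least one population differs.
h_stat, p_value = kruskal(brand_a, brand_b, brand_c, brand_d)
print(f"H = {h_stat:.3f}, p = {p_value:.4f}")

alpha = 0.05
if p_value < alpha:
    print("Reject H0: at least one cereal differs in calories.")
else:
    print("Fail to reject H0.")
```

The corresponding parametric test is one-way ANOVA, which additionally assumes normal populations with equal variances.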
Solutions for Chapter 13-5: Nonparametric Statistics
Full solutions for Elementary Statistics: A Step by Step Approach | 7th Edition
See Arithmetic mean.
A method of variable selection in regression that begins with all of the candidate regressor variables in the model and eliminates the insignificant regressors one at a time until only significant regressors remain.
Sequences of independent trials with only two outcomes, generally called “success” and “failure,” in which the probability of success remains constant.
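A simulated sequence of Bernoulli trials, with an arbitrary success probability, shows the long-run success rate settling near p:

```python
import random

random.seed(42)
p = 0.3       # constant probability of success (illustrative choice)
n = 10_000    # number of independent trials

# Each trial has exactly two outcomes: success (1) or failure (0).
trials = [1 if random.random() < p else 0 for _ in range(n)]

observed_rate = sum(trials) / n
print(observed_rate)  # close to p by the law of large numbers
```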
When a factorial experiment is run in blocks and the blocks are too small to contain a complete replicate of the experiment, one can run a fraction of the replicate in each block, but this results in losing information on some effects. These effects are linked with or confounded with the blocks. In general, when two factors are varied such that their individual effects cannot be determined separately, their effects are said to be confounded.
Continuous random variable.
A random variable with an interval (either finite or infinite) of real numbers for its range.
A term used for the quantity $(1/n)\left(\sum_{i=1}^{n} x_i\right)^2$ that is subtracted from $\sum_{i=1}^{n} x_i^2$ to give the corrected sum of squares, defined as $\sum_{i=1}^{n} (x_i - \bar{x})^2$. The correction factor can also be written as $n\bar{x}^2$.
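The correction-factor identity can be checked numerically; a short sketch with arbitrary data:

```python
data = [2.0, 4.0, 6.0, 8.0]
n = len(data)
mean = sum(data) / n

sum_sq = sum(x * x for x in data)             # sum of x_i^2
correction = (sum(data) ** 2) / n             # correction factor (1/n)(sum x_i)^2
corrected_ss = sum((x - mean) ** 2 for x in data)  # corrected sum of squares

# Subtracting the correction factor from the raw sum of squares
# gives the corrected sum of squares:
assert abs((sum_sq - correction) - corrected_ss) < 1e-9
# The correction factor also equals n * xbar^2:
assert abs(correction - n * mean ** 2) < 1e-9
print(correction, corrected_ss)  # 100.0 20.0
```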
A square matrix that contains the variances and covariances among a set of random variables, say, $X_1, X_2, \ldots, X_k$. The main diagonal elements of the matrix are the variances of the random variables and the off-diagonal elements are the covariances between $X_i$ and $X_j$. Also called the variance-covariance matrix. When the random variables are standardized to have unit variances, the covariance matrix becomes the correlation matrix.
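A quick numeric illustration with NumPy (the data and the induced correlation are arbitrary): the diagonal of `np.cov` holds the variances, and standardizing by the standard deviations recovers `np.corrcoef`.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three variables, 1000 observations; couple the first two so the
# off-diagonal covariance is visibly nonzero (illustrative data).
X = rng.standard_normal((1000, 3))
X[:, 1] += 0.5 * X[:, 0]

cov = np.cov(X, rowvar=False)       # 3x3 covariance matrix
corr = np.corrcoef(X, rowvar=False)

# Main diagonal holds the sample variances of each variable.
assert np.allclose(np.diag(cov), X.var(axis=0, ddof=1))

# Dividing by the standard deviations turns the covariance
# matrix into the correlation matrix.
d = np.sqrt(np.diag(cov))
assert np.allclose(cov / np.outer(d, d), corr)
print(cov.shape, round(corr[0, 1], 3))
```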
The value of a statistic corresponding to a stated significance level, as determined from the sampling distribution. For example, if $P(Z \ge z_{0.025}) = P(Z \ge 1.96) = 0.025$, then $z_{0.025} = 1.96$ is the critical value of z at the 0.025 level of significance.
Crossed factors
Another name for factors that are arranged in a factorial experiment.
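The critical value at the 0.025 level can be reproduced with `scipy.stats.norm`, where the inverse CDF (percent-point function) gives the cutoff:

```python
from scipy.stats import norm

alpha = 0.025
# Critical value z_alpha satisfies P(Z >= z_alpha) = alpha
# for a standard normal Z.
z_crit = norm.ppf(1 - alpha)  # inverse CDF
print(round(z_crit, 2))       # 1.96

# Check: the upper-tail probability at the critical value
# recovers alpha.
assert abs(norm.sf(z_crit) - alpha) < 1e-12
```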
Cumulative normal distribution function
The cumulative distribution of the standard normal distribution, often denoted as $\Phi(x)$ and tabulated in Appendix Table II.
Cumulative sum control chart (CUSUM)
A control chart in which the point plotted at time t is the sum of the measured deviations from target for all statistics up to time t.
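The plotted CUSUM statistic follows directly from that definition; the target value and observations below are illustrative:

```python
# Plot at time t the cumulative sum of deviations from the
# target mean (target and data are illustrative).
target = 10.0
observations = [10.1, 9.8, 10.3, 10.6, 10.9, 11.2, 10.8]

cusum = []
running = 0.0
for x in observations:
    running += x - target  # accumulate deviation from target
    cusum.append(running)

print(cusum)  # a sustained shift above target drives the sum upward
```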
An expression sometimes used for nonlinear regression models or polynomial regression models.
A subset of effects in a fractional factorial design that define the aliases in the design.
Discrete uniform random variable
A discrete random variable with a finite range and constant probability mass function.
The amount of variability exhibited by data.
Distribution-free method(s)
Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).
Another name for a cumulative distribution function.
A concept in parameter estimation that uses the variances of different estimators; essentially, an estimator is more efficient than another estimator if it has smaller variance. When estimators are biased, the concept requires modification.
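One way to see relative efficiency is a small simulation: for normally distributed data, both the sample mean and the sample median estimate the center, but the mean has smaller variance and is therefore more efficient. A sketch (sample size and repetition count are arbitrary):

```python
import random
import statistics

random.seed(1)
n, reps = 25, 2000

# Repeatedly draw normal samples and record both estimators.
means, medians = [], []
for _ in range(reps):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    means.append(statistics.fmean(sample))
    medians.append(statistics.median(sample))

var_mean = statistics.pvariance(means)
var_median = statistics.pvariance(medians)
print(var_mean, var_median)

# The mean's sampling variance is smaller: higher efficiency.
assert var_mean < var_median
```

For normal data the theoretical variances are about $1/n$ for the mean and $\pi/(2n)$ for the median, so the simulation should show roughly a 1.57 ratio.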
Error of estimation
The difference between an estimated value and the true value.
Fixed factor (or fixed effect).
In analysis of variance, a factor or effect is considered fixed if all the levels of interest for that factor are included in the experiment. Conclusions are then valid about this set of levels only, although when the factor is quantitative, it is customary to fit a model to the data for interpolating between these levels.
The geometric mean of a set of n positive data values is the nth root of the product of the data values; that is, $\bar{g} = \left(\prod_{i=1}^{n} x_i\right)^{1/n}$.
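A direct computation of the geometric mean, together with the numerically safer log-form equivalent (data values are arbitrary):

```python
import math

data = [2.0, 8.0]
n = len(data)

# nth root of the product of the data values
gmean_product = math.prod(data) ** (1.0 / n)

# Equivalent form that avoids overflow for long lists:
# exp of the mean of the logs.
gmean_logs = math.exp(sum(math.log(x) for x in data) / n)

print(gmean_product)  # 4.0
assert abs(gmean_product - gmean_logs) < 1e-12
```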