- 2-1.1: List five reasons for organizing data into a frequency distribution.
- 2-1.2: Name the three types of frequency distributions, and explain when e...
- 2-1.3: Find the class boundaries, midpoints, and widths for each class. a....
- 2-1.4: How many classes should frequency distributions have? Why should th...
- 2-1.5: Shown here are four frequency distributions. Each is incorrectly co...
- 2-1.6: What are open-ended frequency distributions? Why are they necessary?
- 2-1.7: Trust in Internet Information A survey was taken on how much trust ...
- 2-1.8: Grams per Food Serving The data shown are the number of grams per s...
- 2-1.9: Weights of the NBA's Top 50 Players Listed are the weights of the NB...
- 2-1.10: Stories in the World's Tallest Buildings The number of stories in ea...
- 2-1.11: GRE Scores at Top-Ranked Engineering Schools The average quantitati...
- 2-1.12: Airline Passengers The number of passengers (in thousands) for the ...
- 2-1.13: Ages of Declaration of Independence Signers The ages of the signers...
- 2-1.14: Unclaimed Expired Prizes The number of unclaimed expired prizes (in...
- 2-1.15: Presidential Vetoes The number of total vetoes exercised by the pas...
- 2-1.16: Salaries of College Coaches The data are the salaries (in hundred t...
- 2-1.17: NFL Payrolls The data show the NFL team payrolls (in millions of do...
- 2-1.18: State Gasoline Tax The state gas tax in cents per gallon for 25 sta...
- 2-1.19: JFK Assassination A researcher conducted a survey asking people if ...
Solutions for Chapter 2-1: Organizing Data
Full solutions for Elementary Statistics: A Step by Step Approach, 8th Edition
2^(k−p) fractional factorial experiment
A fractional factorial experiment with k factors tested in a 2^(−p) fraction of the full design, with all factors tested at only two levels (settings) each.
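As a minimal sketch of this idea (an assumed construction, not taken from the text): a 2^(3−1) half-fraction of a 2^3 design can be built by keeping only the runs that satisfy a defining relation such as I = ABC, i.e. runs where the product of the three ±1 factor settings is +1.

```python
# Assumed illustration: build a 2^(3-1) half-fraction with defining relation I = ABC.
from itertools import product

full_design = list(product([-1, 1], repeat=3))      # all 2^3 = 8 runs
half_fraction = [run for run in full_design
                 if run[0] * run[1] * run[2] == 1]  # keep runs with ABC = +1

print(len(full_design), len(half_fraction))  # 8 runs reduced to 2^(3-1) = 4
```

Each factor still appears at only two levels, but only half the runs are performed, which is what the 2^(−p) fraction (here p = 1) refers to.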
α-error (or α-risk)
In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).
Arithmetic mean
The arithmetic mean of a set of numbers x_1, x_2, …, x_n is their sum divided by the number of observations, x̄ = (1/n) ∑_{i=1}^{n} x_i. The arithmetic mean is usually denoted by x̄ and is often called the average.
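The definition above is a one-line computation; a small sketch (with hypothetical data values):

```python
# Arithmetic mean: sum of the observations divided by their count.
def arithmetic_mean(xs):
    return sum(xs) / len(xs)

print(arithmetic_mean([2, 4, 6, 8]))  # (2 + 4 + 6 + 8) / 4 = 5.0
```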
Asymptotic relative efficiency (ARE)
Used to compare hypothesis tests. The ARE of one test relative to another is the limiting ratio of the sample sizes necessary to obtain identical error probabilities for the two procedures.
Attribute control chart
Any control chart for a discrete random variable. See Variables control chart.
Bernoulli trials
Sequences of independent trials with only two outcomes, generally called "success" and "failure," in which the probability of success remains constant.
Bimodal distribution
A distribution with two modes.
Central limit theorem
The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. A necessary and sufficient condition is that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
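An illustrative simulation (not from the text, with assumed parameter choices): the sum of n independent uniform(0, 1) variables has mean n/2 and variance n/12, and for moderate n its distribution is already close to normal, so roughly 95% of the sums fall within two standard deviations of the mean.

```python
# Assumed illustration of the central limit theorem via sums of uniforms.
import random

random.seed(42)
n, reps = 30, 20_000
sums = [sum(random.random() for _ in range(n)) for _ in range(reps)]

sample_mean = sum(sums) / reps          # should be close to n/2 = 15
sd = (n / 12) ** 0.5                    # theoretical standard deviation of a sum
within_2sd = sum(abs(s - n / 2) <= 2 * sd for s in sums) / reps

print(round(sample_mean, 2))            # close to 15
print(round(within_2sd, 2))             # close to 0.95, as the normal shape predicts
```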
Conditional mean
The mean of the conditional probability distribution of a random variable.
Conditional variance
The variance of the conditional probability distribution of a random variable.
Confounding
When a factorial experiment is run in blocks and the blocks are too small to contain a complete replicate of the experiment, one can run a fraction of the replicate in each block, but this results in losing information on some effects. These effects are linked with or confounded with the blocks. In general, when two factors are varied such that their individual effects cannot be determined separately, their effects are said to be confounded.
Consistent estimator
An estimator that converges in probability to the true value of the estimated parameter as the sample size increases.
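A quick illustration of consistency (an assumed simulation, not from the text): the sample mean of uniform(0, 1) draws is a consistent estimator of the true mean 0.5, so its error typically shrinks as the sample size grows.

```python
# Assumed illustration: error of the sample mean shrinks as n increases.
import random

random.seed(0)

def sample_mean_error(n, true_mean=0.5):
    xs = [random.random() for _ in range(n)]  # uniform(0, 1) has mean 0.5
    return abs(sum(xs) / n - true_mean)

errors = [sample_mean_error(n) for n in (100, 10_000, 1_000_000)]
print([round(e, 4) for e in errors])  # errors typically shrink toward 0
```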
Contour plot
A two-dimensional graphic used for a bivariate probability density function that displays curves for which the probability density function is constant.
Contrast
A linear function of treatment means with coefficients that total zero. A contrast is a summary of treatment means that is of interest in an experiment.
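A small numeric sketch of a contrast (the treatment means here are hypothetical): the coefficients (1, 1, −2) sum to zero and compare the first two treatments, taken together, against twice the third.

```python
# Assumed illustration: a contrast is a linear combination of treatment
# means whose coefficients sum to zero.
means = [20.0, 22.0, 30.0]   # hypothetical treatment means
coeffs = [1, 1, -2]          # compares treatments 1 and 2 against treatment 3

assert sum(coeffs) == 0      # required for a valid contrast
contrast = sum(c * m for c, m in zip(coeffs, means))
print(contrast)              # 20 + 22 - 2*30 = -18.0
```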
See Control chart.
Convolution
A method to derive the probability density function of the sum of two independent random variables from an integral (or sum) of probability density (or mass) functions.
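In the discrete case the integral becomes a sum; a minimal sketch (the dice example and helper name are assumed, not from the text):

```python
# Assumed illustration: PMF of a sum of two independent discrete random
# variables via discrete convolution.
def convolve_pmf(p, q):
    """p, q: dicts mapping values to probabilities of independent variables."""
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] = out.get(x + y, 0.0) + px * qy
    return out

die = {k: 1 / 6 for k in range(1, 7)}   # fair six-sided die
two_dice = convolve_pmf(die, die)       # distribution of the sum of two dice
print(two_dice[7])                      # 6/36, the most likely total
```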
Correlation matrix
A square matrix that contains the correlations among a set of random variables, say, X_1, X_2, …, X_k. The main diagonal elements of the matrix are unity and the off-diagonal elements r_ij are the correlations between X_i and X_j.
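A small sketch of these properties (the helper functions and sample columns are assumed for illustration): the diagonal entries come out as 1 and r_ij = r_ji.

```python
# Assumed illustration: computing a correlation matrix from data columns.
from statistics import mean, pstdev

def correlation(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

def correlation_matrix(columns):
    return [[correlation(a, b) for b in columns] for a in columns]

data = [[1, 2, 3, 4],   # X1
        [2, 4, 6, 8],   # X2 = 2 * X1, so r = 1
        [4, 3, 2, 1]]   # X3 reversed, so r = -1
R = correlation_matrix(data)
print(R[0][0], R[0][1], R[0][2])  # approximately 1, 1, -1
```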
Critical value
The value of a statistic corresponding to a stated significance level as determined from the sampling distribution. For example, if P(Z ≥ z_{0.025}) = P(Z ≥ 1.96) = 0.025, then z_{0.025} = 1.96 is the critical value of z at the 0.025 level of significance.
Crossed factors
Another name for factors that are arranged in a factorial experiment.
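The z_{0.025} = 1.96 example can be reproduced from the standard normal sampling distribution; a sketch using the standard library's `NormalDist` (the choice of library is an assumption, not from the text):

```python
# Assumed illustration: the critical value z_{0.025} leaves upper-tail
# probability 0.025 under the standard normal distribution.
from statistics import NormalDist

z = NormalDist()                    # standard normal, mean 0, sd 1
z_crit = z.inv_cdf(1 - 0.025)       # point with upper-tail area 0.025
print(round(z_crit, 2))             # 1.96
print(round(1 - z.cdf(z_crit), 3))  # 0.025
```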
Gaussian distribution
Another name for the normal distribution, based on the strong connection of Karl F. Gauss to the normal distribution; often used in physics and electrical engineering applications.
Generating function
A function that is used to determine properties of the probability distribution of a random variable. See Moment-generating function.