- Chapter 1: Probability
- Chapter 2: Random Variables
- Chapter 3: Joint Distributions
- Chapter 4: Expected Values
- Chapter 5: Limit Theorems
- Chapter 6: Distributions Derived from the Normal Distribution
- Chapter 7: Survey Sampling
- Chapter 8: Estimation of Parameters and Fitting of Probability Distributions
- Chapter 9: Testing Hypotheses and Assessing Goodness of Fit
- Chapter 10: Summarizing Data
- Chapter 11: Comparing Two Samples
- Chapter 12: The Analysis of Variance
- Chapter 13: The Analysis of Categorical Data
- Chapter 14: Linear Least Squares
Mathematical Statistics and Data Analysis 3rd Edition - Solutions by Chapter
β-error (or β-risk)
In hypothesis testing, an error incurred by failing to reject a null hypothesis when it is actually false (also called a type II error).
Analytic study
A study in which a sample from a population is used to make inferences about a future population. Stability needs to be assumed. See Enumerative study.
Asymptotic relative efficiency (ARE)
Used to compare hypothesis tests. The ARE of one test relative to another is the limiting ratio of the sample sizes necessary to obtain identical error probabilities for the two procedures.
Attribute control chart
Any control chart for a discrete random variable. See Variables control chart.
Bimodal distribution
A distribution with two modes.
Bivariate normal distribution
The joint probability distribution of two normal random variables.
Conditional probability distribution
The distribution of a random variable given that the random experiment produces an outcome in an event. The given event might specify values for one or more other random variables.
Conditional variance
The variance of the conditional probability distribution of a random variable.
Confounding
When a factorial experiment is run in blocks and the blocks are too small to contain a complete replicate of the experiment, one can run a fraction of the replicate in each block, but this results in losing information on some effects. These effects are linked with, or confounded with, the blocks. In general, when two factors are varied such that their individual effects cannot be determined separately, their effects are said to be confounded.
Confidence interval
If it is possible to write a probability statement of the form P(L ≤ θ ≤ U) = 1 − α, where L and U are functions of only the sample data and θ is a parameter, then the interval between L and U is called a confidence interval (or a 100(1 − α)% confidence interval). The interpretation is that a statement that the parameter θ lies in this interval will be true 100(1 − α)% of the times that such a statement is made.
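As a sketch of the definition, the usual 100(1 − α)% interval for a mean takes L and U as the sample mean plus or minus a normal quantile times the standard error. The sample values here are made up purely for illustration:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Hypothetical sample data (illustrative only)
data = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1]
alpha = 0.05

z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{alpha/2}, about 1.96
m, s, n = mean(data), stdev(data), len(data)

# L and U depend only on the sample data, as the definition requires
L = m - z * s / sqrt(n)
U = m + z * s / sqrt(n)
print(f"{100 * (1 - alpha):.0f}% CI for the mean: ({L:.3f}, {U:.3f})")
```

For a sample this small one would normally use a t quantile rather than z; the normal quantile is used here only to keep the sketch to the standard library.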
Critical value(s)
The value of a statistic corresponding to a stated significance level, as determined from the sampling distribution. For example, if P(Z ≥ z_0.025) = P(Z ≥ 1.96) = 0.025, then z_0.025 = 1.96 is the critical value of z at the 0.025 level of significance.
Crossed factors
Another name for factors that are arranged in a factorial experiment.
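The critical-value example for z can be checked directly with the standard library's inverse normal CDF:

```python
from statistics import NormalDist

# z_0.025 satisfies P(Z >= z) = 0.025, i.e. P(Z <= z) = 0.975
z = NormalDist().inv_cdf(1 - 0.025)
print(round(z, 2))  # 1.96
```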
Cumulative distribution function
For a random variable X, the function of x defined as P(X ≤ x) that is used to specify the probability distribution.
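As an illustration (not from the text), the CDF of a fair six-sided die shows the definition P(X ≤ x) directly, summing the probability 1/6 over every face at or below x:

```python
def die_cdf(x: float) -> float:
    """CDF of a fair six-sided die: F(x) = P(X <= x)."""
    # count faces k in {1, ..., 6} with k <= x, each with probability 1/6
    return sum(1 for k in range(1, 7) if k <= x) / 6

print(die_cdf(3))    # 0.5
print(die_cdf(0))    # 0.0
print(die_cdf(6.5))  # 1.0
```

The function is a step function here because the die is discrete; for a continuous random variable the CDF is instead the integral of the density up to x.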
Curvilinear regression
An expression sometimes used for nonlinear regression models or polynomial regression models.
Defect concentration diagram
A quality tool that graphically shows the location of defects on a part or in a process.
Density function
Another name for a probability density function.
Design matrix
A matrix that provides the tests that are to be conducted in an experiment.
Distribution function
Another name for a cumulative distribution function.
Error of estimation
The difference between an estimated value and the true value.
Gamma function
A function used in the probability density function of a gamma random variable that can be considered to extend factorials.
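The "extends factorials" remark can be checked with the standard library: Γ(n) = (n − 1)! at positive integers, yet Γ is also defined at non-integer arguments such as 1/2, where Γ(1/2) = √π:

```python
from math import factorial, gamma, pi, sqrt

# At positive integers the gamma function reproduces factorials
print(gamma(5), factorial(4))  # 24.0 24

# Unlike the factorial, it is defined between the integers too
print(abs(gamma(0.5) - sqrt(pi)) < 1e-12)  # True
```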
Geometric random variable
A discrete random variable that is the number of Bernoulli trials until a success occurs.
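As a sketch of the definition, a geometric random variable can be simulated by running Bernoulli(p) trials until the first success and counting the trials; under this convention its mean is 1/p. The parameter p = 0.25 below is an arbitrary choice for illustration:

```python
import random

def geometric(p: float, rng: random.Random) -> int:
    """Number of Bernoulli(p) trials up to and including the first success."""
    n = 1
    while rng.random() >= p:  # failure with probability 1 - p
        n += 1
    return n

rng = random.Random(0)
draws = [geometric(0.25, rng) for _ in range(100_000)]
# the sample mean should be near 1/p = 4
print(sum(draws) / len(draws))
```

Note that some texts instead define the geometric variable as the number of failures before the first success, which shifts the mean to (1 − p)/p.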