Mathematical Statistics with Applications 8th Edition - Solutions by Chapter
Full solutions for Mathematical Statistics with Applications | 8th Edition

- Chapter 1: Introduction
- Chapter 2: Probability
- Chapter 3: Probability Distributions and Probability Densities
- Chapter 4: Mathematical Expectation
- Chapter 5: Special Probability Distributions
- Chapter 6: Special Probability Densities
- Chapter 7: Functions of Random Variables
- Chapter 8: Sampling Distributions
- Chapter 9: Decision Theory
- Chapter 10: Point Estimation
- Chapter 11: Interval Estimation
- Chapter 12: Hypothesis Testing
- Chapter 13: Tests of Hypothesis Involving Means, Variances, and Proportions
- Chapter 14: Regression and Correlation
- Chapter 15: Sums and Products
A formula used to determine the probability of the union of two (or more) events from the probabilities of the events and their intersection(s).
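As a minimal sketch of this addition rule, the probabilities for two hypothetical events on a fair six-sided die (A = "roll is even", B = "roll is at least 4") can be combined with P(A ∪ B) = P(A) + P(B) − P(A ∩ B):

```python
from fractions import Fraction

# Hypothetical events on a fair six-sided die:
# A = "roll is even", B = "roll is at least 4".
sample_space = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {4, 5, 6}

def p(event):
    """Probability of an event under equally likely outcomes."""
    return Fraction(len(event), len(sample_space))

# Addition rule: P(A union B) = P(A) + P(B) - P(A intersect B)
p_union = p(A) + p(B) - p(A & B)
print(p_union)  # 2/3, matching the direct count p(A | B)
```

Using `Fraction` keeps the arithmetic exact, so the rule's result agrees with counting the union's outcomes directly.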
A method of variable selection in regression that begins with all of the candidate regressor variables in the model and eliminates the insignificant regressors one at a time until only significant regressors remain.
An effect that systematically distorts a statistical result or estimate, preventing it from representing the true quantity of interest.
Central limit theorem
The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. A necessary and sufficient condition is that none of the variances of the individual random variables is large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
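A quick simulation sketch of the theorem, using sums of n = 40 independent Uniform(0, 1) draws (an assumed illustrative setup): each uniform has mean 1/2 and variance 1/12, so the sums should look roughly Normal with mean n/2 and variance n/12.

```python
import random
import statistics

# CLT sketch: distribution of sums of n independent Uniform(0,1) draws.
# Each draw has mean 1/2 and variance 1/12, so the sum is roughly
# Normal(n/2, n/12) once n is reasonably large.
random.seed(1)
n, trials = 40, 20_000
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

print(statistics.mean(sums))       # close to n/2 = 20
print(statistics.pvariance(sums))  # close to n/12 (about 3.33)
```

Plotting a histogram of `sums` (not shown) would make the bell shape visible; here the mean and variance alone confirm the normal parameters the theorem predicts.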
Any test of significance based on the chi-square distribution. The most common chi-square tests are (1) testing hypotheses about the variance or standard deviation of a normal distribution and (2) testing goodness of fit of a theoretical distribution to sample data.
Completely randomized design (or experiment)
A type of experimental design in which the treatments or design factors are assigned to the experimental units in a random manner. In designed experiments, a completely randomized design results from running all of the treatment combinations in random order.
Conditional probability mass function
The probability mass function of the conditional probability distribution of a discrete random variable.
If it is possible to write a probability statement of the form P(L ≤ θ ≤ U) = 1 − α, where L and U are functions of only the sample data and θ is a parameter, then the interval between L and U is called a confidence interval (or a 100(1 − α)% confidence interval). The interpretation is that a statement that the parameter θ lies in this interval will be true 100(1 − α)% of the times that such a statement is made.
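A minimal numeric sketch of such an interval, for the common case of a mean with known standard deviation (the sample values below are made-up illustration data, not from the text):

```python
import math

# 95% confidence interval for a mean with known sigma:
# L, U = xbar -/+ z_{alpha/2} * sigma / sqrt(n).
# xbar, sigma, n below are made-up illustration values.
xbar, sigma, n = 5.2, 1.5, 36
z = 1.96                        # z_{0.025} for a 95% interval
half_width = z * sigma / math.sqrt(n)
L, U = xbar - half_width, xbar + half_width
print((L, U))  # an interval that covers the true mean 95% of the time
```

Here L and U depend only on the sample data (and the known sigma), matching the form P(L ≤ θ ≤ U) = 1 − α with α = 0.05.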
An estimator that converges in probability to the true value of the estimated parameter as the sample size increases.
A correction factor used to improve the approximation to binomial probabilities from a normal distribution.
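A short sketch of the correction in action, approximating P(X ≤ 12) for X ~ Binomial(20, 0.5) (illustrative numbers): evaluating the normal CDF at k + 0.5 rather than at k markedly improves the approximation.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Approximate P(X <= k) for X ~ Binomial(n, p); mu = np, sigma = sqrt(np(1-p)).
n, p, k = 20, 0.5, 12
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

exact = sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
plain = phi((k - mu) / sigma)            # no correction
corrected = phi((k + 0.5 - mu) / sigma)  # continuity correction: k + 0.5

print(exact, plain, corrected)  # the corrected value is closer to exact
```

The half-unit shift accounts for approximating a discrete probability mass at k by the area of a continuous density over (k − 0.5, k + 0.5).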
A probability distribution for a continuous random variable.
A linear function of treatment means with coefficients that total zero. A contrast is a summary of treatment means that is of interest in an experiment.
See Control chart.
The value of a statistic corresponding to a stated significance level, as determined from the sampling distribution. For example, if P(Z ≥ z_0.025) = P(Z ≥ 1.96) = 0.025, then z_0.025 = 1.96 is the critical value of z at the 0.025 level of significance.
Crossed factors
Another name for factors that are arranged in a factorial experiment.
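The z_0.025 critical value quoted in the critical-value entry above can be recovered numerically without a table; this sketch inverts the standard normal CDF by bisection (a simple stdlib approach, not the only way):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def upper_critical_value(alpha, lo=0.0, hi=10.0, iters=100):
    """Solve P(Z >= z) = alpha for z by bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if 1.0 - phi(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

z = upper_critical_value(0.025)
print(round(z, 2))  # about 1.96, the critical value at the 0.025 level
```

Because 1 − Φ(z) is monotone decreasing, bisection on [0, 10] converges to the unique solution; 100 halvings reach machine precision.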
Defect concentration diagram
A quality tool that graphically shows the location of defects on a part or in a process.
W. Edwards Deming (1900–1993) was a leader in the use of statistical quality control.
Erlang random variable
A continuous random variable that is the sum of a fixed number of independent, exponential random variables.
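A simulation sketch of this construction (illustrative parameters): summing k independent Exponential(λ) draws gives an Erlang variable whose mean is k/λ and whose variance is k/λ².

```python
import random
import statistics

# Erlang(k, lam) draw = sum of k independent Exponential(lam) draws.
# Its mean is k/lam and its variance is k/lam**2.
random.seed(2)
k, lam, trials = 3, 2.0, 50_000
draws = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(trials)]

print(statistics.mean(draws))       # close to k/lam = 1.5
print(statistics.pvariance(draws))  # close to k/lam**2 = 0.75
```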
Error of estimation
The difference between an estimated value and the true value.
An analysis of how the variance of the random variable that represents the output of a system depends on the variances of the inputs. A formula exists when the output is a linear function of the inputs, and the formula is simplified if the inputs are assumed to be independent.
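For the linear, independent-inputs case just described, the formula is Var(aX₁ + bX₂) = a²Var(X₁) + b²Var(X₂); this sketch checks it by simulation (coefficients and input variances are made-up illustration values):

```python
import random
import statistics

# Linear output Y = a*X1 + b*X2 with independent inputs:
# Var(Y) = a**2 * Var(X1) + b**2 * Var(X2).
# a, b, var1, var2 below are made-up illustration values.
random.seed(3)
a, b = 2.0, -3.0
var1, var2 = 1.0, 4.0
formula = a**2 * var1 + b**2 * var2    # 4*1 + 9*4 = 40

x1 = [random.gauss(0.0, var1**0.5) for _ in range(100_000)]
x2 = [random.gauss(5.0, var2**0.5) for _ in range(100_000)]
y = [a * u + b * v for u, v in zip(x1, x2)]

print(formula, statistics.pvariance(y))  # the two should roughly agree
```

Note the input means drop out entirely; only the variances (scaled by the squared coefficients) propagate to the output.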
In statistical quality control, that portion of a number of units or the output of a process that is defective.