- 5.6.1: In a study comparing various methods of gold plating, 7 printed cir...
- 5.6.2: Five specimens of untreated wastewater produced at a gas field had ...
- 5.6.3: In an experiment involving the breaking strength of a certain type ...
- 5.6.4: A new post-surgical treatment is being compared with a standard tre...
- 5.6.5: The article Differences in Susceptibilities of Different Cell Lines...
- 5.6.6: The article Tibiofemoral Cartilage Thickness Distribution and its C...
- 5.6.7: During the spring of 1999, many fuel storage facilities in Serbia w...
- 5.6.8: The article Dynamics of Insulin Action in Hypertension: Assessment ...
- 5.6.9: The article Toward a Lifespan Metric of Reading Fluency (S. Wallot ...
- 5.6.10: Eight independent measurements were taken of the dissolution rate o...
- 5.6.11: Measurements of the sodium content in samples of two brands of choc...
- 5.6.12: The article Permeability, Diffusion and Solubility of Gases (B. Fla...
- 5.6.13: A computer system administrator notices that computers running a pa...
- 5.6.14: In the article Bactericidal Properties of Flat Surfaces and Nanopar...
- 5.6.15: The article Effects of Waste Glass Additions on the Properties and ...
Solutions for Chapter 5.6: Small-Sample Confidence Intervals for the Difference Between Two Means
Full solutions for Statistics for Engineers and Scientists | 4th Edition
α-error (or α-risk)
In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).
Asymptotic relative efficiency (ARE)
Used to compare hypothesis tests. The ARE of one test relative to another is the limiting ratio of the sample sizes necessary to obtain identical error probabilities for the two procedures.
Average
See Arithmetic mean.
Central limit theorem
The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
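The tendency described above is easy to see by simulation. The sketch below (with illustrative sample sizes not taken from the text) sums n Uniform(0, 1) variables many times and checks that the sample mean and standard deviation of the sums match the theoretical values n/2 and sqrt(n/12):

```python
import random
import statistics

# Minimal central-limit-theorem simulation sketch; n and the trial count
# are illustrative choices, not values from the text.
random.seed(42)   # fixed seed so the run is reproducible
n = 30            # number of Uniform(0, 1) variables summed per trial
trials = 10_000

sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

# Each Uniform(0, 1) has mean 1/2 and variance 1/12, so the sum of n of
# them has mean n/2 and variance n/12.
print(statistics.mean(sums))   # close to n/2 = 15
print(statistics.stdev(sums))  # close to sqrt(n/12) ≈ 1.581
```

A histogram of `sums` would show the familiar bell shape, even though each individual summand is uniform rather than normal.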
Conditional probability mass function
The probability mass function of the conditional probability distribution of a discrete random variable.
Contingency table
A tabular arrangement expressing the assignment of members of a data set according to two or more categories or classification criteria.
Correlation coefficient
A dimensionless measure of the linear association between two variables, usually lying in the interval from −1 to +1, with zero indicating the absence of correlation (but not necessarily the independence of the two variables).
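The definition above can be made concrete by computing the (Pearson) correlation coefficient directly from its formula; the data values below are made up for illustration:

```python
import math

# Minimal sketch: Pearson correlation coefficient computed by hand as
# r = covariance / (sx * sy).
def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))  # exactly linear: r ≈ 1
print(correlation([1, 2, 3, 4], [8, 6, 4, 2]))  # exactly opposite: r ≈ -1
```

Note that r = 0 rules out *linear* association only; two variables can be perfectly dependent (say, y = x²) yet have zero correlation.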
Critical value(s)
The value of a statistic corresponding to a stated significance level as determined from the sampling distribution. For example, if P(Z ≥ z₀.₀₂₅) = P(Z ≥ 1.96) = 0.025, then z₀.₀₂₅ = 1.96 is the critical value of z at the 0.025 level of significance.
Crossed factors
Another name for factors that are arranged in a factorial experiment.
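The z₀.₀₂₅ = 1.96 example above can be recovered from the standard normal inverse CDF; a minimal sketch using the standard library (Python 3.8+):

```python
from statistics import NormalDist

# Minimal sketch: find the upper-tail z critical value for a stated
# significance level alpha, then verify the defining property.
alpha = 0.025
z_crit = NormalDist().inv_cdf(1 - alpha)  # upper-tail cutoff

print(round(z_crit, 2))                          # 1.96
# Defining property P(Z >= z_crit) = alpha:
print(round(1 - NormalDist().cdf(z_crit), 3))    # 0.025
```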
Degrees of freedom
The number of independent comparisons that can be made among the elements of a sample. The term is analogous to the number of degrees of freedom for an object in a dynamic system, which is the number of independent coordinates required to determine the motion of the object.
Density function
Another name for a probability density function.
Discrete uniform random variable
A discrete random variable with a finite range and constant probability mass function.
Dispersion
The amount of variability exhibited by data.
Empirical model
A model to relate a response to one or more regressors or factors that is developed from data obtained from the system.
Error of estimation
The difference between an estimated value and the true value.
Error variance
The variance of an error term or component in a model.
Estimate (or point estimate)
The numerical value of a point estimator.
Extra sum of squares method
A method used in regression analysis to conduct a hypothesis test for the additional contribution of one or more variables to a model.
Finite population correction factor
A term in the formula for the variance of a hypergeometric random variable.
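Concretely, the finite population correction factor is (N − n)/(N − 1): it shrinks the binomial variance n·p·(1 − p) to the hypergeometric variance when sampling without replacement. A short arithmetic sketch with made-up numbers:

```python
# Minimal sketch of the finite population correction factor (FPC) using
# illustrative numbers: a population of N = 100 items, D = 40 "successes",
# and a sample of n = 10 drawn without replacement.
N, D, n = 100, 40, 10
p = D / N

fpc = (N - n) / (N - 1)

# Hypergeometric variance = binomial variance times the FPC.
var_binomial = n * p * (1 - p)
var_hypergeometric = var_binomial * fpc

print(round(fpc, 4))                 # 0.9091
print(round(var_hypergeometric, 4))  # 2.1818
```

As n becomes small relative to N, the FPC approaches 1 and the hypergeometric and binomial variances essentially coincide.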
Fraction defective
In statistical quality control, that portion of a number of units or the output of a process that is defective.
Fraction defective control chart
See P chart.