- 5.1.1: Consider a fault-tolerant multiprocessor computer system with two p...
- 5.1.2: Consider again the problem of 1M (1-megabyte) RAM chips supplied by...
- 5.1.3: Consider the operation of an online file updating system [MEND 1979...
- 5.1.4: X1 and X2 are independent random variables with Poisson distributio...
- 5.1.5: Let the execution times X and Y of two independent parallel process...
Solutions for Chapter 5.1: Introduction
Full solutions for Probability and Statistics with Reliability, Queuing, and Computer Science Applications | 2nd Edition
2^(k-p) factorial experiment
A fractional factorial experiment with k factors, tested in a 2^(-p) fraction, with all factors tested at only two levels (settings) each.
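A tiny sketch of the idea for k = 3, p = 1: rather than run all 2^3 = 8 treatment combinations, a half fraction runs only 2^(3-1) = 4, setting the third factor from the others. The factor names and the generator C = AB below are illustrative assumptions, not from the text.

```python
from itertools import product

def half_fraction():
    # A 2^(3-1) half fraction: vary A and B over both levels (-1, +1)
    # and set the third factor with the generator C = A*B, so only
    # 4 of the 8 possible runs are performed.
    runs = []
    for a, b in product((-1, 1), repeat=2):
        runs.append((a, b, a * b))  # C aliased with the AB interaction
    return runs

print(half_fraction())  # 4 runs instead of 8
```

Because C is set equal to AB in every run, the main effect of C cannot be separated from the AB interaction; that is the information given up by fractionating.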
α-error (or α-risk)
In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).
Acceptance region
In hypothesis testing, a region in the sample space of the test statistic such that if the test statistic falls within it, the null hypothesis cannot be rejected. This terminology is used because rejection of H0 is always a strong conclusion, while acceptance of H0 is generally a weak conclusion.
Additivity property of χ²
If two independent random variables X1 and X2 are distributed as chi-square with v1 and v2 degrees of freedom, respectively, then Y = X1 + X2 is a chi-square random variable with v = v1 + v2 degrees of freedom. This generalizes to any number of independent chi-square random variables.
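The additivity property can be checked by simulation, using the fact that a chi-square variate with v degrees of freedom is a sum of v squared standard normals. The degrees of freedom v1 = 3, v2 = 5 and the sample size below are illustrative choices.

```python
import random

random.seed(1)

def chi_square_sample(v):
    # A chi-square variate with v degrees of freedom is the sum of
    # v squared standard normal variates.
    return sum(random.gauss(0, 1) ** 2 for _ in range(v))

v1, v2 = 3, 5
n = 20000
ys = [chi_square_sample(v1) + chi_square_sample(v2) for _ in range(n)]

# If Y = X1 + X2 is chi-square with v = v1 + v2 = 8 degrees of
# freedom, its sample mean should be close to 8 (the mean of a
# chi-square distribution equals its degrees of freedom).
mean = sum(ys) / n
print(round(mean, 2))
```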
Bias
An effect that systematically distorts a statistical result or estimate, preventing it from representing the true quantity of interest.
Bimodal distribution
A distribution with two modes.
Box plot (or box and whisker plot)
A graphical display of data in which the box contains the middle 50% of the data (the interquartile range) with the median dividing it, and the whiskers extend to the smallest and largest values (or some defined lower and upper limits).
Chi-square test
Any test of significance based on the chi-square distribution. The most common chi-square tests are (1) testing hypotheses about the variance or standard deviation of a normal distribution and (2) testing goodness of fit of a theoretical distribution to sample data.
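The second kind of chi-square test can be illustrated with the Pearson goodness-of-fit statistic. The die-roll counts below are made-up data for the sketch.

```python
# Pearson chi-square goodness-of-fit statistic for a hypothetical
# die-rolling experiment: 60 rolls, observed counts per face vs. the
# expected count of 60/6 = 10 under a fair-die hypothesis.
observed = [8, 9, 12, 11, 10, 10]  # assumed data
expected = [sum(observed) / len(observed)] * len(observed)
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# The statistic is compared to a chi-square critical value with
# k - 1 = 5 degrees of freedom (11.07 at alpha = 0.05); a small
# value like this one gives no evidence against fairness.
print(round(chi2, 2))
```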
Conditional mean
The mean of the conditional probability distribution of a random variable.
Conditional probability mass function
The probability mass function of the conditional probability distribution of a discrete random variable.
Conditional variance
The variance of the conditional probability distribution of a random variable.
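A small worked example ties the last three definitions together; the joint pmf of (X, Y) below is assumed for the illustration.

```python
# Illustrative joint pmf of a discrete pair (X, Y).  The conditional
# pmf of Y given X = 1 is obtained by renormalizing the probabilities
# with x = 1; its mean and variance are the conditional mean and the
# conditional variance.
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

px1 = sum(p for (x, y), p in joint.items() if x == 1)         # P(X = 1)
cond = {y: p / px1 for (x, y), p in joint.items() if x == 1}  # pmf of Y | X = 1
cond_mean = sum(y * p for y, p in cond.items())               # E[Y | X = 1]
cond_var = sum((y - cond_mean) ** 2 * p for y, p in cond.items())

print(round(cond_mean, 4), round(cond_var, 4))  # 4/7 and 12/49
```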
Confounding
When a factorial experiment is run in blocks and the blocks are too small to contain a complete replicate of the experiment, one can run a fraction of the replicate in each block, but this results in losing information on some effects. These effects are linked with, or confounded with, the blocks. In general, when two factors are varied such that their individual effects cannot be determined separately, their effects are said to be confounded.
Confidence coefficient
The probability 1 − α associated with a confidence interval, expressing the probability that the stated interval will contain the true parameter value.
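The interpretation of 1 − α as a long-run coverage probability can be checked by simulation; the normal population, sample size, and number of trials below are assumed for the illustration.

```python
import random

random.seed(2)

# Coverage check: for a normal mean with known sigma, the interval
# xbar +/- 1.96*sigma/sqrt(n) should contain the true mean in about
# 95% of repeated samples (1 - alpha = 0.95).
mu, sigma, n, trials = 5.0, 2.0, 30, 4000   # assumed values
half = 1.96 * sigma / n ** 0.5
hits = 0
for _ in range(trials):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    if xbar - half <= mu <= xbar + half:
        hits += 1
coverage = hits / trials
print(round(coverage, 3))  # close to 0.95
```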
Confidence level
Another term for the confidence coefficient.
Cook’s distance
In regression, Cook’s distance is a measure of the influence of each individual observation on the estimates of the regression model parameters. It expresses the distance that the vector of model parameter estimates with the ith observation removed lies from the vector of model parameter estimates based on all observations. Large values of Cook’s distance indicate that the observation is influential.
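A leave-one-out sketch of this idea for simple linear regression y = b0 + b1·x, in pure Python; the toy data, which include one influential point, are assumed for the illustration.

```python
def fit(xs, ys):
    # Least-squares estimates (intercept, slope) for y = b0 + b1*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

def cooks_distance(xs, ys):
    n, p = len(xs), 2                     # p = number of parameters
    b0, b1 = fit(xs, ys)
    resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
    s2 = sum(r * r for r in resid) / (n - p)   # residual mean square
    sx, sxx = sum(xs), sum(x * x for x in xs)
    dists = []
    for i in range(n):
        # Refit with observation i removed.
        c0, c1 = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        d0, d1 = b0 - c0, b1 - c1
        # Quadratic form (b - b_(i))' X'X (b - b_(i)) / (p * s^2).
        quad = n * d0 * d0 + 2 * sx * d0 * d1 + sxx * d1 * d1
        dists.append(quad / (p * s2))
    return dists

xs = [1, 2, 3, 4, 10]
ys = [1.1, 1.9, 3.2, 4.1, 20.0]   # last point is far from the trend
d = cooks_distance(xs, ys)
print([round(v, 3) for v in d])    # the last observation dominates
```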
Deming’s 14 points
A management philosophy promoted by W. Edwards Deming that emphasizes the importance of change and quality.
Dependent variable
The response variable in regression or a designed experiment.
Error variance
The variance of an error term or component in a model.
Fisher’s least significant difference (LSD) method
A series of pair-wise hypothesis tests of treatment means in an experiment to determine which means differ.
Gamma function
A function used in the probability density function of a gamma random variable that can be considered to extend factorials.
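The factorial-extension property, Γ(n) = (n − 1)! for positive integers n, can be checked directly with the standard library:

```python
import math

# The gamma function extends the factorial: Gamma(n) = (n-1)! for
# positive integers n, and it is also defined at non-integer points,
# e.g. Gamma(1/2) = sqrt(pi).
for n in range(1, 6):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

print(math.gamma(5))                                   # 24.0 = 4!
print(math.isclose(math.gamma(0.5), math.pi ** 0.5))   # True
```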