 9.1: Alice models the time that she spends each week on homework as an e...
 9.2: Consider a sequence of independent coin tosses, and let θ be the pr...
 9.3: Sampling and estimation of sums. We have a box with k balls; k of t...
 9.4: Mixture models. Let the PDF of a random variable X be the mixture o...
 9.5: Unstable particles are emitted from a source and decay at a distanc...
 9.6: Consider a study of student heights in a middle school. Assume that...
 9.7: Estimating the parameter of a Poisson random variable. Derive the M...
 9.8: Estimating the parameter of a uniform random variable I. We are giv...
 9.9: Estimating the parameter of a uniform random variable II. We are gi...
 9.10: A source emits a random number of photons K each time that it is tr...
 9.11: Sufficient statistics - factorization criterion. Consider an observ...
 9.12: Examples of a sufficient statistic I. Show that q(X) = Σᵢ Xᵢ is a s...
 9.13: Examples of a sufficient statistic II. Let X1, . . . , Xn be i.i...
 9.14: Rao-Blackwell theorem. This problem shows that a general estimator ...
 9.15: Let X1, ..., Xn be i.i.d. random variables that are uniformly dis...
 9.16: An electric utility company tries to estimate the relation between ...
 9.17: Given the five data pairs (Xi, Yi) in the table below, X Y 0.798 2...
 9.18: Unbiasedness and consistency in linear regression. In a probabilis...
 9.19: Variance estimate in linear regression. Under the same assumptions ...
 9.20: A random variable X is characterized by a normal PDF with mean 10 ...
 9.21: A normal random variable X is known to have a mean of 60 and a stan...
 9.22: There are two hypotheses about the probability of heads for a given...
 9.23: The number of phone calls received by a ticket agency on any one day...
 9.24: We have received a shipment of light bulbs whose lifetimes are mode...
 9.25: Let X be a normal random variable with mean μ and unit variance. ...
 9.26: We have five observations drawn independently from a normal distrib...
 9.27: A plant grows on two distant islands. Suppose that its life span (m...
 9.28: A company considers buying a machine to manufacture a certain item....
 9.29: The values of five independent samples of a Poisson random variable...
 9.30: A surveillance camera periodically checks a certain area and record...
Solutions for Chapter 9: Classical Statistical Inference
Full solutions for Introduction to Probability,  2nd Edition
ISBN: 9781886529236
This textbook survival guide was created for the textbook Introduction to Probability, 2nd edition (ISBN 9781886529236). Chapter 9: Classical Statistical Inference includes 30 full step-by-step solutions, and more than 8529 students have viewed step-by-step solutions from this chapter.

2^(k−p) factorial experiment
A fractional factorial experiment with k factors tested in a 2^(−p) fraction, with all factors tested at only two levels (settings) each.

Addition rule
A formula used to determine the probability of the union of two (or more) events from the probabilities of the events and their intersection(s).
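As a small numerical check of the addition rule (the die-roll events below are hypothetical, chosen only for illustration):

```python
# Addition rule: P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
# Hypothetical example: roll one fair die, A = "even", B = "greater than 3".
outcomes = range(1, 7)
A = {x for x in outcomes if x % 2 == 0}   # {2, 4, 6}
B = {x for x in outcomes if x > 3}        # {4, 5, 6}

p = lambda event: len(event) / 6          # uniform probability on the die

p_union = p(A) + p(B) - p(A & B)          # addition rule
assert abs(p_union - p(A | B)) < 1e-12    # matches direct computation, 4/6
```

Subtracting P(A ∩ B) prevents outcomes in both events (here 4 and 6) from being counted twice.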

Adjusted R²
A variation of the R² statistic that compensates for the number of parameters in a regression model. Essentially, the adjustment is a penalty for increasing the number of parameters in the model.

Alias
In a fractional factorial experiment, when certain factor effects cannot be estimated uniquely, they are said to be aliased.
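The Adjusted R² penalty can be sketched with the standard formula (the R², n, and predictor counts below are hypothetical):

```python
def adjusted_r2(r2, n, k):
    """Adjusted R² for a model with k predictors fit to n observations.

    Applies the penalty factor (n - 1) / (n - k - 1) to 1 - R²,
    so adding parameters that do not improve fit lowers the statistic.
    """
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With the same raw R² = 0.90 on n = 20 points, a model with more
# predictors receives a lower adjusted value.
assert adjusted_r2(0.90, 20, 1) > adjusted_r2(0.90, 20, 5)
```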

Alternative hypothesis
In statistical hypothesis testing, this is a hypothesis other than the one that is being tested. The alternative hypothesis contains feasible conditions, whereas the null hypothesis specifies conditions that are under test.

Average
See Arithmetic mean.

Bayes' theorem
An equation for a conditional probability such as P(A | B) in terms of the reverse conditional probability P(B | A).
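A minimal numerical sketch of Bayes' theorem, with P(B) expanded by total probability; the prevalence and test-accuracy figures are hypothetical:

```python
# Bayes' theorem: P(A | B) = P(B | A) P(A) / P(B), where
# P(B) = P(B | A) P(A) + P(B | not A) P(not A).
# Hypothetical numbers: disease prevalence 1%, test sensitivity 95%,
# false-positive rate 5%.
p_A = 0.01             # P(A): has the disease
p_B_given_A = 0.95     # P(B | A): positive test given disease
p_B_given_notA = 0.05  # P(B | not A): false positive

p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
p_A_given_B = p_B_given_A * p_A / p_B
# p_A_given_B is only about 0.16: with a rare disease, most positives
# come from the much larger healthy population.
```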

Biased estimator
An estimator whose expected value is not equal to the parameter it estimates. See also Unbiased estimator.

Coefficient of determination
See R².

Comparative experiment
An experiment in which the treatments (experimental conditions) that are to be studied are included in the experiment. The data from the experiment are used to evaluate the treatments.

Completely randomized design (or experiment)
A type of experimental design in which the treatments or design factors are assigned to the experimental units in a random manner. In designed experiments, a completely randomized design results from running all of the treatment combinations in random order.
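The random assignment described above can be sketched as a simple shuffle (the treatment labels, replicate counts, and seed are hypothetical):

```python
import random

# Completely randomized design: assign treatments to experimental units
# entirely at random. Hypothetical setup: 3 treatments, 4 replicates each,
# 12 experimental units.
units = list(range(12))
treatments = ["A", "B", "C"] * 4    # 4 replicates of each treatment

rng = random.Random(0)              # fixed seed so the design is reproducible
rng.shuffle(treatments)             # random assignment / run order

design = dict(zip(units, treatments))
# Every unit gets exactly one treatment, and each treatment keeps
# its 4 replicates; only the assignment order is randomized.
```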

Confidence level
Another term for the confidence coefficient.

Continuous distribution
A probability distribution for a continuous random variable.

Contour plot
A two-dimensional graphic used for a bivariate probability density function that displays curves for which the probability density function is constant.

Correction factor
A term used for the quantity (1/n)(Σᵢ₌₁ⁿ xᵢ)², which is subtracted from Σᵢ₌₁ⁿ xᵢ² to give the corrected sum of squares Σᵢ₌₁ⁿ (xᵢ − x̄)². The correction factor can also be written as n·x̄².
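The correction-factor identity is easy to verify numerically (the data values below are hypothetical):

```python
# Corrected sum of squares: Σ (x_i − x̄)² equals Σ x_i² minus the
# correction factor (Σ x_i)² / n, which also equals n · x̄².
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # hypothetical data
n = len(xs)
xbar = sum(xs) / n

correction = sum(xs) ** 2 / n                    # the correction factor
corrected_ss = sum(x * x for x in xs) - correction

# Agrees with the direct computation around the sample mean.
assert abs(corrected_ss - sum((x - xbar) ** 2 for x in xs)) < 1e-9
assert abs(correction - n * xbar ** 2) < 1e-9
```

The shortcut form avoids computing x̄ before squaring deviations, which is why it appears in hand-calculation formulas.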

Critical region
In hypothesis testing, this is the portion of the sample space of a test statistic that will lead to rejection of the null hypothesis.

Cumulative distribution function
For a random variable X, the function of x defined as P(X ≤ x) that is used to specify the probability distribution.
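As a concrete sketch of a CDF (the fair-die distribution is a hypothetical example):

```python
from fractions import Fraction

# CDF of a fair six-sided die: F(x) = P(X ≤ x).
def cdf(x):
    # Count the faces with value ≤ x, out of 6 equally likely faces.
    return Fraction(sum(1 for face in range(1, 7) if face <= x), 6)

assert cdf(0) == 0                 # below the support, probability 0
assert cdf(3) == Fraction(1, 2)    # half the faces are ≤ 3
assert cdf(6) == 1                 # the whole support is covered
```

The CDF is nondecreasing, runs from 0 to 1, and fully determines the distribution.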

Dependent variable
The response variable in regression or a designed experiment.

Distribution free method(s)
Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).

Error sum of squares
In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although this is really a better term to use only when the sum of squares is based on the remnants of a model-fitting process and not on replication.

Experiment
A series of tests in which changes are made to the system under study.