- 10.5.1: Let X be the height of a man and Y the height of his daughter (both...
- 10.5.2: The joint probability density function of X and Y is bivariate norm...
- 10.5.3: Let the joint probability density function of X and Y be bivariate ...
- 10.5.4: Let f (x, y) be a joint bivariate normal probability density functi...
- 10.5.5: Let the joint probability density function of two random variables ...
- 10.5.6: Let Z and W be independent standard normal random variables. Let X ...
- 10.5.7: Let the joint probability density function of random variables X an...
Solutions for Chapter 10.5: Bivariate Normal Distribution
Full solutions for Fundamentals of Probability, with Stochastic Processes | 3rd Edition
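The problems above all work with the bivariate normal joint density. As a quick illustrative sketch (not taken from the text, parameter names are mine), here is the standard joint pdf evaluated numerically; when the correlation ρ is zero it factors into the product of the two marginal normal densities:

```python
import math

def bivariate_normal_pdf(x, y, mu_x=0.0, mu_y=0.0,
                         sigma_x=1.0, sigma_y=1.0, rho=0.0):
    """Joint pdf of a bivariate normal; rho is the correlation of X and Y."""
    zx = (x - mu_x) / sigma_x
    zy = (y - mu_y) / sigma_y
    q = (zx ** 2 - 2 * rho * zx * zy + zy ** 2) / (1 - rho ** 2)
    norm = 2 * math.pi * sigma_x * sigma_y * math.sqrt(1 - rho ** 2)
    return math.exp(-q / 2) / norm

def std_normal_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

# With rho = 0 the joint density is the product of the marginals,
# i.e. X and Y are independent.
print(bivariate_normal_pdf(0.5, -0.3, rho=0.0))
print(std_normal_pdf(0.5) * std_normal_pdf(-0.3))
```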
2^k factorial experiment
A full factorial experiment with k factors and all factors tested at only two levels (settings) each.
2^(k−p) factorial experiment
A fractional factorial experiment with k factors tested in a 2^(−p) fraction with all factors tested at only two levels (settings) each.
Acceptance region
In hypothesis testing, a region in the sample space of the test statistic such that if the test statistic falls within it, the null hypothesis cannot be rejected. This terminology is used because rejection of H0 is always a strong conclusion and acceptance of H0 is generally a weak conclusion.
Addition rule
A formula used to determine the probability of the union of two (or more) events from the probabilities of the events and their intersection(s).
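As a small worked sketch of the rule (my own example, not from the glossary), take one roll of a fair die with A = "even" and B = "greater than 3"; P(A ∪ B) = P(A) + P(B) − P(A ∩ B) agrees with counting the union directly:

```python
from fractions import Fraction

# Sample space: one roll of a fair die.
omega = range(1, 7)
A = {x for x in omega if x % 2 == 0}   # {2, 4, 6}
B = {x for x in omega if x > 3}        # {4, 5, 6}

p = lambda event: Fraction(len(event), 6)

union_by_rule = p(A) + p(B) - p(A & B)  # 3/6 + 3/6 - 2/6
print(union_by_rule)                    # 2/3
print(p(A | B))                         # direct count gives the same 2/3
```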
Alternative hypothesis
In statistical hypothesis testing, this is a hypothesis other than the one that is being tested. The alternative hypothesis contains feasible conditions, whereas the null hypothesis specifies conditions that are under test.
Analytic study
A study in which a sample from a population is used to make inference to a future population. Stability needs to be assumed. See Enumerative study.
Central composite design (CCD)
A second-order response surface design in k variables consisting of a two-level factorial, 2k axial runs, and one or more center points. The two-level factorial portion of a CCD can be a fractional factorial design when k is large. The CCD is the most widely used design for fitting a second-order model.
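The three pieces of a CCD (2^k factorial runs, 2k axial runs, center points) can be enumerated directly. A minimal sketch, assuming the common "rotatable" default α = (2^k)^(1/4) for the axial distance (the function name is mine):

```python
from itertools import product

def central_composite(k, alpha=None, n_center=1):
    """Coded design points of a CCD: 2^k factorial + 2k axial + center runs."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25   # rotatable choice, a common convention
    factorial = list(product([-1.0, 1.0], repeat=k))
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(tuple(pt))
    center = [(0.0,) * k] * n_center
    return factorial + axial + center

runs = central_composite(2)
print(len(runs))  # 4 factorial + 4 axial + 1 center = 9 runs
```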
Conditional probability
The probability of an event given that the random experiment produces an outcome in another event.
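A quick counting illustration of this definition (my own example): with two fair dice, the probability that the sum is 8 given that the first die shows an even number is found by restricting the sample space to the conditioning event:

```python
from fractions import Fraction

# Two fair dice: P(sum = 8 | first die is even).
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
B = [(i, j) for (i, j) in omega if i % 2 == 0]       # conditioning event, 18 outcomes
A_and_B = [(i, j) for (i, j) in B if i + j == 8]     # (2,6), (4,4), (6,2)

p_cond = Fraction(len(A_and_B), len(B))
print(p_cond)  # 3/18 = 1/6
```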
Conditional probability density function
The probability density function of the conditional probability distribution of a continuous random variable.
Confidence interval
If it is possible to write a probability statement of the form P(L ≤ θ ≤ U) = 1 − α, where L and U are functions of only the sample data and θ is a parameter, then the interval between L and U is called a confidence interval (or a 100(1 − α)% confidence interval). The interpretation is that a statement that the parameter θ lies in this interval will be true 100(1 − α)% of the times that such a statement is made.
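For a concrete case of L and U as functions of the data, a sketch of the familiar z-interval for a normal mean with known σ (my own example; the 1.96 multiplier gives a 95% interval):

```python
import math
import random
import statistics

random.seed(1)

def z_confidence_interval(data, sigma, z=1.96):
    """95% CI for a normal mean with known sigma: xbar +/- z * sigma / sqrt(n)."""
    xbar = statistics.fmean(data)
    half_width = z * sigma / math.sqrt(len(data))
    return xbar - half_width, xbar + half_width

sample = [random.gauss(10.0, 2.0) for _ in range(50)]
L, U = z_confidence_interval(sample, sigma=2.0)
print(L, U)  # interval centered at the sample mean
```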
Continuous random variable
A random variable with an interval (either finite or infinite) of real numbers for its range.
Correlation
In the most general usage, a measure of the interdependence among data. The concept may include more than two variables. The term is most commonly used in a narrow sense to express the relationship between quantitative variables or ranks.
Correlation matrix
A square matrix that contains the correlations among a set of random variables, say, X1, X2, …, Xk. The main diagonal elements of the matrix are unity and the off-diagonal elements rij are the correlations between Xi and Xj.
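The two stated properties (unit diagonal, symmetric off-diagonal correlations) are easy to check on simulated data; a sketch using NumPy's `corrcoef`, with variables constructed (my choice) so X2 correlates positively and X3 negatively with X1:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + 0.5 * rng.normal(size=500)    # noisy copy of x1
x3 = -x1 + 0.5 * rng.normal(size=500)   # noisy negated copy of x1

R = np.corrcoef([x1, x2, x3])           # rows are variables
print(R.shape)                          # (3, 3)
print(np.diag(R))                       # main diagonal is all ones
```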
Discrete random variable
A random variable with a finite (or countably infinite) range.
Distribution free method(s)
Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).
Error sum of squares
In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although that term is better reserved for the case where the sum of squares is based on the remnants of a model-fitting process and not on replication.
F distribution
The distribution of the random variable defined as the ratio of two independent chi-square random variables, each divided by its number of degrees of freedom.
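The definition can be checked by simulation: build each chi-square draw as a sum of squared standard normals, form the ratio, and compare the sample mean with the known F mean d2/(d2 − 2). A rough sketch (my own example):

```python
import random

random.seed(42)

def chi_square(df):
    """One chi-square draw: a sum of df squared standard normals."""
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))

def f_draw(d1, d2):
    """One F(d1, d2) draw: ratio of independent chi-squares over their dfs."""
    return (chi_square(d1) / d1) / (chi_square(d2) / d2)

samples = [f_draw(5, 10) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(mean)  # theoretical mean of F(5, 10) is 10 / (10 - 2) = 1.25
```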
Fisher’s least significant difference (LSD) method
A series of pair-wise hypothesis tests of treatment means in an experiment to determine which means differ.
Forward selection
A method of variable selection in regression, where variables are inserted one at a time into the model until no other variables that contribute significantly to the model can be found.
Fractional factorial experiment
A type of factorial experiment in which not all possible treatment combinations are run. This is usually done to reduce the size of an experiment with several factors.
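As a sketch of how such a fraction is chosen (my own example, using the standard defining relation I = ABC for a half-fraction of a 2^3 design), only the runs whose factor levels multiply to +1 are kept:

```python
from itertools import product

# Half-fraction of a 2^3 design via the defining relation I = ABC:
# keep only the runs where the product of the coded levels A*B*C is +1.
full = list(product([-1, 1], repeat=3))
half = [run for run in full if run[0] * run[1] * run[2] == 1]

print(len(full), len(half))  # 8 runs reduced to 4
for run in half:
    print(run)
```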