
# Solutions for Chapter 5.4: Distributions of Functions of Random Variables

## Full solutions for Probability and Statistical Inference | 9th Edition

ISBN: 9780321923271


Probability and Statistical Inference, 9th edition, is associated with ISBN 9780321923271. Chapter 5.4: Distributions of Functions of Random Variables includes 40 full step-by-step solutions, and more than 81572 students have viewed solutions from this chapter. This textbook survival guide covers the book's remaining chapters and their solutions as well.

Key Statistics Terms and definitions covered in this textbook
• α-error (or α-risk)

In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).

• Axioms of probability

A set of rules that probabilities defined on a sample space must follow. See Probability.

• Binomial random variable

A discrete random variable that equals the number of successes in a fixed number of Bernoulli trials.
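A binomial variate can be simulated directly from its definition by counting successes across independent Bernoulli trials. A minimal sketch (not from the text; the parameter values n = 20, p = 0.3 are illustrative):

```python
import random

def binomial_sample(n, p, rng):
    # A binomial random variable is the number of successes
    # in n independent Bernoulli(p) trials.
    return sum(1 for _ in range(n) if rng.random() < p)

rng = random.Random(42)
samples = [binomial_sample(20, 0.3, rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)  # should be close to n * p = 6
```

The sample mean of many such draws approaches n·p, the binomial expected value.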

• C chart

An attribute control chart that plots the total number of defects per unit in a subgroup. Similar to a defects-per-unit or U chart.

• Central tendency

The tendency of data to cluster around some value. Central tendency is usually expressed by a measure of location such as the mean, median, or mode.

• Chi-square test

Any test of significance based on the chi-square distribution. The most common chi-square tests are (1) testing hypotheses about the variance or standard deviation of a normal distribution and (2) testing goodness of fit of a theoretical distribution to sample data.

• Conditional probability

The probability of an event given that the random experiment produces an outcome in another event.
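For finite sample spaces, the conditional probability P(A | B) = P(A ∩ B) / P(B) can be computed by direct enumeration. A small sketch using a hypothetical two-dice example (not from the text):

```python
from fractions import Fraction
from itertools import product

# Hypothetical example: roll two fair dice.
# A = "the sum is 8", B = "the first die shows an even number".
outcomes = list(product(range(1, 7), repeat=2))
B = [o for o in outcomes if o[0] % 2 == 0]
A_and_B = [o for o in B if sum(o) == 8]

# P(A | B) = P(A and B) / P(B); with equally likely outcomes this
# reduces to a ratio of counts.
p_A_given_B = Fraction(len(A_and_B), len(B))  # 3/18 = 1/6
```

With equally likely outcomes, conditioning simply restricts the sample space to B and re-counts.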

• Contour plot

A two-dimensional graphic used for a bivariate probability density function that displays curves for which the probability density function is constant.

• Cook’s distance

In regression, Cook’s distance is a measure of the influence of each individual observation on the estimates of the regression model parameters. It expresses the distance that the vector of model parameter estimates with the ith observation removed lies from the vector of model parameter estimates based on all observations. Large values of Cook’s distance indicate that the observation is influential.

• Covariance

A measure of association between two random variables obtained as the expected value of the product of the two random variables around their means; that is, Cov(X, Y) = E[(X − μ_X)(Y − μ_Y)].
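The defining formula translates directly into a sample estimate by replacing the means μ_X, μ_Y with sample means. A minimal sketch with illustrative data (not from the text):

```python
def covariance(xs, ys):
    # Sample analogue of Cov(X, Y) = E[(X - mu_X)(Y - mu_Y)]:
    # average the products of deviations from the sample means.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # ys = 2 * xs, so covariance is positive
cov = covariance(xs, ys)
```

Note that covariance(xs, xs) reduces to the variance of xs, the diagonal entry of the covariance matrix defined below.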

• Covariance matrix

A square matrix that contains the variances and covariances among a set of random variables, say, X₁, X₂, …, X_k. The main diagonal elements of the matrix are the variances of the random variables and the off-diagonal elements are the covariances between Xi and Xj. Also called the variance-covariance matrix. When the random variables are standardized to have unit variances, the covariance matrix becomes the correlation matrix.

• Crossed factors

Another name for factors that are arranged in a factorial experiment.

• Cumulative distribution function

For a random variable X, the function defined as F(x) = P(X ≤ x) that is used to specify the probability distribution.
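For a discrete random variable the CDF is a step function. A minimal sketch for a fair six-sided die (an illustrative example, not from the text):

```python
from math import floor

def cdf_die(x):
    # CDF of a fair six-sided die: F(x) = P(X <= x).
    # It is a right-continuous step function jumping by 1/6
    # at each integer 1..6.
    if x < 1:
        return 0.0
    if x >= 6:
        return 1.0
    return floor(x) / 6
```

For example F(3) = P(X ≤ 3) = 3/6, and F stays flat between the jump points.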

• Cumulative normal distribution function

The cumulative distribution of the standard normal distribution, often denoted as Φ(x) and tabulated in Appendix Table II.

• Defects-per-unit control chart

See U chart

• Erlang random variable

A continuous random variable that is the sum of a fixed number of independent, exponential random variables.
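Since an Erlang variate is by definition a sum of independent exponentials, it can be simulated by summing exponential draws; this is a classic example of a distribution of a function of random variables, the topic of this chapter. A minimal sketch (the parameter values k = 3, λ = 2 are illustrative, not from the text):

```python
import random

def erlang_sample(k, lam, rng):
    # An Erlang(k, lam) variate is the sum of k independent
    # Exponential(lam) variates.
    return sum(rng.expovariate(lam) for _ in range(k))

rng = random.Random(7)
samples = [erlang_sample(3, 2.0, rng) for _ in range(20_000)]
mean = sum(samples) / len(samples)  # theoretical mean is k / lam = 1.5
```

The sample mean of many draws approaches k/λ, consistent with linearity of expectation applied to the sum.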

• Error mean square

The error sum of squares divided by its number of degrees of freedom.

• Expected value

The expected value of a random variable X is its long-term average or mean value. In the continuous case, the expected value of X is E(X) = ∫₋∞^∞ x f(x) dx, where f(x) is the density function of the random variable X.
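The defining integral can be approximated numerically for any given density. A minimal sketch using the midpoint rule on an Exponential(λ = 2) density, whose true mean is 1/λ = 0.5 (the density and truncation bound are illustrative choices, not from the text):

```python
import math

def expected_value(pdf, a, b, n=100_000):
    # Midpoint-rule approximation of E(X) = integral of x * f(x) dx
    # over [a, b]; b should be large enough that the tail beyond it
    # contributes negligibly.
    h = (b - a) / n
    return sum((a + (i + 0.5) * h) * pdf(a + (i + 0.5) * h)
               for i in range(n)) * h

# Exponential density with rate 2: f(x) = 2 * exp(-2x) for x >= 0.
ev = expected_value(lambda x: 2 * math.exp(-2 * x), 0.0, 20.0)
```

Truncating the upper limit at 20 is safe here because the exponential tail beyond that point is vanishingly small.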

• Fixed factor (or fixed effect).

In analysis of variance, a factor or effect is considered fixed if all the levels of interest for that factor are included in the experiment. Conclusions are then valid about this set of levels only, although when the factor is quantitative, it is customary to fit a model to the data for interpolating between these levels.

• Harmonic mean

The harmonic mean of a set of data values is the reciprocal of the arithmetic mean of the reciprocals of the data values; that is, h = [(1/n) Σᵢ₌₁ⁿ (1/xᵢ)]⁻¹.
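The formula above can be coded directly: invert each value, average, then invert the result. A minimal sketch with illustrative data (not from the text):

```python
def harmonic_mean(values):
    # h = reciprocal of the arithmetic mean of the reciprocals:
    # h = n / (1/x_1 + 1/x_2 + ... + 1/x_n)
    n = len(values)
    return n / sum(1.0 / v for v in values)

h = harmonic_mean([1.0, 2.0, 4.0])  # 3 / (1 + 0.5 + 0.25) = 12/7
```

Python's standard library also provides `statistics.harmonic_mean` with the same behavior for positive inputs.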
