- Chapter Introduction: Data Analysis: Making Sense of Data
- Chapter 1: Exploring Data
- Chapter 1.1: Analyzing Categorical Data
- Chapter 1.2: Displaying Quantitative Data with Graphs
- Chapter 1.3: Describing Quantitative Data with Numbers
- Chapter 2: Modeling Distributions of Data
- Chapter 2.1: Describing Location in a Distribution
- Chapter 2.2: Density Curves and Normal Distributions
- Chapter 3: Describing Relationships
- Chapter 3.1: Scatterplots and Correlation
- Chapter 3.2: Least-Squares Regression
- Chapter 4: Designing Studies
- Chapter 4.1: Sampling and Surveys
- Chapter 4.2: Experiments
- Chapter 4.3: Using Studies Wisely
- Chapter 5: Probability: What Are the Chances?
- Chapter 5.1: Randomness, Probability, and Simulation
- Chapter 5.2: Probability Rules
- Chapter 5.3: Conditional Probability and Independence
- Chapter 6: Random Variables
- Chapter 6.1: Discrete and Continuous Random Variables
- Chapter 6.2: Transforming and Combining Random Variables
- Chapter 6.3: Binomial and Geometric Random Variables
- Chapter 7: Sampling Distributions
- Chapter 7.1: What Is a Sampling Distribution?
- Chapter 7.2: Sample Proportions
- Chapter 7.3: Sample Means
- Chapter 8: Estimating with Confidence
- Chapter 8.1: Confidence Intervals: The Basics
- Chapter 8.2: Estimating a Population Proportion
- Chapter 8.3: Estimating a Population Mean
- Chapter 9: Testing a Claim
- Chapter 9.1: Significance Tests: The Basics
- Chapter 9.2: Tests About a Population Proportion
- Chapter 9.3: Tests About a Population Mean
- Chapter 10: Comparing Two Populations or Groups
- Chapter 10.1: Comparing Two Proportions
- Chapter 10.2: Comparing Two Means
- Chapter 11: Inference for Distributions of Categorical Data
- Chapter 11.1: Chi-Square Tests for Goodness of Fit
- Chapter 11.2: Inference for Two-Way Tables
- Chapter 12: More About Regression
- Chapter 12.1: Inference for Linear Regression
- Chapter 12.2: Transforming to Achieve Linearity
The Practice of Statistics 5th Edition - Solutions by Chapter
All possible (subsets) regressions
A method of variable selection in regression that examines all possible subsets of the candidate regressor variables. Efficient computer algorithms have been developed for implementing all possible regressions.
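The procedure above can be sketched directly: fit an ordinary least-squares model for every non-empty subset of the candidate regressors and compare the fits. This is a minimal illustration using NumPy only; the data, function name, and use of plain R² as the comparison criterion are assumptions for the example (plain R² always favors the full model, so in practice a penalized criterion such as adjusted R² or Mallows' Cp is used instead).

```python
# All possible (subsets) regressions: fit OLS on every subset of the
# candidate regressors and rank the fits. Illustrative sketch only.
from itertools import combinations
import numpy as np

def all_subsets_regression(X, y):
    """Fit OLS (with intercept) for every non-empty subset of columns
    of X. Returns a list of (subset, R^2) pairs, best first."""
    n, k = X.shape
    sst = np.sum((y - y.mean()) ** 2)          # total sum of squares
    results = []
    for r in range(1, k + 1):
        for cols in combinations(range(k), r):
            A = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            sse = np.sum((y - A @ beta) ** 2)  # residual sum of squares
            results.append((cols, 1.0 - sse / sst))
    return sorted(results, key=lambda t: t[1], reverse=True)

# Simulated data: y depends on x0 and x1 only; x2 is noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=50)
best_cols, best_r2 = all_subsets_regression(X, y)[0]
```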
Attribute control chart
Any control chart for a discrete random variable. See Variables control chart.
Backward elimination
A method of variable selection in regression that begins with all of the candidate regressor variables in the model and eliminates the insignificant regressors one at a time until only significant regressors remain.
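One way to sketch this elimination loop: at each step compute a partial F statistic for every regressor still in the model, and drop the weakest one while its statistic falls below a cutoff. The cutoff value (4.0), the simulated data, and the function names are assumptions for illustration, not prescribed by the definition.

```python
# Backward elimination sketch: repeatedly drop the regressor with the
# smallest partial F statistic until all remaining regressors clear
# the (assumed) cutoff f_out.
import numpy as np

def sse(cols, X, y):
    """Residual sum of squares for an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ beta) ** 2))

def backward_elimination(X, y, f_out=4.0):
    n = len(y)
    active = list(range(X.shape[1]))
    while active:
        full_sse = sse(active, X, y)
        df = n - (len(active) + 1)        # residual degrees of freedom
        # partial F: extra SSE from deleting regressor j, scaled by MSE
        fstats = {j: (sse([k for k in active if k != j], X, y) - full_sse)
                     / (full_sse / df)
                  for j in active}
        weakest = min(fstats, key=fstats.get)
        if fstats[weakest] >= f_out:      # all remaining are significant
            break
        active.remove(weakest)
    return active

# Simulated data: only x0 and x2 actually drive y.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))
y = 3.0 * X[:, 0] + 2.0 * X[:, 2] + rng.normal(scale=0.5, size=60)
selected = backward_elimination(X, y)
```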
Bayes estimator
An estimator for a parameter obtained from a Bayesian method that uses a prior distribution for the parameter along with the conditional distribution of the data given the parameter to obtain the posterior distribution of the parameter. The estimator is obtained from the posterior distribution.
Bias
An effect that systematically distorts a statistical result or estimate, preventing it from representing the true quantity of interest.
Confidence level
Another term for the confidence coefficient.
Continuity correction
A correction factor used to improve the approximation to binomial probabilities from a normal distribution.
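Concretely, the usual correction replaces P(X ≤ x) for X ~ Binomial(n, p) with Φ((x + 0.5 − np) / √(np(1 − p))): the half-unit shift accounts for approximating a discrete variable by a continuous one. The sketch below, using only the standard library, compares the corrected approximation with the exact binomial sum; the particular values n = 100, p = 0.5, x = 45 are illustrative.

```python
# Continuity correction: normal approximation to a binomial CDF.
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def binom_cdf_normal_approx(x, n, p):
    mu = n * p
    sigma = math.sqrt(n * p * (1.0 - p))
    return phi((x + 0.5 - mu) / sigma)   # +0.5 is the correction

def binom_cdf_exact(x, n, p):
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(x + 1))

approx = binom_cdf_normal_approx(45, 100, 0.5)
exact = binom_cdf_exact(45, 100, 0.5)
```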
Cook’s distance
In regression, Cook’s distance is a measure of the influence of each individual observation on the estimates of the regression model parameters. It expresses the distance that the vector of model parameter estimates with the ith observation removed lies from the vector of model parameter estimates based on all observations. Large values of Cook’s distance indicate that the observation is influential.
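The definition above can be computed directly: refit the model with observation i deleted and measure how far the coefficient vector moves, scaled by the number of parameters and the error mean square. This leave-one-out sketch is illustrative (the simulated data and planted outlier are assumptions); in practice the same quantity is obtained more cheaply from hat-matrix diagonals.

```python
# Cook's distance computed from its definition via leave-one-out refits.
import numpy as np

def cooks_distance(X, y):
    """X includes the intercept column; returns D_i for each observation."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    mse = np.sum((y - X @ beta) ** 2) / (n - p)   # error mean square
    XtX = X.T @ X
    d = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        diff = beta_i - beta
        d[i] = diff @ XtX @ diff / (p * mse)
    return d

# Simulated straight-line data with one planted influential outlier.
rng = np.random.default_rng(2)
x = rng.normal(size=30)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=30)
x[0], y[0] = 5.0, -5.0            # high leverage, large residual
X = np.column_stack([np.ones(30), x])
d = cooks_distance(X, y)
```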
Cumulative normal distribution function
The cumulative distribution function of the standard normal distribution, often denoted Φ(x) and tabulated in Appendix Table II.
Degrees of freedom
The number of independent comparisons that can be made among the elements of a sample. The term is analogous to the number of degrees of freedom for an object in a dynamic system, which is the number of independent coordinates required to determine the motion of the object.
Distribution function
Another name for a cumulative distribution function.
Enumerative study
A study in which a sample from a population is used to make inferences about the population. See Analytic study.
Error mean square
The error sum of squares divided by its number of degrees of freedom.
Event
A subset of a sample space.
Extra sum of squares method
A method used in regression analysis to conduct a hypothesis test for the additional contribution of one or more variables to a model.
First-order model
A model that contains only first-order terms. For example, the first-order response surface model in two variables is y = β₀ + β₁x₁ + β₂x₂ + ε. A first-order model is also called a main effects model.
Fixed factor (or fixed effect)
In analysis of variance, a factor or effect is considered fixed if all the levels of interest for that factor are included in the experiment. Conclusions are then valid about this set of levels only, although when the factor is quantitative, it is customary to fit a model to the data for interpolating between these levels.
Frequency distribution
An arrangement of the frequencies of observations in a sample or population according to the values that the observations take on.
Geometric random variable
A discrete random variable that is the number of Bernoulli trials until a success occurs.
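For a geometric random variable with success probability p, the probability that the first success occurs on trial k is P(X = k) = (1 − p)^(k−1) · p for k = 1, 2, …, and the mean is 1/p. A minimal sketch of the pmf, with p = 0.3 chosen purely for illustration:

```python
# Geometric pmf: probability the first success occurs on trial k.
def geometric_pmf(k, p):
    return (1.0 - p) ** (k - 1) * p

p = 0.3
# Summing over a long range: total mass approaches 1 and the
# expected value approaches 1/p.
mass = sum(geometric_pmf(k, p) for k in range(1, 200))
mean = sum(k * geometric_pmf(k, p) for k in range(1, 200))
```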