Statistical Techniques in Business and Economics, 15th Edition - Solutions by Chapter

- Chapter 1: What Is Statistics?
- Chapter 2: Describing Data: Frequency Tables, Frequency Distributions, and Graphic Presentation
- Chapter 3: Describing Data: Numerical Measures
- Chapter 4: Describing Data: Displaying and Exploring Data
- Chapter 5: A Survey of Probability Concepts
- Chapter 6: Discrete Probability Distributions
- Chapter 7: Continuous Probability Distributions
- Chapter 8: Sampling Methods and the Central Limit Theorem
- Chapter 9: Estimation and Confidence Intervals
- Chapter 10: One-Sample Tests of Hypothesis
- Chapter 11: Two-Sample Tests of Hypothesis
- Chapter 12: Analysis of Variance
- Chapter 13: Correlation and Linear Regression
- Chapter 14: Multiple Regression Analysis
- Chapter 15: Index Numbers
- Chapter 16: Time Series and Forecasting
- Chapter 17: Nonparametric Methods: Goodness-of-Fit Tests
- Chapter 18: Nonparametric Methods: Analysis of Ranked Data
- Chapter 19: Statistical Process Control and Quality Management
- Chapter 20: An Introduction to Decision Theory
Addition rule
A formula used to determine the probability of the union of two (or more) events from the probabilities of the events and their intersection(s).
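For two events the rule reads P(A or B) = P(A) + P(B) - P(A and B). A minimal sketch in Python; the card-deck numbers are an illustrative example, not taken from the text:

```python
# Addition rule sketch: P(A or B) = P(A) + P(B) - P(A and B).
def union_probability(p_a, p_b, p_a_and_b):
    """Probability of the union of two events via the addition rule."""
    return p_a + p_b - p_a_and_b

# Illustrative example: draw one card from a standard 52-card deck.
# A = "heart" (13/52), B = "face card" (12/52), both = 3/52.
p_union = union_probability(13 / 52, 12 / 52, 3 / 52)
print(round(p_union, 4))  # 22/52 ≈ 0.4231
```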
Asymptotic relative efficiency (ARE)
Used to compare hypothesis tests. The ARE of one test relative to another is the limiting ratio of the sample sizes necessary to obtain identical error probabilities for the two procedures.
Backward elimination
A method of variable selection in regression that begins with all of the candidate regressor variables in the model and eliminates the insignificant regressors one at a time until only significant regressors remain.
Box plot (or box and whisker plot)
A graphical display of data in which the box contains the middle 50% of the data (the interquartile range) with the median dividing it, and the whiskers extend to the smallest and largest values (or some defined lower and upper limits).
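A hedged sketch of the five-number summary behind a box plot, using made-up data; the 1.5 x IQR fences shown here are one common whisker convention, not the only one:

```python
import statistics

# Box-plot summary sketch (data values are made up): the box spans
# Q1..Q3 with the median inside it; a common convention places the
# whisker limits 1.5 * IQR beyond the box, flagging points outside
# those fences as outliers.
data = sorted([7, 15, 36, 39, 40, 41, 42, 43, 47, 49])
q1, median, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in data if x < lower_fence or x > upper_fence]
print(q1, median, q3)  # 36.75 40.5 42.75
print(iqr, outliers)   # 6.0 [7, 15]
```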
Central limit theorem
The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
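A quick simulation illustrating the simplest form of the theorem: sums of n independent Uniform(0, 1) variables (theoretical mean n/2, variance n/12) look approximately normal for moderate n. The sample sizes and seed are arbitrary choices for the sketch:

```python
import random
import statistics

# CLT illustration: sums of n independent Uniform(0, 1) draws.
random.seed(1)
n, reps = 30, 20_000
sums = [sum(random.random() for _ in range(n)) for _ in range(reps)]

mean = statistics.fmean(sums)  # should be near n / 2 = 15
sd = statistics.stdev(sums)    # should be near sqrt(n / 12) ≈ 1.58
# For an approximately normal shape, about 68% of the sums should
# fall within one standard deviation of the mean.
within_1sd = sum(abs(s - n / 2) <= sd for s in sums) / reps
print(round(mean, 2), round(sd, 2), round(within_1sd, 2))
```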
Conditional probability mass function
The probability mass function of the conditional probability distribution of a discrete random variable.
Contrast
A linear function of treatment means with coefficients that total zero. A contrast is a summary of treatment means that is of interest in an experiment.
Correlation matrix
A square matrix that contains the correlations among a set of random variables, say, X1, X2, ..., Xk. The main diagonal elements of the matrix are unity and the off-diagonal elements rij are the correlations between Xi and Xj.
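A minimal sketch of the two-variable case, with made-up data: the sample correlation r12 sits off the unit diagonal of a 2x2 correlation matrix:

```python
import math
import statistics

# Sample correlation between two variables (data are made up).
def correlation(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den

x1 = [1, 2, 3, 4, 5]
x2 = [2, 1, 4, 3, 6]
r12 = correlation(x1, x2)
# Unit diagonal, correlations off the diagonal (symmetric).
corr_matrix = [[1.0, r12],
               [r12, 1.0]]
print(round(r12, 3))  # ≈ 0.822
```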
Covariance
A measure of association between two random variables obtained as the expected value of the product of the two random variables around their means; that is, Cov(X, Y) = E[(X − μX)(Y − μY)].
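A sketch of the definition computed as a sample covariance (with the usual n − 1 divisor); the data are made up, with ys chosen as exactly 2 × xs so that Cov(X, Y) = 2 Var(X):

```python
import statistics

# Sample analogue of Cov(X, Y) = E[(X - mu_X)(Y - mu_Y)],
# using an n - 1 divisor (data are made up).
def sample_covariance(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]  # ys = 2 * xs, so Cov(X, Y) = 2 * Var(X) = 5.0
print(sample_covariance(xs, ys))  # 5.0
```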
Defects-per-unit control chart
See U chart
Designed experiment
An experiment in which the tests are planned in advance and the plans usually incorporate statistical models. See Experiment.
Enumerative study
A study in which a sample from a population is used to make inferences about the population. See Analytic study.
Erlang random variable
A continuous random variable that is the sum of a fixed number of independent, exponential random variables.
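The definition can be simulated directly: an Erlang(k, λ) draw is the sum of k independent Exponential(λ) draws, so its mean should approach k/λ. A hedged sketch with arbitrary parameter choices:

```python
import random
import statistics

# Erlang(k, lam) sketch: the sum of k independent Exponential(lam)
# draws; the sample mean should approach k / lam.
def erlang_sample(k, lam, rng):
    return sum(rng.expovariate(lam) for _ in range(k))

rng = random.Random(7)
k, lam = 4, 2.0
draws = [erlang_sample(k, lam, rng) for _ in range(50_000)]
print(round(statistics.fmean(draws), 2))  # close to k / lam = 2.0
```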
Error sum of squares
In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although the latter term is better reserved for the case in which the sum of squares is based on the remnants of a model-fitting process rather than on replication.
Extra sum of squares method
A method used in regression analysis to conduct a hypothesis test for the additional contribution of one or more variables to a model.
First-order model
A model that contains only first-order terms. For example, the first-order response surface model in two variables is y = β0 + β1x1 + β2x2 + ε. A first-order model is also called a main effects model.
Fixed factor (or fixed effect)
In analysis of variance, a factor or effect is considered fixed if all the levels of interest for that factor are included in the experiment. Conclusions are then valid about this set of levels only, although when the factor is quantitative, it is customary to fit a model to the data for interpolating between these levels.
Fraction defective control chart
See P chart
Fractional factorial experiment
A type of factorial experiment in which not all possible treatment combinations are run. This is usually done to reduce the size of an experiment with several factors.
Frequency distribution
An arrangement of the frequencies of observations in a sample or population according to the values that the observations take on.
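A minimal sketch of such an arrangement, tallying a made-up sample with a counter keyed by observed value:

```python
from collections import Counter

# Frequency distribution sketch: tally how often each value occurs
# in a made-up sample, listed in order of the values.
scores = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
freq = Counter(scores)
for value in sorted(freq):
    print(value, freq[value])
# e.g. the value 5 occurs 3 times and the value 1 occurs twice
```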