BNAD277 Study Guide Test 3
This 14 page Study Guide was uploaded by Kristin Koelewyn on Tuesday April 26, 2016. The Study Guide belongs to BNAD277 at University of Arizona taught by Dr. S. Umashankar in Spring 2016. Since its upload, it has received 159 views. For similar materials see Business Statistics in Business at University of Arizona.
BNAD277 Test #3 Study Guide

Bnad277: Chapter 13a Notes: Experimental Design and Analysis of Variance
- An Introduction to Experimental Design and Analysis of Variance
  o Statistical studies can be classified as being either experimental or observational.
  o In an experimental study, one or more factors are controlled so that data can be obtained about how the factors influence the variables of interest.
  o In an observational study, no attempt is made to control the factors.
  o Cause-and-effect relationships are easier to establish in experimental studies than in observational studies.
  o Analysis of variance (ANOVA) can be used to analyze the data obtained from experimental or observational studies.
  o In this chapter three types of experimental designs are introduced:
    ▯ a completely randomized design
    ▯ a randomized block design
    ▯ a factorial experiment
  o A factor is a variable that the experimenter has selected for investigation.
  o A treatment is a level of a factor.
  o Experimental units are the objects of interest in the experiment.
  o A completely randomized design is an experimental design in which the treatments are randomly assigned to the experimental units.
  o Assumptions for Analysis of Variance:
    ▯ For each population, the response (dependent) variable is normally distributed.
    ▯ The variance of the response variable, denoted σ², is the same for all of the populations.
    ▯ The observations must be independent.
- Analysis of Variance and the Completely Randomized Design
  o Between-Treatments Estimate of Population Variance
    ▯ The estimate of σ² based on the variation of the sample means is called the mean square due to treatments and is denoted by MSTR: MSTR = SSTR/(k - 1).
  o Within-Treatments Estimate of Population Variance
    ▯ The estimate of σ² based on the variation of the sample observations within each sample is called the mean square error and is denoted by MSE: MSE = SSE/(nT - k).
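The two variance estimates above, and their ratio MSTR/MSE, can be computed by hand. A minimal Python sketch with made-up sample data (the three treatments and their values below are hypothetical, not from the course examples):

```python
# One-way ANOVA by hand: the between-treatments estimate (MSTR), the
# within-treatments estimate (MSE), and their ratio F = MSTR/MSE.
# The three samples are made up for illustration.
samples = [
    [18, 21, 20, 19],   # treatment 1
    [24, 23, 25, 22],   # treatment 2
    [20, 22, 21, 23],   # treatment 3
]

k = len(samples)                      # number of treatments
nT = sum(len(s) for s in samples)     # total number of observations
grand_mean = sum(x for s in samples for x in s) / nT

# Between-treatments estimate: MSTR = SSTR / (k - 1)
sstr = sum(len(s) * (sum(s) / len(s) - grand_mean) ** 2 for s in samples)
mstr = sstr / (k - 1)

# Within-treatments estimate: MSE = SSE / (nT - k)
sse = sum((x - sum(s) / len(s)) ** 2 for s in samples for x in s)
mse = sse / (nT - k)

F = mstr / mse                        # compare with F(k - 1, nT - k)
print(round(mstr, 2), round(mse, 2), round(F, 2))
```

With k = 3 treatments and nT = 12 observations here, F would be compared with an F distribution with 2 numerator and 9 denominator degrees of freedom.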
  o Comparing the Variance Estimates: The F Test
    ▯ If the null hypothesis is true and the ANOVA assumptions are valid, the sampling distribution of MSTR/MSE is an F distribution with MSTR d.f. equal to k - 1 and MSE d.f. equal to nT - k.
    ▯ If the means of the k populations are not equal, the value of MSTR/MSE will be inflated because MSTR overestimates σ².
  o ANOVA Table
    ▯ With the entire data set as one sample, the formula for computing the total sum of squares, SST, is: SST = ΣΣ(xij - x̄)², and SST partitions as SST = SSTR + SSE.
  o Test for the Equality of k Population Means
    ▯ Rejection Rule:
      • p-value approach: Reject H0 if p-value ≤ α
      • Critical value approach: Reject H0 if F ≥ Fα
- Multiple Comparison Procedures
  o Suppose that analysis of variance has provided statistical evidence to reject the null hypothesis of equal population means.
  o Fisher's least significant difference (LSD) procedure can be used to determine where the differences occur.
- Fisher's LSD Procedure
  o Hypotheses: H0: μi = μj versus Ha: μi ≠ μj
  o Test Statistic: t = (x̄i - x̄j) / sqrt(MSE(1/ni + 1/nj))
  o Rejection Rule:
    ▯ p-value approach: Reject H0 if p-value ≤ α
    ▯ Critical value approach: Reject H0 if t ≤ -t(α/2) or t ≥ t(α/2)
- Fisher's LSD Procedure Based on the Test Statistic x̄i - x̄j
  o Hypotheses: H0: μi = μj versus Ha: μi ≠ μj
  o Test Statistic: x̄i - x̄j
  o Rejection Rule: Reject H0 if |x̄i - x̄j| ≥ LSD, where LSD = t(α/2) sqrt(MSE(1/ni + 1/nj))
- Type I Error Rates
  o The comparison-wise Type I error rate α indicates the level of significance associated with a single pairwise comparison.
  o The experiment-wise Type I error rate αEW is the probability of making a Type I error on at least one of the k(k - 1)/2 pairwise comparisons.
  o The experiment-wise Type I error rate gets larger for problems with more populations (larger k).

Bnad277: Chapter 13b Notes: Experimental Design and Analysis of Variance
- Randomized Block Design
  o Experimental units are the objects of interest in the experiment.
  o A completely randomized design is an experimental design in which the treatments are randomly assigned to the experimental units.
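Fisher's LSD procedure described above reduces to computing a single cutoff value and comparing it with the difference in sample means. A sketch with hypothetical inputs (MSE, the sample sizes, and the means are made up; t_crit is the table value t(.025) with 9 degrees of freedom):

```python
# Fisher's LSD: a pairwise follow-up test after a significant ANOVA F test.
# MSE, the sample sizes, and the means are hypothetical; t_crit is the
# table value t(.025) with nT - k = 9 degrees of freedom.
import math

mse = 1.667                   # mean square error from the ANOVA
ni, nj = 4, 4                 # sizes of the two samples being compared
xbar_i, xbar_j = 23.5, 19.5   # the two treatment sample means

t_crit = 2.262                # t(alpha/2) with 9 d.f.

# LSD = t(alpha/2) * sqrt( MSE * (1/ni + 1/nj) )
lsd = t_crit * math.sqrt(mse * (1 / ni + 1 / nj))

# Reject H0: mu_i = mu_j when |xbar_i - xbar_j| > LSD
reject = abs(xbar_i - xbar_j) > lsd
print(round(lsd, 2), reject)
```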
  o If the experimental units are heterogeneous, blocking can be used to form homogeneous groups, resulting in a randomized block design.
  o ANOVA Procedure:
    ▯ For a randomized block design the sum of squares total (SST) is partitioned into three groups: sum of squares due to treatments (SSTR), sum of squares due to blocks (SSBL), and sum of squares due to error (SSE).
    ▯ The total degrees of freedom, nT - 1, are partitioned such that k - 1 degrees of freedom go to treatments, b - 1 go to blocks, and (k - 1)(b - 1) go to the error term.
    ▯ Example: Crescent Oil has developed three new blends of gasoline and must decide which blend or blends to produce and distribute. A study of the miles-per-gallon ratings of the three blends is being conducted to determine if the mean ratings are the same for the three blends.
    ▯ Five automobiles have been tested using each of the three gasoline blends (the miles-per-gallon ratings appear on a slide not reproduced here).
    ▯ Mean Square Due to Treatments:
      • The overall sample mean is 29. Thus, SSTR = 5[(29.8 - 29)² + (28.8 - 29)² + (28.4 - 29)²] = 5.2
      • MSTR = 5.2/(3 - 1) = 2.6
    ▯ Mean Square Due to Blocks:
      • SSBL = 3[(30.333 - 29)² + . . . + (25.667 - 29)²] = 51.33
      • MSBL = 51.33/(5 - 1) = 12.83
    ▯ Mean Square Due to Error:
      • SSE = 62 - 5.2 - 51.33 = 5.47
      • MSE = 5.47/[(3 - 1)(5 - 1)] = .68
    ▯ Rejection Rule:
      • p-value approach: Reject H0 if p-value < .05
      • Critical value approach: Reject H0 if F > 4.46
    ▯ Test Statistic:
      • F = MSTR/MSE = 2.6/.68 = 3.82
    ▯ Conclusion:
      • The p-value is greater than .05 (where F = 4.46) and less than .10 (where F = 3.11). (Excel provides a p-value of .07.) Therefore, we cannot reject H0.
      • There is insufficient evidence to conclude that the miles-per-gallon ratings differ for the three gasoline blends.
- Factorial Experiment
  o In some experiments we want to draw conclusions about more than one variable or factor.
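The Crescent Oil arithmetic above can be re-checked from the summary values stated in the notes (overall mean 29; treatment means 29.8, 28.8, 28.4; SSBL = 51.33; SST = 62):

```python
# Re-derive the Crescent Oil randomized-block ANOVA table entries from the
# summary values stated in the notes (k = 3 blends, b = 5 cars).
k, b = 3, 5
grand_mean = 29.0
treatment_means = [29.8, 28.8, 28.4]

sstr = b * sum((m - grand_mean) ** 2 for m in treatment_means)
mstr = sstr / (k - 1)                 # mean square due to treatments

ssbl = 51.33                          # sum of squares due to blocks (given)
sst = 62.0                            # total sum of squares (given)
sse = sst - sstr - ssbl               # SST = SSTR + SSBL + SSE
mse = sse / ((k - 1) * (b - 1))       # mean square error

F = mstr / mse
print(round(mstr, 2), round(mse, 3), round(F, 2))
```

The notes round MSE to .68, which gives the reported F = 3.82; carrying full precision gives F ≈ 3.80. The conclusion (cannot reject H0 at α = .05) is the same either way.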
  o Factorial experiments and their corresponding ANOVA computations are valuable designs when simultaneous conclusions about two or more factors are required.
  o The term factorial is used because the experimental conditions include all possible combinations of the factors.
  o For example, for a levels of factor A and b levels of factor B, the experiment will involve collecting data on ab treatment combinations.
- Two-Factor Factorial Experiment
  o ANOVA Procedure:
    ▯ The ANOVA procedure for the two-factor factorial experiment is similar to the completely randomized experiment and the randomized block experiment.
    ▯ We again partition the sum of squares total (SST) into its sources: SST = SSA + SSB + SSAB + SSE.
    ▯ The total degrees of freedom, nT - 1, are partitioned such that (a - 1) d.f. go to Factor A, (b - 1) d.f. go to Factor B, (a - 1)(b - 1) d.f. go to Interaction, and ab(r - 1) go to Error.
  o Step 1: Compute the total sum of squares (SST).
  o Step 2: Compute the sum of squares for factor A (SSA).
  o Step 3: Compute the sum of squares for factor B (SSB).
  o Step 4: Compute the sum of squares for interaction (SSAB).
  o Step 5: Compute the sum of squares due to error: SSE = SST - SSA - SSB - SSAB.
- Example: A survey was conducted of hourly wages for a sample of workers in two industries at three locations in Ohio. Part of the purpose of the survey was to determine if differences exist in both industry type and location.
  o Factors:
    ▯ Factor A: Industry Type (2 levels)
    ▯ Factor B: Location (3 levels)
  o Replications:
    ▯ Each experimental condition is repeated 3 times.
  o Conclusions Using the Critical Value Approach:
    ▯ Industries: F = 4.19 < Fα = 4.75, so industry type is not significant.
    ▯ Locations: F = 4.69 > Fα = 3.89, so location is significant.
    ▯ Interaction: F = 1.55 < Fα = 3.89, so the interaction is not significant.

Bnad277: Chapter 14a Notes: Simple Linear Regression
- Managerial decisions often are based on the relationship between two or more variables.
- Regression analysis can be used to develop an equation showing how the variables are related.
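The degrees-of-freedom partition and the three F tests for the factorial wage example above can be written out directly; a, b, r and the F statistics and critical values are the ones given in the notes:

```python
# Degrees-of-freedom partition and F tests for a two-factor factorial design.
# a, b, r and the F values come from the hourly-wage example in the notes.
a, b, r = 2, 3, 3                      # industries, locations, replications
nT = a * b * r                         # 18 observations in total

df = {
    "Factor A":    a - 1,              # 1
    "Factor B":    b - 1,              # 2
    "Interaction": (a - 1) * (b - 1),  # 2
    "Error":       a * b * (r - 1),    # 12
}
assert sum(df.values()) == nT - 1      # the partition uses all nT - 1 d.f.

# (F statistic, critical value) pairs from the notes' ANOVA table
tests = {
    "Factor A":    (4.19, 4.75),
    "Factor B":    (4.69, 3.89),
    "Interaction": (1.55, 3.89),
}
results = {source: f_stat > f_crit for source, (f_stat, f_crit) in tests.items()}
print(results)   # only Location (Factor B) is significant
```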
- The variable being predicted is called the dependent variable and is denoted by y.
- The variables being used to predict the value of the dependent variable are called the independent variables and are denoted by x.
- Simple linear regression involves one independent variable and one dependent variable.
- The relationship between the two variables is approximated by a straight line.
- Regression analysis involving two or more independent variables is called multiple regression.
- Simple Linear Regression Model:
  o The equation that describes how y is related to x and an error term is called the regression model.
  o The simple linear regression model is: y = β0 + β1x + ε
    ▯ where β0 and β1 are called parameters of the model, and ε is a random variable called the error term.
  o The simple linear regression equation is: E(y) = β0 + β1x
  o The graph of the regression equation can show a positive linear relationship (β1 > 0), a negative linear relationship (β1 < 0), or no relationship (β1 = 0).
- Estimated Simple Linear Regression Equation: ŷ = b0 + b1x, where b0 and b1 are the sample estimates of β0 and β1.
- Estimation Process: sample data are used to compute b0 and b1, which estimate the unknown parameters β0 and β1.
- Least Squares Method:
  o Least Squares Criterion: choose b0 and b1 to minimize Σ(yi - ŷi)².
  o Slope for the Estimated Regression Equation: b1 = Σ(xi - x̄)(yi - ȳ) / Σ(xi - x̄)²
  o y-Intercept for the Estimated Regression Equation: b0 = ȳ - b1x̄
- Simple Linear Regression Example: Reed Auto periodically has a special week-long sale. As part of the advertising campaign Reed runs one or more television commercials during the weekend preceding the sale. Data from a sample of 5 previous sales were collected (shown on a slide not reproduced here).
- Estimated Regression Equation (for the example):
  o Slope for the Estimated Regression Equation
  o y-Intercept for the Estimated Regression Equation
  o Estimated Regression Equation
- Coefficient of Determination:
  o Relationship Among SST, SSR, SSE: SST = SSR + SSE
  o The coefficient of determination is: r² = SSR/SST
- Sample Correlation Coefficient: rxy = (sign of b1)√r²
- Assumptions About the Error Term ε: the error term is normally distributed with mean 0 and constant variance σ², and the errors are independent.
- Testing for Significance:
  o An Estimate of σ²:
    ▯ The mean square error (MSE) provides the estimate of σ², and the notation s² is also used: s² = MSE = SSE/(n - 2).
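The least squares formulas above (slope and y-intercept) can be sketched on a small made-up data set. This is not the Reed Auto data, which appears on a slide not reproduced in these notes:

```python
# Least squares estimates for the simple linear regression equation
# y-hat = b0 + b1*x, using the formulas from the notes.
# The data set is hypothetical (exactly linear, so the fit is perfect).
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]          # y = 1 + 2x with no noise

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# b1 = sum((xi - x_bar)(yi - y_bar)) / sum((xi - x_bar)^2)
b1 = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
      / sum((xi - x_bar) ** 2 for xi in x))
# b0 = y_bar - b1 * x_bar
b0 = y_bar - b1 * x_bar

print(b0, b1)   # → 1.0 2.0
```

Because the made-up data are exactly linear, SSE = 0 here and r² would equal 1; real data would leave a nonzero residual sum of squares.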
  o An Estimate of σ:
    ▯ s = √MSE provides the estimate of σ; s is referred to as the standard error of the estimate.
- Testing for Significance: t Test
  o Hypotheses: H0: β1 = 0 versus Ha: β1 ≠ 0
  o Test Statistic: t = b1/s_b1, where s_b1 = s/√Σ(xi - x̄)²
  o Rejection Rule: Reject H0 if p-value ≤ α, or if t ≤ -t(α/2) or t ≥ t(α/2), with n - 2 degrees of freedom.
  o Step 1: Determine the hypotheses.
  o Step 2: Specify the level of significance.
  o Step 3: Select the test statistic.
  o Step 4: State the rejection rule.
  o Step 5: Compute the value of the test statistic.
  o Step 6: Determine whether to reject H0.
- Confidence Interval for β1
  o We can use a 95% confidence interval for β1 to test the hypotheses just used in the t test.
  o H0 is rejected if the hypothesized value of β1 is not included in the confidence interval for β1.
  o The form of a confidence interval for β1 is: b1 ± t(α/2) s_b1
    ▯ where t(α/2) is the t value providing an area of α/2 in the upper tail of a t distribution with n - 2 degrees of freedom.
  o Rejection Rule: Reject H0 if 0 is not included in the confidence interval for β1.
  o Conclusion (for the example): 0 is not included in the 95% confidence interval, so reject H0.
- Testing for Significance: F Test
  o Hypotheses: H0: β1 = 0 versus Ha: β1 ≠ 0
  o Test Statistic: F = MSR/MSE
  o Rejection Rule: Reject H0 if p-value ≤ α or if F ≥ Fα, where Fα is based on 1 d.f. in the numerator and n - 2 d.f. in the denominator.
- Some Cautions About the Interpretation of Significance Tests:
  o Rejecting H0: β1 = 0 and concluding that the relationship between x and y is significant does not enable us to conclude that a cause-and-effect relationship is present between x and y.
  o Just because we are able to reject H0: β1 = 0 and demonstrate statistical significance does not enable us to conclude that there is a linear relationship between x and y.
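The t test for β1 and the matching confidence interval can be sketched with hypothetical numbers (b1, SSE, n, and Σ(xi - x̄)² below are made up; t_crit is the table value t(.025) with n - 2 = 3 degrees of freedom):

```python
# t test for H0: beta1 = 0 and the matching 95% confidence interval for beta1.
# All inputs are hypothetical; t_crit is the table value t(.025) with 3 d.f.
import math

b1 = 5.0                     # estimated slope
sse, n = 8.0, 5              # sum of squared errors and sample size
sxx = 4.0                    # sum of (xi - x_bar)^2

mse = sse / (n - 2)          # s^2, the estimate of sigma^2
s_b1 = math.sqrt(mse / sxx)  # estimated standard deviation of b1
t_stat = b1 / s_b1

t_crit = 3.182               # t(.025), n - 2 = 3 degrees of freedom
ci = (b1 - t_crit * s_b1, b1 + t_crit * s_b1)

# Reject H0 when |t| >= t_crit; equivalently, when 0 falls outside the interval
reject = abs(t_stat) >= t_crit
print(round(t_stat, 2), reject)
```

The two decision rules agree by construction: 0 lies outside the interval exactly when |t| exceeds t(α/2).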